Reflectometry: Fit honeycomb lattice

In this example, we demonstrate how to fit a more complex sample. For this purpose, we use the reflectometry data of an artificial magnetic honeycomb lattice published by A. Glavic et al. (https://doi.org/10.1002/advs.201700856).

The experiment was performed with polarized neutrons, but without polarization analysis. Since the magnetization of the sample was parallel to the neutron spin, there is no spin flip, and we can apply the scalar theory to this problem. This is primarily done to speed up computations: with the polarized computational engine, the fitting procedure takes roughly three times as long.

Experimental data

The experimental data consist of four datasets to be fitted simultaneously: the two polarization channels (spin up and spin down) of the incoming beam, each measured at two temperatures (300 K and 150 K).

All of this is measured on the same sample, so all parameters are assumed to be identical, except for the magnetization, which is temperature dependent. Therefore, we introduce a scaling parameter for the magnetization, defined as the ratio of the magnetizations at 150 K and 300 K: $M_{s150} = M_{150K} / M_{300K}$.
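In code, this scaling simply multiplies the magnetic SLD for the 150 K datasets. A minimal sketch with illustrative numbers (in the actual script, `ms150` and the magnetic SLDs are fit parameters):

```python
# Sketch of the temperature scaling of the magnetic SLD.
# All numbers are illustrative; in the script they are fit parameters.
msld_300 = 0.64             # magnetic SLD at 300 K, in units of 1e-6 / angstrom^2
ms150 = 1.05                # fitted ratio M_150K / M_300K
msld_150 = ms150 * msld_300 # magnetic SLD used for the 150 K datasets
```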

Magnetization model

To model a magnetic material, one can assign a magnetization vector to any material, as is demonstrated in the magnetic material tutorial. When a non-vanishing magnetization vector is specified for at least one layer in a sample, BornAgain will automatically utilize the polarized computational engine. This leads to lower performance, as the computations are more involved.

In this example, the magnetization is (anti)parallel to the neutron spin, and hence we instead parametrize the magnetic layers with an effective SLD that is the sum or difference of their nuclear and magnetic SLDs:

$$\rho_\pm = \rho_{\text{N}} \pm \rho_{\text{M}}$$

Here the $+$ is chosen for incoming neutrons with spin up and $-$ is chosen for spin down neutrons.
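As a quick numerical check, the two effective SLDs are just the sum and the difference of the nuclear and magnetic contributions (illustrative values; the actual SLDs are fit parameters in the script):

```python
# Effective SLDs rho_pm = rho_N +/- rho_M for the two spin states.
# Values are illustrative, in units of 1/angstrom^2.
rho_N = 4.62e-6   # nuclear SLD
rho_M = 0.64e-6   # magnetic SLD

rho_plus = rho_N + rho_M    # effective SLD seen by spin-up neutrons
rho_minus = rho_N - rho_M   # effective SLD seen by spin-down neutrons
```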

Computational model

We simulate this experiment by building a six-layer model: as usual, the top layer is the vacuum and the bottom layer is a silicon substrate. On top of the substrate, we place a thin silicon oxide layer, whose roughness and thickness are fitted. The SLDs of these three layers are taken from the literature and kept constant.

Then we model the lattice structure itself with three layers: two layers to account for density fluctuations in the $z$-direction, and another oxide layer on top. This lattice structure is assumed to be magnetic, and we fit all of its SLDs, magnetic SLDs, thicknesses and roughnesses. The magnetic SLD depends on the temperature of the dataset according to the scaling described above, where the $M_{s150}$ parameter is fitted.

All layers are modeled without absorption, i.e. without an imaginary part of the SLD. Furthermore, we apply a resolution correction as described in this tutorial, with a fixed value of $\Delta Q / Q = 0.018$. The experimental data are normalized to unity, but we still fit the intensity; this is necessary due to the resolution correction.
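The relative resolution translates into a pointwise Gaussian smearing width for each $Q$ value. A minimal numpy sketch of this idea (in the script, such a `dq` array is passed to BornAgain's `QzScan.setVectorResolution`):

```python
import numpy as np

# q axis as used in the example, in 1/nm
q = np.linspace(0.08, 1.4, 1500)

# pointwise Gaussian smearing width, fixed relative resolution dQ/Q = 0.018
dq = 0.018 * q
```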

Running a computation

In order to run a computation, we define some functions to utilize a common simulation function for the two spin channels and both temperatures:

def run_Simulation_300_p(qzs, params):
    return run_simulation(qzs, params, sign=1)

def run_Simulation_300_m(qzs, params):
    return run_simulation(qzs, params, sign=-1)

def run_Simulation_150_p(qzs, params):
    return run_simulation(qzs, params, sign=1, ms150=True)

def run_Simulation_150_m(qzs, params):
    return run_simulation(qzs, params, sign=-1, ms150=True)

Here, the given arguments specify whether the incoming beam is spin up (sign=1) or spin down (sign=-1), and whether the scaling of the magnetization should be applied (ms150=True). For the latter, True means that a dataset at 150 K is simulated, while False corresponds to 300 K, where the scaling parameter is set to unity.

All four reflectivity curves are then computed using:

q_300_p, r_300_p = qr(run_Simulation_300_p(qzs, paramsInitial))
q_300_m, r_300_m = qr(run_Simulation_300_m(qzs, paramsInitial))

q_150_p, r_150_p = qr(run_Simulation_150_p(qzs, paramsInitial))
q_150_m, r_150_m = qr(run_Simulation_150_m(qzs, paramsInitial))

We choose some sensible initial parameters, which yield the following simulation result:

Reflectivity with the initial parameters

SLD profile with the initial parameters

We have chosen the initial magnetization to be zero, hence there is only a single SLD curve for both spin directions.

Fitting

We fit this example by utilizing the differential evolution algorithm from SciPy. As a measure for the goodness of the fit, we use the relative difference:

$$\Delta = \sum_{j = 1}^4 \frac{1}{N_j} \sum_{i = 1}^{N_j} \left( \frac{d_{ji} - s_{ji}}{d_{ji} + s_{ji}} \right)^2$$

Here, the sum over $i$ accumulates the fitting error at every data point of a given dataset, and the sum over $j$ adds up the contributions of all four datasets. In the script below, this is implemented in objective_function, which performs the four simulations and sums their contributions; the result is then minimized by the differential evolution algorithm.
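The metric can be sketched as a standalone numpy function (a hypothetical helper mirroring the logic of the script, with `datasets` and `simulations` as lists of reflectivity arrays):

```python
import numpy as np

def relative_chi2(datasets, simulations):
    """Sum over datasets of the mean squared relative difference."""
    total = 0.0
    for d, s in zip(datasets, simulations):
        reldiff = (d - s) / (d + s)      # relative difference per data point
        total += np.sum(reldiff**2) / len(d)  # weight each dataset by 1/N_j
    return total
```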

The given uncertainties of the experimental data are not taken into account.

Fit Result

As usual, the fit can be run with the following command:

python3 Honeycomb_fit.py fit

On a four-core workstation, the fitting procedure takes roughly 45 minutes to complete, and we obtain the following result:

Reflectivity with the fit result

SLD profile with the fit result

As can be seen from the plot of the SLDs, the magnetization is indeed larger for the measurement at lower temperature, exactly as expected.

#!/usr/bin/env python3
"""
This example demonstrates how to fit a complex experimental setup using BornAgain.
It is based on real data published in  https://doi.org/10.1002/advs.201700856
by A. Glavic et al.
In this example we utilize the scalar reflectometry engine to fit polarized
data without spin-flip for performance reasons.
"""

import os

import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize

import bornagain as ba
from bornagain import angstrom, sample_tools as st

datadir = os.getenv('BA_DATA_DIR', '')

####################################################################
#  Sample and simulation model
####################################################################

def get_sample(P, sign, T):

    if T < 200:
        ms150 = P["ms150"]
    else:
        ms150 = 1

    material_Air = ba.MaterialBySLD("Air", 0, 0)
    material_PyOx = ba.MaterialBySLD("PyOx",
                               (P["sld_PyOx_real"] + \
                                 sign * ms150 * P["msld_PyOx"] )* 1e-6,
                               P["sld_PyOx_imag"] * 1e-6)
    material_Py2 = ba.MaterialBySLD("Py2",
                               ( P["sld_Py2_real"] + \
                                 sign * ms150 * P["msld_Py2"] ) * 1e-6,
                               P["sld_Py2_imag"] * 1e-6)
    material_Py1 = ba.MaterialBySLD("Py1",
                               ( P["sld_Py1_real"] + \
                                 sign * ms150 * P["msld_Py1"] ) * 1e-6,
                               P["sld_Py1_imag"] * 1e-6)
    material_SiO2 = ba.MaterialBySLD("SiO2", P["sld_SiO2_real"]*1e-6,
                                     P["sld_SiO2_imag"]*1e-6)
    material_Si = ba.MaterialBySLD("Substrate", P["sld_Si_real"]*1e-6,
                                   P["sld_Si_imag"]*1e-6)

    l_Air = ba.Layer(material_Air)
    l_PyOx = ba.Layer(material_PyOx, P["t_PyOx"]*angstrom)
    l_Py2 = ba.Layer(material_Py2, P["t_Py2"]*angstrom)
    l_Py1 = ba.Layer(material_Py1, P["t_Py1"]*angstrom)
    l_SiO2 = ba.Layer(material_SiO2, P["t_SiO2"]*angstrom)
    l_Si = ba.Layer(material_Si)

    rPyOx = ba.LayerRoughness(P["rPyOx"]*angstrom)
    rPy2 = ba.LayerRoughness(P["rPy2"]*angstrom)
    rPy1 = ba.LayerRoughness(P["rPy1"]*angstrom)
    rSiO2 = ba.LayerRoughness(P["rSiO2"]*angstrom)
    rSi = ba.LayerRoughness(P["rSi"]*angstrom)

    sample = ba.MultiLayer()

    sample.addLayer(l_Air)
    sample.addLayerWithTopRoughness(l_PyOx, rPyOx)
    sample.addLayerWithTopRoughness(l_Py2, rPy2)
    sample.addLayerWithTopRoughness(l_Py1, rPy1)
    sample.addLayerWithTopRoughness(l_SiO2, rSiO2)
    sample.addLayerWithTopRoughness(l_Si, rSi)

    sample.setRoughnessModel(ba.RoughnessModel.NEVOT_CROCE)

    return sample


def run_simulation(qaxis, P, *, sign, T):

    qdistr = ba.DistributionGaussian(0., 1., 25, 3.)

    dq = P["dq"]*qaxis
    scan = ba.QzScan(qaxis)
    scan.setVectorResolution(qdistr, dq)
    scan.setIntensity(P["intensity"])

    sample = get_sample(P, sign, T)

    simulation = ba.SpecularSimulation(scan, sample)
    simulation.setBackground(ba.ConstantBackground(5e-7))

    return simulation.simulate().npArray()

####################################################################
#  Experimental data
####################################################################

def load_data(fname, qmin, qmax):
    fpath = os.path.join(datadir, fname)
    flags = ba.ImportSettings1D("q (1/angstrom)", "#", "", 1, 3, 4, 5)
    data = ba.readData1D(fpath, ba.csv1D, flags)
    data = data.normalizedToMax()
    return data.crop(qmin, qmax)

####################################################################
#  Plotting
####################################################################

def plot(q, rs, data, shifts, labels):
    """
    Plot the simulated result together with the experimental data.
    """
    fig = plt.figure()
    ax = fig.add_subplot(111)

    for r, exp, shift, l in zip(rs, data, shifts, labels):

        ax.errorbar(exp.npXcenters(),
                    exp.npArray()/shift,
                    yerr=exp.npErrors()/shift,
                    fmt='.',
                    markersize=0.75,
                    linewidth=0.5)

        ax.plot(q, r/shift, label=l)

    ax.set_yscale('log')
    plt.legend()

    plt.xlabel(r"$q\; $(nm$^{-1}$)")
    plt.ylabel("$R$")

    plt.tight_layout()
    #plt.close()


def plot_sld_profile(P):

    z_300p, sld_300p = st.materialProfile(get_sample(P, +1, 300))
    z_300m, sld_300m = st.materialProfile(get_sample(P, -1, 300))
    z_150p, sld_150p = st.materialProfile(get_sample(P, +1, 150))
    z_150m, sld_150m = st.materialProfile(get_sample(P, -1, 150))

    plt.figure()
    plt.plot(z_300p, np.real(sld_300p)*1e6, label=r"300K $+$")
    plt.plot(z_300m, np.real(sld_300m)*1e6, label=r"300K $-$")
    plt.plot(z_150p, np.real(sld_150p)*1e6, label=r"150K $+$")
    plt.plot(z_150m, np.real(sld_150m)*1e6, label=r"150K $-$")

    plt.xlabel(r"$z\;$(Å)")
    plt.ylabel(r"$\delta(z) \cdot 10^6$")

    plt.legend()
    plt.tight_layout()
    #plt.close()

####################################################################
#  Main
####################################################################

if __name__ == '__main__':

    # Parameters and bounds.

    # We start with rather good values so that the example takes not too much time
    startPnB = {
        "intensity": (0.5, 0.4, 0.6),
        "t_PyOx": (77, 60, 100),
        "t_Py2": (56, 46, 66),
        "t_Py1": (56, 46, 66),
        "t_SiO2": (22, 15, 29),
    }

    # For fixed parameters, bounds are ignored. We leave them here just
    # to facilitate moving entries between startPnB and fixedPnB.
    fixedPnB = {
        "sld_PyOx_imag": (0, 0, 0),
        "sld_Py2_imag": (0, 0, 0),
        "sld_Py1_imag": (0, 0, 0),
        "sld_SiO2_imag": (0, 0, 0),
        "sld_Si_imag": (0, 0, 0),
        "sld_SiO2_real": (3.47, 3, 4),
        "sld_Si_real": (2.0704, 2, 3),
        "dq": (0.018, 0, 0.1),
    # Start by moving the following back to startPnB:
        "sld_PyOx_real": (1.995, 1.92, 2.07),
        "sld_Py2_real": (5, 4.7, 5.3),
        "sld_Py1_real": (4.62, 4.32, 4.92),
        "rPyOx": (27, 15, 35),
        "rPy2": (12, 2, 20),
        "rPy1": (12, 2, 20),
        "rSiO2": (15, 5, 25),
        "rSi": (15, 5, 25),
        "msld_PyOx": (0.25, 0, 1),
        "msld_Py2": (0.63, 0, 1),
        "msld_Py1": (0.64, 0, 1),
        "ms150": (1.05, 1.0, 1.1),
    }

    fixedP = {d: v[0] for d, v in fixedPnB.items()}
    P = {d: v[0] for d, v in startPnB.items()} | fixedP
    bounds = [(par[1], par[2]) for par in startPnB.values()]
    freeParNames = [name for name in startPnB.keys()]

    # Restrict the q range for fitting and plotting
    qmin = 0.08
    qmax = 1.4

    data = [
        load_data("specular/honeycomb300p.dat", qmin, qmax),
        load_data("specular/honeycomb300m.dat", qmin, qmax),
        load_data("specular/honeycomb150p.dat", qmin, qmax),
        load_data("specular/honeycomb150m.dat", qmin, qmax)]

    simFunctions = [
        lambda q, P: run_simulation(q, P, sign=+1, T=300),
        lambda q, P: run_simulation(q, P, sign=-1, T=300),
        lambda q, P: run_simulation(q, P, sign=+1, T=150),
        lambda q, P: run_simulation(q, P, sign=-1, T=150)]

    qzs = np.linspace(qmin, qmax, 1500) # x-axis for plot R vs q

    # Plot data with initial model

    simResults = [ f(qzs, P) for f in simFunctions ]
    plot(qzs, simResults, data, [1, 1, 10, 10],
         ["300K $+$", "300K $-$", "150K $+$", "150K $-$"])
    plot_sld_profile(P)

    # Fit

    qaxes = [d.npXcenters() for d in data]
    rdata = [d.npArray() for d in data]

    def par_dict(*args):
        return {name: value for name, value in zip(freeParNames, *args)} | fixedP

    def objective_function(*args):
        """
        Returns fit objective, i.e. sum of weighted squared relative differences.
        """
        fullP = par_dict(*args)
        result = 0
        for q, r, sim_fct in zip(qaxes, rdata, simFunctions):
            t = sim_fct(q, fullP)
            reldiff = (r - t) / (r + t)
            result += np.sum(reldiff**2/len(t))
        return result

    result = scipy.optimize.differential_evolution(
        objective_function,
        bounds,
        maxiter=5, # for a serious DE fit, choose 500
        popsize=3, # for a serious DE fit, choose 10
        tol=1e-2,
        mutation=(0.5, 1.5),
        seed=0,
        disp=True,
        polish=True
    )

    print(f"Final chi2: {result.fun}")
    print("Fit Result:")
    for name, value in zip(freeParNames, result.x):
        print(f'   {name} = {value}')

    # Plot data with fit result

    P = par_dict(result.x)

    simResults = [f(qzs, P) for f in simFunctions]
    plot(qzs, simResults, data, [1, 1, 10, 10],
         ["300K $+$", "300K $-$", "150K $+$", "150K $-$"])
    plot_sld_profile(P)

    plt.show()
auto/Examples/fit/specular/Honeycomb_fit.py

Data to be fitted: honeycomb150m.dat, honeycomb150p.dat, honeycomb300m.dat, honeycomb300p.dat