JOURNAL OF APPLIED CRYSTALLOGRAPHY
a Division of Materials Physics, Department of Physics and Astronomy, Uppsala University, Box 516, 751 21 Uppsala, Sweden. *Correspondence e-mail: [email protected]
This article demonstrates the feasibility of obtaining accurate pair distribution functions of thin amorphous films down to 80 nm, using modern laboratory-based X-ray sources. The pair distribution functions are obtained using a single diffraction scan without the requirement of additional scans of the substrate or of the air. By using a crystalline substrate combined with an oblique scattering geometry, most of the Bragg scattering of the substrate is avoided, rendering the substrate Compton scattering the primary contribution. By utilizing a discriminating energy filter, available in the latest generation of modern detectors, it is demonstrated that the Compton intensity can further be reduced to negligible levels at higher wavevector values. Scattering from the sample holder and the air is minimized by the systematic selection of pixels in the detector image based on the projected detection footprint of the sample and the use of a 3D-printed sample holder. Finally, X-ray optical effects in the absorption factors and the ratios between the Compton intensity of the substrate and film are taken into account by using a theoretical tool that simulates the electric field inside the film and the substrate, which aids in planning both the sample design and the measurement protocol.
Keywords: pair distribution functions; thin films; laboratory X-ray tubes.
Many thin film systems can exhibit exotic phases whose thermal stability relies on adhesion to the substrate and/or the finite size of the system itself, meaning that they are hard or impossible to synthesize in the bulk. Thin film techniques, owing to their fast quenching rates, give access to a wider amorphous composition range than is achievable with standard bulk techniques. It is therefore of great interest to study the structural aspects of these thin films, for which obtaining their pair distribution function (PDF) is key. Thin film PDFs (tPDFs) are also valuable because they can serve as an initial characterization and a starting point for more advanced studies at synchrotron facilities, which may lower the load on those instruments.
A schematic illustration of the GI setup, involving a fixed incidence angle ω, a source slit and a variable detector angle β. The detector slit can be adjusted to maintain a constant detected sample area. The color-highlighted region represents the β-dependent air-scattering volume.
Hence, in this work, we investigate the necessary measurement conditions and outline the procedures required to obtain accurate PDFs from sub-micrometre metallic glass films, down to thicknesses of at least 80 nm, with laboratory-based X-ray sources. Utilizing recent advancements in X-ray optics and modern detectors, together with innovations in data analysis and sample design, the procedure becomes eminently straightforward. We demonstrate that it is possible to suppress, or even eliminate, the coherent substrate signal by utilizing a crystalline substrate. By orienting the substrate so as to avoid any Bragg reflections, the film signal can be isolated at the measurement step without requiring post-processing substrate-reduction techniques. Lastly, we examine the ultimate film thickness limit at which reliable amorphous PDFs can be determined in a laboratory environment, before the scattered signal from the film is overshadowed by background noise. We have assessed existing practices and techniques of GI diffraction and developed new ones, which are collected and summarized in this work.
The measured intensity can be decomposed as

I_total(2θ) = I_f,coh(2θ) + I_f,com(2θ) + I_s,coh(2θ) + I_s,com(2θ) + I_air(2θ) + I_holder(2θ) + I_dark + I_fluorescence(2θ),

where I_f,coh(2θ) + I_f,com(2θ) is the coherent and incoherent (Compton) scattered intensity from the film, I_s,coh(2θ) + I_s,com(2θ) is the coherent and Compton scattering from the substrate, I_air(2θ) is the intensity from air scattering, I_holder(2θ) is any stray contribution from the instrument or sample holder, I_dark represents dark counts, and I_fluorescence(2θ) is the fluorescent intensity. We have neglected small-angle scattering and multiple scattering, as these are assumed to be small in thin films; they may be included using standard formulas and techniques. For the other contributions, owing to the GI geometry, the illuminated area is fixed by the choice of incidence angle, source slit, mirror and beam divergence. Preferably, one would like to set an incidence angle which maximizes the thin film signal while suppressing the influence of the substrate, sample holder and air.
To minimize the influence of film and substrate fluorescence, we used the 1Der detector from Malvern Panalytical, with electronic photon-energy filtering windows as narrow as 340 eV (the Lynxeye XET from Bruker is another choice). Using modern electronic energy discrimination in this way also avoids the need for Ni (for Cu Kα) or Zr (for Mo Kα) filters to reduce the Kβ radiation. Furthermore, the discrimination replaces a secondary monochromator, which significantly improves the measured count rate. The energy filter also eliminates much of the Bremsstrahlung.
Since thin films are not always deposited on large wafers but quite often on square substrates of either 10 × 10 or 20 × 20 mm, it is important, given the small angles involved in GI, to minimize any spurious scattering from the instrument or sample holder. To this end, an in-house 3D-printed holder was designed, in which the contact area with the supported sample is significantly smaller than the sample itself. The film is held in place by air suction, eliminating the need for adhesives or clamps that could partially shadow the sample surface and give rise to additional parasitic scattering, and further limiting sample-holder signal contamination. The holder was manufactured on a Form 2 stereolithographic printer, using a rigid resin from the Engineering Resin family of materials available from Formlabs (https://formlabs.com/eu/). An illustration and/or computer-aided design files are available upon request. A beam mask and a Soller slit, combined with the focusing mirror and the divergence slit, kept the over-illuminated area of the sample, as well as the illuminated volume of air, to a minimum.
(a) Measured air (open circles) and sample (filled circles) intensity, with and without variable 1D pixel detector selection, where the number of averaged pixels is chosen to maintain a constant detection footprint throughout the scan. (b) The 1D detector image, together with the condition for a constant detection footprint outlined as the dotted line. Owing to the use of the parallel collimator, intensity striping appears.
Series of GI X-ray diffraction scans with different choices of incidence angle, normalized to the first local maximum of the ω = 0.17° scan. The red dashed lines correspond to the Compton intensity of the substrate. The gray regions highlight the presence of Bragg peaks at high angles, which are eliminated at lower ω angles.
The advantage of filtering out the Compton scattering is that, for thinner films, the high-angle signal from the film is otherwise drowned out by the Compton scattering; even if this is compensated for using the known theoretical signal from the substrate, the fluctuations associated with the Poisson noise would always remain. Furthermore, owing to optical effects near the critical angle, it is not completely straightforward to know beforehand the exact ratio of substrate-to-film Compton scattering.
(a) The electric-field intensity in the sample stack. Region (I) represents the air above the sample, (II) the protective Al₂O₃ layer, (III) the amorphous V–Zr layer and (IV) the substrate. The inset (b) shows the predicted substrate-to-film absorption ratio at different incidence angles ω, compared with the corresponding measured and fitted Compton-profile prefactor ratio, shown as red solid circles.
Low-angle ω scans with the detector angle fixed at the first intensity maximum (see Fig. 6) of the V–Zr metallic glass films, with thicknesses of 324, 162, 81, 41 and 10 nm. The flat region ω < 0.1° corresponds to conditions below the critical angle, and the intensity decrease seen in the region ω > 0.2° is due to over-illumination and Compton scattering from the substrate.
We note that the intensity ratio between the maximum and the intensity at 1° decreases as the film becomes thinner, since the Compton scattering from the substrate starts to dominate. In principle, it is the ratio of the film scattering to the substrate scattering that should be maximized, not only the film scattering. We found by trial and error that choosing the incidence angle slightly lower than the angle indicated by the maximum is a good choice.
We are now in a position to combine all these considerations, i.e. utilizing a crystalline substrate, tuning the detector opening to a constant detector footprint, minimizing scattering from the sample holder and fluorescence, and carefully choosing the incidence angle. What remains are the standard corrections for polarization, depth absorption profiles and known Compton contributions to equate the measured intensity to the elastic coherent scattering of the metallic glass film.
After subtracting dark counts and suppressing fluorescence, air scattering and coherent scattering from the substrate, one obtains the relation for the coherent scattering of the film, I_f,coh, as
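As a bookkeeping sketch of the subtraction described above (a schematic illustration only — the background-term names are ours, and the real reduction also applies the polarization, absorption and footprint corrections discussed in the text):

```python
import numpy as np

def film_coherent(i_total, backgrounds):
    """Schematic subtraction: remove dark counts and each modeled background
    (e.g. substrate Compton, film Compton, air, holder, fluorescence) from
    the measured intensity, leaving the coherent film scattering. All inputs
    are intensities on a common 2-theta grid."""
    i = np.asarray(i_total, dtype=float).copy()
    for contribution in backgrounds.values():
        i -= np.asarray(contribution, dtype=float)
    return i

# Toy check: subtracting known contributions recovers the film signal.
film = np.array([5.0, 4.0, 3.0])
bg = {"dark": np.array([1.0, 1.0, 1.0]), "s_com": np.array([2.0, 2.0, 2.0])}
total = film + bg["dark"] + bg["s_com"]
recovered = film_coherent(total, bg)
```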
The measured normalized scattering intensity in electron units, with (open squares) and without (solid circles) energy filtering. The corresponding Compton profiles are shown in the left inset and the applied correction factors in the right inset. The black dashed line is ⟨f²⟩, the red dashed line corresponds to the Compton intensity from the film and the solid red line represents the Compton intensity of the substrate.
The reduced PDF G(r) can be converted into the true PDF g(r) using g(r) = G(r)/(4πρr) + 1, where ρ is the atomic density.
The structure factors S(Q) from amorphous V–Zr of nominal thicknesses 324, 162, 81, 41 and 10 nm. The solid circles are measured using a wide energy-discriminator window, while the open squares are measured with a narrow energy window. The gray regions indicate the presence of minor stray Bragg peaks. The gray solid lines correspond to the theoretical S(Q) computed via density functional theory. The structure factors have been shifted vertically for clarity.
At this stage, we can see excellent agreement between the experimental results and theory, especially for the 324 nm-thick sample. By using the theoretical predictions as a figure of merit, we can see that we have good agreement and consistency in a region Q < 6 Å −1 for virtually all sample thicknesses except for the 10 nm one. However, what is also apparent is that the data for Q > 7 Å −1 become more and more noisy, and the severity scales with the diminishing thickness of the samples. Minor substrate Bragg peaks make their appearance in this region, which we were unable to completely eliminate. This might be remedied by another surface cut or choice of substrate. Data points corresponding to the crystalline peaks have been removed as indicated by the gray lines. Even if the Compton profile is accurately subtracted for the unfiltered scan, the fluctuations associated with the shot noise can never be removed. Therefore it is more desirable to filter out the Compton scattering if possible.
The reduced PDFs G(r) from amorphous V–Zr of nominal thicknesses 324, 162, 81, 41 and 10 nm, determined from the set of structure factors measured with a narrow energy window and transformed up to Q_max = 11.5, 11.5, 11.5, 8 and 7 Å⁻¹, respectively. The gray dots are the theoretical predictions. The reduced PDFs have been shifted for clarity.
We have shown that the PDF of thin metallic glasses can be measured down to at least 80 nm through systematic improvements in several areas of how PDFs are measured in the GI geometry. Since the X-ray scattering power of a material is proportional to the square of the atomic number, and the studied films have an average atomic number of about 34, the method is applicable to a wide range of materials. Further improvements can be gained by a judicious choice of substrate, higher quantum efficiency and a larger detector window. Small additional gains may be found by using a parallel-plate collimator with a larger angular divergence. These results are useful for the structural analysis of thin films, which may exhibit correlations different from those found in the bulk. The method can also serve as pre-screening for further studies at synchrotron facilities, where the PDF can be measured much faster and with higher real-space resolution. These findings clear the path for determining thin film PDFs with standard modern diffraction equipment.
The incidence angle ω and detector angle β yield the scattering angle 2θ = ω + β, which can be expressed in terms of its corresponding wavevector modulus Q = (4π/λ) sin θ,
where λ is the wavelength of the X-ray beam.
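A minimal sketch of this angle-to-Q conversion (the function name and the use of the Cu Kα wavelength in the example are our assumptions, not values from the text):

```python
import math

def q_from_angles(omega_deg, beta_deg, wavelength):
    """Wavevector modulus Q = (4*pi/lambda) * sin(theta), with 2*theta = omega + beta.
    Angles in degrees, wavelength in angstrom, Q in inverse angstrom."""
    two_theta = math.radians(omega_deg + beta_deg)
    return 4.0 * math.pi / wavelength * math.sin(two_theta / 2.0)

# Example: Cu K-alpha (1.5406 A), omega = 0.17 deg, detector at beta = 39.83 deg,
# so that 2*theta = 40 deg.
q = q_from_angles(0.17, 39.83, 1.5406)
```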
G(r) = 4πρr [g(r) − 1], in which ρ corresponds to the atomic density of the material.
The reduced PDF for sufficiently small r, i.e. r ≤ r₀ (below which g(r) = 0), is linear and proportional to the density: G(r) = −4πρr.
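The small-r behaviour gives a practical consistency check: fitting the slope of G(r) below r₀ yields the density. A sketch using synthetic data and an illustrative density (the function names and numbers are ours):

```python
import numpy as np

def g_from_reduced_pdf(r, G, rho):
    """Convert the reduced PDF G(r) = 4*pi*rho*r*(g(r) - 1) to the true PDF g(r)."""
    r = np.asarray(r, dtype=float)
    return 1.0 + G / (4.0 * np.pi * rho * r)

def density_from_slope(r, G, r0):
    """Estimate the atomic density from the linear region G(r) = -4*pi*rho*r, r <= r0."""
    mask = r <= r0
    slope = np.polyfit(r[mask], G[mask], 1)[0]  # least-squares slope of G vs r
    return -slope / (4.0 * np.pi)

# Synthetic round-trip check: G(r) built from a known density.
rho_true = 0.06  # atoms per cubic angstrom (illustrative value)
r = np.linspace(0.05, 2.0, 200)
G = -4.0 * np.pi * rho_true * r  # pure small-r limit, i.e. g(r) = 0 below r0
rho_est = density_from_slope(r, G, r0=2.0)
g = g_from_reduced_pdf(r, G, rho_true)
```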
The atomic form factors are parametrized as f(Q) = Σ_j a_j exp[−b_j (Q/4π)²] + c, in which a, b, c are the parametrization coefficients of the form factor of a particular atom.
For a homogeneous alloy, one can approximate ⟨f²⟩ and ⟨f⟩² as mole-fraction-weighted sums over the constituent species, following ⟨f²⟩ = Σ_{i=1}^{n} c_i f_i² and ⟨f⟩² = (Σ_{i=1}^{n} c_i f_i)², where n is the number of elements in the alloy and c_i is the mole fraction of element i.
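These weighted averages can be evaluated, for instance, as follows; the form-factor coefficients below are placeholders chosen for illustration, not tabulated values for any real element:

```python
import numpy as np

def form_factor(Q, a, b, c):
    """Form factor of the type f(Q) = sum_j a_j * exp(-b_j * (Q/(4*pi))**2) + c."""
    s2 = (np.asarray(Q, dtype=float) / (4.0 * np.pi)) ** 2
    return sum(aj * np.exp(-bj * s2) for aj, bj in zip(a, b)) + c

def alloy_averages(Q, fractions, params):
    """Mole-fraction-weighted <f^2> and <f>^2 for a homogeneous alloy."""
    f = np.array([form_factor(Q, *p) for p in params])  # shape (n_elements, n_Q)
    c = np.asarray(fractions, dtype=float)[:, None]
    mean_f2 = np.sum(c * f ** 2, axis=0)       # <f^2> = sum_i c_i f_i^2
    mean_f_sq = np.sum(c * f, axis=0) ** 2     # <f>^2 = (sum_i c_i f_i)^2
    return mean_f2, mean_f_sq

Q = np.linspace(0.5, 12.0, 50)
params = [([10.0, 5.0], [4.0, 1.0], 0.5),   # hypothetical element A
          ([20.0, 8.0], [5.0, 0.8], 1.0)]   # hypothetical element B
mean_f2, mean_f_sq = alloy_averages(Q, [0.5, 0.5], params)
```

By the Cauchy–Schwarz inequality, ⟨f²⟩ ≥ ⟨f⟩² at every Q, with equality for a single-element material.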
Here Z_i is the atomic number of element i. The Compton intensity of the layers was approximated as a mole-fraction-weighted sum of (I_com)_i, together with the characteristic wavelength shift of the Compton spectrum, according to λ′ = λ + (h/(m_e c)) (1 − cos 2θ), in which h is Planck's constant, m_e is the electron mass and c is the speed of light.
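A short numerical sketch of this Compton wavelength shift, expressed as the energy separation between the elastic line and the Compton peak (the helper names are ours). For Cu Kα at a high scattering angle the separation amounts to a couple of hundred eV, which illustrates why a narrow detector discrimination window can reject much of the Compton signal at high angles:

```python
import math

H = 6.62607015e-34      # Planck constant, J s
M_E = 9.1093837015e-31  # electron mass, kg
C = 2.99792458e8        # speed of light, m/s
HC_EV_ANGSTROM = 12398.42  # h*c in eV angstrom

def compton_shifted_wavelength(lam_angstrom, two_theta_deg):
    """lambda' = lambda + (h / (m_e c)) * (1 - cos(2*theta)), in angstrom."""
    compton_wavelength = H / (M_E * C) * 1e10  # ~0.02426 angstrom
    return lam_angstrom + compton_wavelength * (1.0 - math.cos(math.radians(two_theta_deg)))

def compton_energy_shift_ev(lam_angstrom, two_theta_deg):
    """Energy separation between the elastic line and the Compton peak, in eV."""
    lam2 = compton_shifted_wavelength(lam_angstrom, two_theta_deg)
    return HC_EV_ANGSTROM / lam_angstrom - HC_EV_ANGSTROM / lam2

# Cu K-alpha (1.5406 A) at 2*theta = 140 degrees.
shift = compton_energy_shift_ev(1.5406, 140.0)
```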
The linear attenuation coefficient can be constructed as
where r_e is the classical electron radius.
The polarization correction for an unpolarized incident beam is P(2θ) = (1 + cos² 2θ)/2,
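Assuming the standard unpolarized-beam factor (1 + cos² 2θ)/2 (the function name is ours), the correction can be coded as:

```python
import math

def polarization_correction(two_theta_deg):
    """Polarization factor P = (1 + cos^2(2*theta)) / 2 for an unpolarized beam."""
    c = math.cos(math.radians(two_theta_deg))
    return (1.0 + c * c) / 2.0

# P ranges from 1 in the forward direction down to 0.5 at 2*theta = 90 degrees.
```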
The footprint correction factor C depends on the detector geometry, where d is the detector slit width and J is the illuminated footprint encompassed within the largest projected detector opening d_max. If the detector slit, or the pixels in the detector image as presented in this work, can be constructed to follow d(β) = J sin β,
then C = 1, and no correction is needed.
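A sketch of this constant-footprint condition (function and variable names are ours): the detector opening is scaled with sin β and capped at the physical maximum, so that the projected footprint d/sin β stays constant whenever the cap is not reached:

```python
import math

def slit_for_constant_footprint(beta_deg, footprint_mm, d_max_mm):
    """Detector opening d(beta) = footprint * sin(beta), capped at the physical
    maximum d_max. The projected footprint d / sin(beta) then stays constant
    as long as the cap is not reached."""
    d = footprint_mm * math.sin(math.radians(beta_deg))
    return min(d, d_max_mm)

# Projected footprint over a scan: constant at 10 mm while d < d_max.
fp = [slit_for_constant_footprint(b, 10.0, 6.0) / math.sin(math.radians(b))
      for b in (10, 20, 30)]
```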
(a) The normalized scattering intensity in electron units of bulk SiO₂, with the energy filter (blue) and without the energy filter (red). (b) The respective Compton profiles of the two scans, with the resulting explicit filtered profile in black, determined via their difference. (c) The structure factors of the respective scans, with the traced structure-factor data from Mozzi & Warren (1969) represented by the solid black line.
The authors acknowledge a productive partnership with Malvern Panalytical.
The authors acknowledge the Swedish Research Council (VR, Vetenskapsrådet) grant No. 2018-05200 for funding the work and the Swedish Energy Agency grant No. 2020-005212. GKP also acknowledges the Carl-Tryggers Foundation grant Nos. CTS 17:350 and CTS 19:272 for financial support. The computations were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC), partially funded by the Swedish Research Council through grant agreement No. 2018-05973.
This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence , which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.
The early work on the theory and some experimental aspects of measuring PDFs from supported and free-standing films can be traced to Wagner (1969), who emphasized the use of a free-standing film or a substrate with as low an atomic number as possible. Eguchi et al. (2010) determined the PDFs of 500 nm-thick amorphous indium zinc oxide films utilizing an X-ray tube with Mo K α radiation.