Compute xrd, c factor and 2Theta

Hi Coleman,
I want to use compute xrd to calculate XRD patterns in LAMMPS.

I went through the compute page

I have some doubts regarding the compute. I am more of an experimentalist, so some of these questions may be basic.

  1. Usually the XRD pattern is obtained with respect to a surface, i.e., 2Theta is measured with respect to the sample surface. But the compute documentation does not mention the reference frame in which 2Theta is measured when the XRD pattern is computed. Also, for the same sample, is it possible to vary the plane in the simulation cell with respect to which 2Theta is measured?

  2. Could you shed some light on the ‘c’ keyword and how to select its value?

  3. Since I want to measure the XRD pattern for dump files I already have, will a run 1 with fix ave/histo do the job, or do I have to run the simulation again?

Thanks and Regards

Anuj Bisht

Hi Anuj

Thanks for your questions. I’ll try to respond here:

As it is written now, compute xrd was set up to generate powder diffraction patterns. It computes diffraction intensities in 3D reciprocal space that are fed to fix ave/histo/weight, which performs a spherical integration assuming that all orientations are equally probable. Thus, questions about the reference frame with respect to the surface of incidence are not applicable (yet). I do intend to further develop compute xrd so that one can output more information – similar to compute saed – which could enable different interrogations of the data. Said another way, if you want to see 2D or 3D diffraction data at this time, you will need to look into compute saed.
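As a concrete sketch of that pipeline (the values here are illustrative, not a recommendation: the wavelength is Cu K-alpha in Angstroms, Al and O stand in for your atom types, and the 2Theta range is in the same units as the histogram bounds):

```
# Compute diffraction intensities over a reciprocal-space mesh
compute        xrd all xrd 1.541838 Al O 2Theta 0.087 0.87 c 1 1 1 LP 1

# Spherically integrate: bin intensities (c_xrd[2]) against the
# 2Theta value of each reciprocal point (c_xrd[1])
fix            1 all ave/histo/weight 1 1 1 0.087 0.87 250 c_xrd[1] c_xrd[2] mode vector file Rad2Theta.xrd
```

The histogram lower/upper bounds and the 2Theta range of the compute should match, and the bin count (250 here) sets the angular resolution of the output pattern.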

The c parameters (c1 c2 c3) in both compute xrd and compute saed control the spacing along the x*, y*, and z* directions of the reciprocal space grid. Their exact meaning can indicate two things, depending on whether the manual keyword is included as an option in the compute xrd command.

a) If the manual keyword is not included (the default), then the spacing along the x*, y*, and z* directions of reciprocal space will be c1/X, c2/Y, and c3/Z, where X, Y, and Z are the dimensions of the simulation box. Note that we use the inverse of the simulation dimensions to help ensure that points examined within the reciprocal space mesh lie close to points of Bragg reflection. Without the manual keyword, I’d suggest starting with c values of 1 1 1 – if your simulation is very large, use c values greater than 1; if your simulation is small, the c values can be less than 1.
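For case (a), a starting point might look like this (illustrative values; the single element symbol Ni is a placeholder for your atom types):

```
# No manual keyword: mesh spacing is c1/X, c2/Y, c3/Z,
# i.e. tied to the simulation box dimensions
compute xrd all xrd 1.541838 Ni 2Theta 0.5 1.5 c 1 1 1 LP 1
```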

b) If the manual keyword is used in the compute xrd command, then the c values are the spacing of the reciprocal space mesh directly (in inverse distance units). So the mesh points will be spaced by c1, c2, and c3 in the x*, y*, and z* directions. This option was added to easily fix the grid spacing in order to guarantee that the same points of reciprocal space are sampled regardless of simulation size. With the manual keyword included, I’d suggest starting with c values of 0.01 0.01 0.01.
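And the corresponding sketch for case (b), again with placeholder values:

```
# manual keyword: mesh spacing is exactly c1, c2, c3 in inverse
# distance units, independent of the simulation box size
compute xrd all xrd 1.541838 Ni 2Theta 0.5 1.5 c 0.01 0.01 0.01 LP 1 manual
```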

Please note that you should optimize the c parameters for your particular problem. The finer the spacing, the greater the sampling of reciprocal space, and thus the ‘better’ the representation. However, this comes at the cost of computing the structure factor at each of those points. So to optimize, I’d suggest starting with a fairly coarse grid spacing, then seeing how much the pattern changes as you go to finer spacings – compare that to the computational cost and decide what is best for your situation.

Please also note that the echo keyword of compute xrd (and compute saed) prints out some debugging information for just this purpose. With the echo keyword included, the number of reciprocal lattice points included in the calculation is reported to standard output before the computation takes place. In addition, the computes report their progress in 10% increments.

Assuming that your dump files contain the atom types and positions, you can definitely use them as input for computing the diffraction patterns – I do this a lot. In fact, you can even use “run 0”.
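A minimal post-processing input along those lines might look like this (the file names, timestep, element symbols, and units are assumptions you would adapt to your own system):

```
units          metal
# System definition must match the dump file; data.snap is hypothetical
read_data      data.snap
# Overwrite positions with a snapshot from an existing dump file
read_dump      dump.myrun.lammpstrj 10000 x y z box yes

compute        xrd all xrd 1.541838 Al O 2Theta 0.087 0.87 c 1 1 1 LP 1 echo
fix            1 all ave/histo/weight 1 1 1 0.087 0.87 250 c_xrd[1] c_xrd[2] mode vector file snapshot.xrd

run            0
```

Repeating the read_dump / run 0 pair for successive timesteps lets you generate a pattern per snapshot without rerunning the MD.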

There is an added benefit to computing diffraction patterns (compute xrd & compute saed) separately from the MD simulation in that you can reduce the overall memory usage by invoking OpenMP threads. If OpenMP threads are set, the computes will use OpenMP parallelization over the reciprocal lattice mesh. MPI processes can still be invoked, parallelizing over the spatial decomposition of the atoms. Because of this hybrid parallelization, the overall memory footprint is reduced.
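For example, a hybrid launch along these lines (the binary name, input file, and process/thread counts are assumptions for your machine):

```
# 2 MPI ranks (spatial decomposition of atoms), each with 8 OpenMP
# threads (parallelization over the reciprocal lattice mesh)
env OMP_NUM_THREADS=8 mpirun -np 2 lmp_mpi -in in.xrd
```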

-Shawn Coleman

Thanks Shawn, I will try as suggested.

Looking forward to the additions to compute xrd.