RDF comparison for LAMMPS/VMD

Dear Sirs,

I would like to ask if there is any setting that would allow me to obtain results identical to VMD's (verified by a script of my own) for RDF calculations. Attached are input files and a comparison plot for a sample configuration.

Best regards,

Osvalds Verners

TU Delft, Faculty of Civil Engineering and Geosciences

Structural Engineering Department

input.tar.bz2 (134 KB)

The compute rdf command calculates it out to a cutoff. Its doc page tells you how to time average it, as well as how to use the rerun command to calculate an RDF for distances longer than the cutoff.

Steve

Dear Steve,

Thank you for the response, but I would still like to find out what accounts for the observed differences between the VMD and compute rdf results (see the previous attachment), given that, reportedly, the same cutoff, number of snapshots (1 for simplicity), and bin size are used.

Best regards,

Osvalds

I have no idea. If you produce a dump snapshot for a simple small system, you can calculate the RDF yourself and verify that it is the same as what LAMMPS and/or VMD are giving you.

Steve
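Steve's suggestion can be sketched as a short script. The following is a minimal, assumed implementation for a single snapshot of a single atom type in a cubic periodic box (coordinates, box length, cutoff, and bin count are all placeholders, not taken from the attached input files); it uses the minimum-image convention and normalizes by the ideal-gas pair count:

```python
import numpy as np

def rdf(coords, box, cutoff, nbins):
    """g(r) for one frame of a single-type system in a cubic periodic box.

    Valid for cutoff < box / 2 (minimum-image convention).
    """
    n = len(coords)
    # All pairwise separation vectors, wrapped by the minimum-image rule.
    d = coords[:, None, :] - coords[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d ** 2).sum(axis=-1))
    # Keep each unordered pair once (i < j) and drop pairs beyond the cutoff.
    r = r[np.triu_indices(n, k=1)]
    r = r[r < cutoff]

    hist, edges = np.histogram(r, bins=nbins, range=(0.0, cutoff))
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    # Ideal-gas reference: N*(N-1)/2 unordered pairs, uniformly distributed,
    # so the expected count per bin is (pairs) * (shell volume / box volume).
    ideal = 0.5 * n * (n - 1) / box ** 3 * shell
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist / ideal

# Sanity check: a random (ideal-gas-like) configuration should give g(r) near 1.
rng = np.random.default_rng(0)
pts = rng.random((500, 3)) * 20.0
r, g = rdf(pts, box=20.0, cutoff=8.0, nbins=40)
```

A script like this, run on the same dump snapshot fed to both tools, makes it easy to see which normalization each code is applying.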

Despite its simplicity, computing a g(r) correctly requires attention to some subtle details. The graphs look as if the normalization is off in LAMMPS, which can happen when the g(r) normalization does not account for the finite size of the system (which leads to fewer pairs).

Thank you for the comments,

Osvalds

From the compute rdf doc page:

The g(r) value for a bin is calculated from the histogram count by scaling it by the idealized number of how many counts there would be if atoms of type jtypeN were uniformly distributed. Thus it involves the count of itypeN atoms, the count of jtypeN atoms, the volume of the entire simulation box, and the volume of the bin's thin shell in 3d (or the area of the bin's thin ring in 2d).

I don't know what other kind of normalization would be more correct for g(r).

Steve
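The normalization quoted from the doc page can be written out directly. This is a sketch of that description only, not the actual compute rdf source; `num_i`, `num_j`, `box_vol`, and `edges` are placeholder names:

```python
import numpy as np

def normalize(hist, num_i, num_j, box_vol, edges):
    """Scale a pair-count histogram by the idealized uniform-distribution count.

    hist[k] counts ordered (itype, jtype) pairs whose separation falls in
    bin k; edges are the bin boundaries in r.
    """
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    # Ideal count: each of the num_i central atoms sees num_j partners
    # spread at uniform number density num_j / box_vol over the shell.
    ideal = num_i * num_j / box_vol * shell
    return hist / ideal
```

If the histogram itself matches the uniform-density expectation, this returns g(r) = 1 in every bin, which is a convenient self-test.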


You have to determine how many atoms show up in both groups, so the total normalization would be:

Vol_box / (num_i * num_j - duplicates) / vol_slice

Thus, for a simple single-type system, you would have N*(N-1) instead of N*N.

axel.