[lammps-users] Parameters for LJ UNITS

Hello,

I want to convert parameters from dimensionless LJ units to SI (or real, metal, ...) units, so I need to know sigma, epsilon, and the mass for a specific material.

For instance, my material is liquid argon (or xenon). All I have found is the mass, but what about sigma and epsilon? Could you suggest a source where I can get these parameters?

Thanks,

Sergey R., a student at the Moscow Power Engineering Institute.

Lennard-Jones parameters are fairly widely reported in the literature, so the Google and the Interwebs are your friends on this one...

–AEI

actually, for argon there is _the_ classic paper by
Aneesur Rahman from 1964: Phys. Rev. 136, A405-A411.

there are probably some "better" parameters, but this one is
really hard to miss. ;-)

cheers,
   axel.
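
Since the question is how to get from reduced numbers to real ones, here is a minimal C++ sketch of the arithmetic, assuming the commonly quoted argon values (sigma of about 3.4 Angstrom, epsilon/kB of about 120 K, mass 39.948 g/mol); double-check them against the Rahman paper or whatever source you settle on. The reduced temperature and timestep at the end are just example inputs.

  // lj_to_real.cpp -- sketch: convert reduced LJ quantities to real units,
  // using argon-like parameters; swap in the values for your own material.
  #include <cstdio>
  #include <cmath>

  int main() {
      const double kB    = 1.380649e-23;    // Boltzmann constant, J/K
      const double amu   = 1.66053907e-27;  // atomic mass unit, kg
      const double sigma = 3.4e-10;         // m   (argon, ~3.4 Angstrom)
      const double eps   = 120.0 * kB;      // J   (argon, epsilon/kB ~ 120 K)
      const double mass  = 39.948 * amu;    // kg  (argon)

      // characteristic LJ scales
      double tau = sigma * std::sqrt(mass / eps);   // LJ time unit
      printf("length unit : %g m\n", sigma);
      printf("energy unit : %g J\n", eps);
      printf("time unit   : %g s\n", tau);          // ~2.15e-12 s for argon

      // example: reduced temperature T* = 0.71, reduced timestep dt* = 0.005
      double Tstar = 0.71, dtstar = 0.005;
      printf("T  = %g K\n", Tstar * eps / kB);      // ~85 K
      printf("dt = %g s\n", dtstar * tau);          // ~1.1e-14 s
      return 0;
  }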

I'm running a really big simulation and have run into the error "PPPM grid is too large". Reducing the precision isn't a good option. Can I increase OFFSET to allow for a larger grid without screwing something else up? It seems so, but I was thinking someone more familiar with the code might know otherwise.
Thanks,
Matt

hi matt,

you can change the order, or you can change the
real space coulomb cutoff. both will influence the grid
spacing without affecting the accuracy.

is your system a "dense" system, or do you have vacuum regions?

cheers,
   axel.
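
To make the cutoff/grid trade-off concrete, here is a back-of-the-envelope C++ sketch. This is only the generic Ewald scaling argument, not the actual LAMMPS grid estimator, and the box length, target accuracy, and prefactor are invented for illustration.

  // grid_scaling.cpp -- rough sketch: at fixed accuracy the Ewald splitting
  // parameter scales like 1/cutoff, and the mesh per dimension scales with
  // alpha*L, so a longer real-space cutoff permits a proportionally coarser
  // PPPM grid (at the price of more real-space pair work).
  #include <cstdio>
  #include <cmath>

  int main() {
      const double L        = 1000.0;  // box edge in Angstrom (made up)
      const double accuracy = 1.0e-4;  // desired relative accuracy (made up)

      for (double cutoff = 8.0; cutoff <= 16.0; cutoff += 2.0) {
          // crude estimate from erfc(alpha*cutoff) ~ accuracy
          double alpha = std::sqrt(-std::log(accuracy)) / cutoff;
          // mesh points per dimension ~ C * alpha * L; the prefactor C
          // shrinks as the interpolation (stencil) order goes up
          double C = 2.0;              // placeholder prefactor
          printf("cutoff %4.1f A -> alpha %.3f 1/A -> ~%d grid points/dim\n",
                 cutoff, alpha, (int)std::ceil(C * alpha * L));
      }
      return 0;
  }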

It's dense - mostly water at 1 g/cm^3. The problem is just in one dimension, with a grid size of 5400.
Thanks,
Matt

ok. i just checked empirically. you can safely crank OFFSET up to 8192.
i ran some tests, and both versions, with and without the larger size,
give identical results for me. didn't try as huge a system, tho.

i assume you're going to run this on a fairly large
machine at some later point, right? if yes, you may
be interested in trying out some of the stuff that i've
been working on recently. i've stumbled across
some "secret sauces" that can help speed up
LAMMPS significantly when you run on a large number
of processors.

cheers,
    axel.

I'll check on this 4096 limit. There was some reason that
I can't recall now.

Axel, what's the "secret sauce"? Do they put it
on Philly cheese steaks?

Steve

A larger OFFSET looks fine - I bumped it up to 16384, and it will be
in the next patch.

Steve
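
Aside: as the thread suggests, OFFSET in pppm.cpp caps the per-dimension grid count, which is where the "PPPM grid is too large" error comes from; as far as I can tell, the same constant is also folded into the particle-to-grid mapping so that integer truncation acts like floor() for coordinates that land slightly below zero. A standalone C++ sketch of that truncation idiom (not the literal LAMMPS code):

  // offset_floor.cpp -- sketch of the shift-and-truncate idiom an
  // OFFSET-style constant enables: (int)(x + OFFSET) - OFFSET floors x
  // even when x is slightly negative, where a bare (int)x would round
  // toward zero instead.
  #include <cstdio>
  #include <cmath>

  #define OFFSET 16384   // in LAMMPS this also bounds the grid size

  int main() {
      double samples[] = {2.7, 0.0, -0.3, -1.8};
      for (double x : samples) {
          int bare    = static_cast<int>(x);                    // toward zero
          int shifted = static_cast<int>(x + OFFSET) - OFFSET;  // like floor
          printf("x=%5.1f  (int)x=%3d  shifted=%3d  floor=%3.0f\n",
                 x, bare, shifted, std::floor(x));
      }
      return 0;
  }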

I'll check on this 4096 limit. There was some reason that
I can't recall now.

Axel, what's the "secret sauce"? Do they put it
on Philly cheese steaks?

no, the cheese would clog everything up.

i mean the multi-threading and fourier transform
stuff that i've been working on recently. if you run
on a very large (multi-core) machine, the "rules" of what
is a good choice of settings keep changing. it is
most extreme if you do implicit solvent or coarse-grained
simulations with few charged particles.
i have inputs where a modified version of ewald
is almost as fast as a similarly adapted version
of pppm. we're running validations and benchmarks
over the next couple of weeks and hope to have
something very presentable soon. i already sent
you some preliminary results. i hope you got them...

the fourier transform still needs some work, tho...

cheers,
    axel.

Thanks Axel, Steve. The simulation seems to be running fine now with OFFSET at 8192. Why the powers of 2, anyway?

Axel, the total simulation will be around a half million atoms, all with charges. I'll probably have access to a few hundred procs. I'm very interested in these secret sauces. Do tell.
Matt

computers like powers of two. for modern CPUs, alignment
of memory addresses is a big issue. depending on hardware
and compiler, you can get a significant performance increase
by replacing malloc() with posix_memalign() and aligning all
memory allocations to 16-byte boundaries.
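
A minimal sketch of that swap, using the 16-byte alignment mentioned above (the buffer size is arbitrary):

  // aligned_alloc.cpp -- sketch: request a 16-byte-aligned buffer with
  // posix_memalign() instead of plain malloc().
  #include <cstdio>
  #include <cstdint>
  #include <cstdlib>

  int main() {
      const size_t n = 1000000;
      void *ptr = nullptr;

      // alignment must be a power of two and a multiple of sizeof(void*)
      if (posix_memalign(&ptr, 16, n * sizeof(double)) != 0) {
          fprintf(stderr, "allocation failed\n");
          return 1;
      }
      double *buf = static_cast<double *>(ptr);
      printf("buffer at %p, 16-byte aligned: %s\n",
             static_cast<void *>(buf),
             (reinterpret_cast<uintptr_t>(buf) % 16 == 0) ? "yes" : "no");

      free(buf);   // memory from posix_memalign() is released with free()
      return 0;
  }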

as for the secret sauce: we are still working on it, particularly
validation and benchmarks, but you can get it from:
http://sites.google.com/site/akohlmey/software/lammps-icms

even without compiling with OpenMP support, almost all
/omp pair styles get a little performance boost. as soon
as you get to the point where either kspace or communication
starts to dominate, you want to switch from plain MPI parallelization
to MPI+OpenMP and get an extra performance boost. the best case
so far was a factor of 2.3x for a best MPI effort vs. best
MPI+OpenMP effort (not counting nodes; the MPI+OpenMP effort
needs fewer nodes to get better performance).
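
In case it helps to picture the hybrid scheme: the point is to run fewer MPI ranks and let each rank drive several OpenMP threads, so the domain decomposition, kspace, and communication deal with fewer pieces. A generic MPI+OpenMP C++ sketch, nothing LAMMPS-specific (compile with something like mpicxx -fopenmp):

  // hybrid_hello.cpp -- generic MPI+OpenMP layout: a few MPI ranks, each
  // driving several OpenMP threads, instead of one MPI rank per core.
  #include <cstdio>
  #include <mpi.h>
  #include <omp.h>

  int main(int argc, char **argv) {
      int provided, rank, nranks;
      // FUNNELED is enough if only the master thread makes MPI calls
      MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nranks);

      #pragma omp parallel
      {
          #pragma omp critical
          printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
                 rank, nranks, omp_get_thread_num(), omp_get_num_threads());
      }

      MPI_Finalize();
      return 0;
  }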

for systems with all charges, i'm down to where only
the time spent in Kspace is significant and i'm trying
to cut that down now, too.

cheers,
    axel.

A larger OFFSET looks fine - I bumped it up to 16384, and it will be
in the next patch.

steve,

while transferring this change to my pppm variant
for coarse-grained systems, i noticed that pppm_tip4p
was left out. you probably want to keep that one
consistent with the rest.

cheers,
    axel.