Re: [lammps-users] Help about accelerate script

Dear Dr. Axel Kohlmeyer,

Thanks for the reply.
I am trying to simulate the dynamics of a sphere of electrons and protons in an external field (treating an electron as a proton with opposite charge and 1/2000 of the mass).
In the first stage of my simulation, I am interested in the energy of the protons in the external field and in the Coulomb field of the protons and electrons. So I removed every electron that reached the boundary of the box (via the thermo_modify lost ignore setting), since lost electrons do not affect the proton energy.
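
Roughly, the relevant commands for that first stage look like this (the field expression and all numbers here are illustrative placeholders, not my real values):

boundary f f f              # fixed box: atoms that cross it are deleted at reneighboring
thermo_modify lost ignore   # do not abort the run when electrons are lost
# time- and space-dependent external field via an atom-style variable
variable Ex atom 0.1*sin(0.01*time)*exp(-(x^2+y^2+z^2)/10000)
fix Field all efield v_Ex 0.0 0.0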

Now I want to find the mean energy of the electrons, so I can no longer remove them. I therefore changed the boundaries of the simulation box from fixed to shrink-wrapped with a minimum value.
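
That is:

boundary m m m   # shrink-wrapped with a minimum: the box grows to follow the electrons but never shrinks below its initial size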

But now the simulation takes very long and LAMMPS uses a lot of RAM, because the size of the box has increased. On the other hand, I cannot use periodic boundaries, because the external field depends on time and space.

Also, I use mpirun for parallel computation.
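
The run is launched roughly like this (the executable and input file names are placeholders):

mpirun -np 4 lmp_intel -in in.sphere
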
Now I am trying to find a way to decrease the simulation time and the RAM usage.

Best regards,

Mohammad

sorry, but that is not helping at all. all you are doing is restating what you already wrote, just with more words. there is no additional, relevant information or detail here.
you could have just as well written “I have (some random software) and it is using too much RAM and running too slow. what can I do to make it use less RAM and run faster?”

pick any software you know about: how would you answer such a nonspecific question?
you are a scientist, right? as such, you should be able to describe a problem precisely using the relevant information.

it doesn’t matter at all what it is that you want to model. the computer doesn’t care and neither does the software; they just follow the instructions given. so what matters is how you implement your model, what settings you use, what version of LAMMPS, how it was compiled and on what platform, with how many CPUs, etc., and why you think it is too slow and using too much RAM, and so on and so forth.

Axel.

Dear Dr. Axel Kohlmeyer,

I’m sorry; I thought there was a need to explain the physics.

information about the system:

I use LAMMPS (3 Mar 2020) on Ubuntu 16.04.1 x86_64. I compiled LAMMPS with the Intel compiler.
My hardware is:

CPU: Intel® Core™ i7-2700K @ 3.50 GHz, 8 cores
RAM: 31 GB

information about the script:

pair_style:

pair_style buck/coul/cut 12.0
# Buckingham parameters per pair are A, rho, C; A = 0 for all pairs, so only
# the -C/r^6 term and the Coulomb term act. The negative C on the 1-2
# (electron-proton) pair gives a repulsive 1/r^6 core.
pair_coeff 1 1 0.0 1.00 0.00
pair_coeff 1 2 0.0 1.00 -1448.0
pair_coeff 2 2 0.0 1.00 0.00

I use NVE

fix MicrocanonicalEnsemble all nve

At later times, when I want to calculate the properties of the electrons, practically only the external electric potential acts on them, so I expect the amount of computation to decrease. But because I have to enlarge the box so that the electrons stay inside it, LAMMPS performs a series of calculations whose source I do not know. Can I make a change to remove these calculations?

I hope I have explained the issue clearly.

Thanks again.
Best regards,
Mohammad

Using a cutoff of 12 Angstrom for Coulomb is generally not realistic due to the long-range nature of the interaction. More realistic would be a Wolf or DSF summation, or kspace_style msm, which supports non-periodic boundary conditions.
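
For example (the damping parameters and accuracy below are placeholders that need tuning, and you can combine these with your Buckingham term via pair_style hybrid/overlay if you still need it):

pair_style coul/wolf 0.2 12.0   # Wolf summation: alpha, cutoff
# or
pair_style coul/dsf 0.2 12.0    # damped shifted force: alpha, cutoff
# or MSM, which also works with non-periodic boundaries:
kspace_style msm 1.0e-4
pair_style coul/msm 12.0
# (each of these coul-only styles also needs a "pair_coeff * *" line)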

But it seems like your simulation has other issues that need to be fixed first. You may need to use a small cluster and parallelize your simulation across more than one CPU if you are running out of RAM.

Stan

Dear Dr. Axel Kohlmeyer,

I’m sorry; I thought there was a need to explain the physics.

you don’t have a problem with your physics, but a technical problem. otherwise this would be off-topic for this mailing list anyway.

i do understand why you are defensive about the physics, because what you are doing is rather obviously a very flawed model, but I don’t see much of a point in arguing about that. over the years I have learned that the more flawed a model is, the less open people are to being told so. there are plenty of technical issues as well.

information about the script:

pair_style:

pair_style buck/coul/cut 12.0
pair_coeff 1 1 0.0 1.00 0.00
pair_coeff 1 2 0.0 1.00 -1448.0
pair_coeff 2 2 0.0 1.00 0.00

I use NVE

fix MicrocanonicalEnsemble all nve

these are both rather irrelevant in this case.

At later times, when I want to calculate the properties of the electrons, practically only the external electric potential acts on them, so I expect the amount of computation to decrease. But because I have to enlarge the box so that the electrons stay inside it, LAMMPS performs a series of calculations whose source I do not know. Can I make a change to remove these calculations?

you are not making sense here. you specifically mentioned the pair style above, and now you claim it should not cause computational cost.
if you don’t want to compute the pairwise interactions you have to use pair_style none or zero.
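
e.g. (the cutoff is just the one from your input):

pair_style zero 12.0   # still builds neighbor lists, but computes no forces or energies
pair_coeff * *
# or
pair_style none        # no pairwise computation and no neighbor lists at all
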
the box size is irrelevant as well, since you use shrink-wrap boundary conditions: the box will be adjusted to the minimum size needed at the first step. when running in parallel, you risk losing atoms exactly because of setting such a large initial box, since it will shrink immediately at the first MD step. with shrink-wrap conditions, the initial box should be chosen so that it just covers the actual system.

if you want your box not to grow excessively, you should use a box with fixed boundaries and soft, reflective walls.
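
for example, something like this sketch (the epsilon/sigma/cutoff values per face are placeholders you would have to adapt):

boundary f f f
fix walls all wall/harmonic xlo EDGE 1.0 1.0 5.0 xhi EDGE 1.0 1.0 5.0 &
    ylo EDGE 1.0 1.0 5.0 yhi EDGE 1.0 1.0 5.0 &
    zlo EDGE 1.0 1.0 5.0 zhi EDGE 1.0 1.0 5.0
# arguments per face: position, epsilon, sigma, cutoff. fix wall/reflect
# would be the simpler hard-reflection alternative.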

memory use is also not an issue. the number of particles is too small, and when running across multiple CPUs it is even less per CPU. I see a usage of less than 100MB in total. that is minuscule on a machine with over 30GB of RAM.
now let’s have a look at your timing output:

Loop time of 130.493 on 4 procs for 6000 steps with 3743 atoms

Performance: 0.002 ns/day, 12082.723 hours/ns, 45.979 timesteps/s
94.3% CPU use with 4 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total