Dear Dr. Axel Kohlmeyer,
I’m sorry; I thought there was a need to explain the physics.
you don’t have a problem with your physics but a technical one; otherwise this would be off-topic for this mailing list anyway.
I do understand why you are defensive about the physics, because what you are doing is rather obviously a very flawed model, but I don’t see much of a point in arguing about that. over the years I have learned that the more flawed a model is, the less open people are to being told so. there are plenty of technical issues as well.
information about the script:
pair_style:
pair_style buck/coul/cut 12.0
pair_coeff 1 1 0.0 1.00 0.00
pair_coeff 1 2 0.0 1.0 -1448.0
pair_coeff 2 2 0.0 1.00 0.00
I use NVE:
fix MicrocanonicalEnsemble all nve
these are both rather irrelevant in this case.
When I want to calculate the properties of the electrons at later times, practically only the external electric potential acts on them, so I expect the amount of computation to be smaller. But because I have to enlarge the box so that the electrons stay inside it, LAMMPS performs a series of calculations whose source I do not know. Can I make a change to remove these calculations?
you are not making sense here. you specifically mentioned the pair style above, and now you claim that it should not cause any computational cost.
if you don’t want to compute the pairwise interactions you have to use pair_style none or zero.
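for illustration, a minimal (untested) sketch of what that could look like; the 12.0 cutoff is just carried over from your current script:

# keep neighbor lists with a 12.0 cutoff, but compute no pairwise forces or energies
pair_style zero 12.0
pair_coeff * *

# or, to skip pairwise interactions and neighbor lists entirely:
# pair_style none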
the box size is irrelevant as well, since you use shrink-wrap boundary conditions: the box will be adjusted to the minimum size needed at the first step. when running in parallel, you risk losing atoms precisely because you set such a large initial box, as it will shrink immediately at the first MD step. with shrink-wrap conditions, the initial box should be chosen so that it just covers the actual system.
if you want your box not to grow excessively, you should use a box with fixed boundaries and soft, reflective walls.
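an untested sketch of that, assuming walls at the current box faces ("walls" is just an example fix ID; you would adapt the positions, or use a soft wall style such as fix wall/harmonic instead of a hard reflection; note that the boundary command has to be issued before the box is created):

# non-periodic, fixed box boundaries in all directions
boundary f f f

# reflect particles off the box faces so they cannot leave the box
fix walls all wall/reflect xlo EDGE xhi EDGE ylo EDGE yhi EDGE zlo EDGE zhi EDGE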
memory use is also not an issue. the number of particles is too small, and when running across multiple CPUs the per-process use is even smaller. I see less than 100MB in total, which is minuscule on a machine with over 30GB of RAM.
now let’s have a look at your timing output:
Loop time of 130.493 on 4 procs for 6000 steps with 3743 atoms
Performance: 0.002 ns/day, 12082.723 hours/ns, 45.979 timesteps/s
94.3% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total