Hi LAMMPS,
I am trying to minimise the energy of a slightly overlapped system with the following pair style:
pair_style soft 1.00
pair_modify shift no mix arithmetic
This is a cosine potential, so the energy and its first derivative vanish smoothly at the cutoff.
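For reference, the functional form of pair_style soft (per the LAMMPS documentation) is

E(r) = A \left[ 1 + \cos\!\left( \frac{\pi r}{r_c} \right) \right], \qquad r < r_c

with E(r) = 0 for r >= r_c.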
I know that zero energy can be reached for this system and that the forces can be small. Thus, I am using the following minimisation settings:
min_style cg ## Conjugate gradient to be used
min_modify dmax 0.2 line forcezero
minimize ${etol} ${ftol} ${maxiter} ${maxeval}
run 1
with the variables set to etol = 1e-1, ftol = 1e-20, maxiter = 1e6, and maxeval = 1e5.
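In the input these are defined along the following lines (a sketch; the exact definitions are in my full script):
variable etol    equal 1.0e-1
variable ftol    equal 1.0e-20
variable maxiter equal 1000000
variable maxeval equal 100000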
The simulation output in the terminal seems to get stuck after 3 iterations:
Step PotEng Press Fmax Fnorm
0 4.9174114780115e-13 5.6107458619855e-08 0.00014556985069157 0.00021807084317001
1 2.2573054536679e-15 2.6896682336094e-09 8.1322141072616e-06 1.2916958094125e-05
2 0 5.6387767220305e-12 1.8437555783155e-08 2.916287797272e-08
You can see that once the potential energy is zero, the minimisation seems to get stuck without hitting any loop-termination condition. Even with ${ftol} this small, the minimiser should proceed to the next iteration, which is not the case.
The best solution would be to stop the minimisation once the energy is zero, "bearing in mind that the forces are very small".
I tried to play with dmax, which was not helpful. Also, making etol larger may not give me the correct physics.
I am using LAMMPS built with Intel compilers, with ~/lammps-17Apr2024/src/lmp_intel_cpu_intelmpi as the executable.
Thanks for looking into this.
Please have a good look at the forum guidelines and suggestions post to learn how to correctly format quoted LAMMPS inputs and other files with special characters. Your input is almost unreadable (and it is missing lots of stuff, too).
If there is no interaction energy with a repulsive-only potential, there are no more forces and the minimizer would not know where to go. What you are looking at in your output is effectively all zeros plus a little bit of numerical noise.
I don't know what kind of "physics" you expect, but what you show is what LAMMPS computes from the input you provide, and it looks like LAMMPS does what it is asked to do. That is all you can expect. LAMMPS doesn't know about physics; it will just blindly execute what it is programmed to do, and that is based on your input.
Dear Akohlmey,
Regarding your comment
If there is no interaction energy with a repulsive-only potential, there are no more forces, and the minimizer would not know where to go. What you are looking at in your output is effectively all zeros plus a little bit of numerical noise.
I understand, and this has always been clear to me. I don't have a problem with the energy being zero; what I am saying is that once energy = 0 is reached, the LAMMPS minimisation is stuck there, does not come out of that routine, and does not output anything.
That is why I could not show you the fourth set of output.
Possibly the various "etol" values don't help it come out of the loop once the energy is zero. I hope I am able to explain the problem beyond the "physics" and the "script/notation".
Regards,
Anshul
There is not much that can be done from the outside unless it is possible to reproduce what you are seeing, but since you only provide tiny fragments of your input deck, there is no chance of that. There are multiple possible causes for getting stuck, and they can only be found with proper debugging of the source code and an executable.
So here are a bunch of questions for you:
- does this happen when running in parallel or also when running with just one processor or other processor counts?
- does this happen when you compile a different executable with a different compiler and/or a different MPI library?
- can you reproduce this with the (non-MPI) static binary posted here?
- does this happen only for the one specific system that you are trying to minimize or other systems with the same pair style settings and pair coefficients?
- can you provide an input (and data) file that is very small (a few hundred atoms) that shows the same behavior and can you post it here?
Dear Akohlmey,
Thanks, here is my pointwise response.
I want to attach some files for your perusal, but it says "new users cannot attach files".
Files I want to send: 1. readme, 2. script, 3. input sample, and 4. output files.
Let me know if there is another method for file sharing.
does this happen when running in parallel or also when running with just one processor or other processor counts?
The error and its pattern are independent of MPI vs. serial and of the number of processors used.
does this happen when you compile a different executable with a different compiler and/or a different MPI library?
I have used lammps-17Apr2024 and lammps-30Jul2021 (with a custom potential), both with Intel and GNU compilers, across different platforms (Mac and Intel clusters).
can you reproduce this with the (non-MPI) static binary [posted here]?
Thanks, I just checked, and the error persists with exactly the same pattern (see readme).
does this happen only for the one specific system that you are trying to minimize or other systems with the same pair style settings and pair coefficients?
I use the same script for an LJ system (sketched below) and it runs perfectly, exactly as it is supposed to by the rulebook.
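For context, the LJ variant differs only in the pair interaction lines; roughly (a sketch with typical reduced-unit values, not my actual coefficients):
pair_style lj/cut 2.5
pair_coeff * * 1.0 1.0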
can you provide an input (and data) file that is very small (a few hundred atoms) that shows the same behavior and can you post it here?
I want to share a system of 2000 particles (don't worry, it runs quickly, ~2 min, and is not data heavy) and the relevant script for you to have a look at.
Thanks a lot for having a look into this.
Regards,
Anshul
You should be able to attach files now.
I've tried to reproduce this with the following input file, which also reaches zero energy after a few steps, but there are no problems with getting stuck.
region box block 0 4 0 4 0 4
create_box 1 box
create_atoms 1 random 20 9834 box
pair_style soft 1.0
pair_coeff * * 10.0
pair_modify shift no mix arithmetic
mass 1 1.0
thermo_style custom step pe press fmax fnorm
thermo_modify format float %.15g
thermo 1
min_style cg ## Conjugate gradient to be used
min_modify dmax 0.2 line forcezero
minimize 1e-1 1e-20 100000 10000
And I get the following log file:
LAMMPS (17 Apr 2024)
using 1 OpenMP thread(s) per MPI task
region box block 0 4 0 4 0 4
create_box 1 box
Created orthogonal box = (0 0 0) to (4 4 4)
1 by 1 by 1 MPI processor grid
create_atoms 1 random 20 9834 box
Created 20 atoms
using lattice units in orthogonal box = (0 0 0) to (4 4 4)
create_atoms CPU = 0.000 seconds
pair_style soft 1.0
pair_coeff * * 10.0
pair_modify shift no mix arithmetic
mass 1 1.0
thermo_style custom step pe press fmax fnorm
thermo_modify format float %.15g
thermo 1
min_style cg ## Conjugate gradient to be used
min_modify dmax 0.2 line forcezero
minimize 1e-1 1e-20 100000 10000
Generated 0 of 0 mixed pair_coeff terms from arithmetic mixing rule
Neighbor list info ...
update: every = 1 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 1.3
ghost atom cutoff = 1.3
binsize = 0.65, bins = 7 7 7
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair soft, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d
bin: standard
Per MPI rank memory allocation (min/avg/max) = 4.188 | 4.188 | 4.188 Mbytes
Step PotEng Press Fmax Fnorm
0 1.28436391301183 0.570856298729763 28.2261847512922 83.3718587468047
1 0.71194191148135 0.544255295920486 22.5370115172058 64.1784570198596
2 0.0925123104723432 0.155713580397347 11.0981697222272 26.5341587607004
3 0.000760655578884217 0.0125710157458266 1.13833844912318 2.45021750498461
4 1.43882205017043e-06 0.000498628987540235 0.043540366486898 0.106585506996208
5 4.333676917323e-09 3.0392722104665e-05 0.0027113073511807 0.00431743294230945
6 3.93892141126173e-11 2.053834910071e-06 0.00024092173840571 0.00055767797565208
7 0 7.46677506811399e-16 8.75874313643395e-14 2.02744599715509e-13
Loop time of 6.4727e-05 on 1 procs for 7 steps with 20 atoms
43.3% CPU use with 1 MPI tasks x 1 OpenMP threads
Minimization stats:
Stopping criterion = energy tolerance
Energy initial, next-to-last, final =
1.28436391301183 3.93892141126173e-11 0
Force two-norm initial, final = 83.371859 2.027446e-13
Force max component initial, final = 28.226185 8.7587431e-14
Final line search alpha, max atom move = 0.0050660592 4.4372311e-16
Iterations, force evaluations = 7 20
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 1.189e-05 | 1.189e-05 | 1.189e-05 | 0.0 | 18.37
Neigh | 5.67e-06 | 5.67e-06 | 5.67e-06 | 0.0 | 8.76
Comm | 7.38e-06 | 7.38e-06 | 7.38e-06 | 0.0 | 11.40
Output | 2.0679e-05 | 2.0679e-05 | 2.0679e-05 | 0.0 | 31.95
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 1.911e-05 | | | 29.52
Nlocal: 20 ave 20 max 20 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 59 ave 59 max 59 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 22 ave 22 max 22 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 22
Ave neighs/atom = 1.1
Neighbor list builds = 1
Dangerous builds = 0
Total wall time: 0:00:00
Dear Akohlmey,
Thanks for allowing me to attach the files and for working on this.
In your case, in my experience, etol is quite high (1e-1). For my configuration, my script also runs with etol = 1e-4, but honestly that is a high value, too. I am looking for etol <= 1e-10.
I am attaching my files.
Regards,
Anshul
stress-pe.dat (51 Bytes)
in.uniform_soft (5.2 KB)
run.sh (117 Bytes)
README (4.5 KB)
lammps-file (123.9 KB)
@Anshul_DSP you are using etol = 0 in the in.uniform_soft script and do not change this value on the LAMMPS command line in the run.sh file. Since etol is the relative tolerance (see the minimize doc page), consider using some nonzero value. Also, since you are using units lj, requiring 1e-10 for etol is overkill in my opinion. I tested with the 17Apr2024 (develop) version with etol = 1e-8 and 1e-10, and the runs proceed normally, without hanging up.
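For reference, my paraphrase of the energy criterion from the minimize doc page and the CG source: iteration i stops roughly when

|E_i - E_{i-1}| \le \text{etol} \cdot \tfrac{1}{2} \left( |E_i| + |E_{i-1}| + \varepsilon \right)

where \varepsilon is a small built-in constant, which is why a nonzero etol can still be satisfied once the energy is numerically zero, while etol = 0 cannot.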
Hi,
Thanks for having a look. This sample runs with etol = 1e-8 or 1e-10, but it "hangs up" for 1e-12. Also, 1e-12 is not overkill, numerically speaking; it is well within double precision.
Statistically speaking, some runs/samples hang up even for etol = 1e-4.
Also, wherever this "hang up / infinite run" happens, the energy is again zero, but the maxiter loop does not terminate, which is a systematic trend.
Takeaway: it can possibly happen even for larger etol (for me, very few runs hang up for etol = 1e-6), but the chances are low, since you are then killing the loop far from the minimum.
Regards,
Anshul
After some careful digging, I actually think there is an issue with the forcezero line search algorithm in the special case where the energy of the system is exactly zero.
That said, this is a very peculiar situation that arises here mostly because, at least to me, this model hardly makes sense.
I'll open a PR after some cleaning and verification of my code.
Dear Germain,
Thanks for having a look. Yes, you are correct about the issue with the line search when the energy is exactly zero. I have been lucky/unlucky enough to hit this issue in my model, but I believe this will be a positive incremental update to LAMMPS that will provide more versatility.
Based on my past work with minimisation, this line-search loop could be terminated when the energy is below some tiny threshold (say 1e-200) and, at the same time, the forces (infinity norm of the forces) are below a small threshold. Both conditions are needed because forces can be large even with zero potential energy, for example LJ at r = \sigma; in that case this criterion would correctly not terminate the minimisation.
(If I recall correctly, LAMMPS works with double-precision floats, for which the smallest positive normal value is about 2.225e-308.)
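In symbols, the proposed additional stopping test would be something like (my notation, not existing LAMMPS code):

E < \varepsilon_E \quad \text{and} \quad \| \mathbf{F} \|_\infty < \varepsilon_F

with both thresholds required, since zero energy alone does not imply vanishing forces.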
Thanks again for your time and attention.
Regards,
Anshul
It is important to note that this issue is specific to the choice "line forcezero". The documentation implicitly recommends not using this option unless you have a reason to believe it will work well for your system:
"The forcezero line search algorithm is similar to quadratic. It may be more efficient than quadratic on some systems."
Try using "line quadratic" and see what happens.
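For example, keeping everything else from the inputs quoted above, the minimisation block would become (a sketch, not a tested recommendation):
min_style cg
min_modify dmax 0.2 line quadratic
minimize ${etol} ${ftol} ${maxiter} ${maxeval}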