I encountered an issue where a LAMMPS calculation was terminated when I used scientific notation in the input file. After changing the parameters to integer or decimal form, the calculation runs normally. I am sure these parameters should accept scientific notation. Could this problem be related to the LAMMPS installation? What solutions could there be? I kindly ask for advice from the experts.
Nobody will know until you provide us with the exact details of the command that failed and the reason why you believe it is correct (i.e. the corresponding documentation).
P.S.: and don’t forget to also report your LAMMPS version.
@jingfang If you want your problem to be resolved, you have to help us reproduce it. And we can only reproduce it if you provide the details of what errors you encountered, where, and how you got there.
The error occurred when I was running a NEB calculation. Here is the in.lammps file.
######################################################################
units metal
dimension 3
boundary p p f
atom_style atomic
atom_modify map array sort 0 0.0
box tilt large
variable u uloop 28
read_data init.lmp
pair_style eam/alloy
pair_coeff * * ./Zope-Ti-Al-2003.eam.alloy Ti
neighbor 0.3 bin
timestep 0.01
region bottom block INF INF INF INF INF -47
region upper block INF INF INF INF 47 INF
group group_bottom region bottom
group group_mobile subtract all group_bottom
fix freeze_bottom group_bottom setforce 0 0 0
fix neb_relax group_mobile neb 1 parallel ideal perp 1
thermo 100
thermo_style custom step temp pe ke etotal press vol pxx pyy pzz pxy pxz pyz lx ly lz xlo xhi ylo yhi zlo zhi fnorm fmax
thermo_modify lost error flush yes format 5 %.10f
dump atom_dump all custom 1000 dump.all.${u}.* id xu yu zu
dump_modify atom_dump sort id
restart 1000 neb.restart
min_style quickmin
neb 1.0e-10 1.0e-4 1e+6 1e+6 1000 final coords.final
write_dump all custom dump.final.${u} id xu yu zu modify sort id
##################################################################
Here is the output file; the job stopped after a few seconds.
#################################################
Loading mpi/oneAPI/2022.1
Loading requirement: oneAPI/2022.1
LAMMPS (29 Aug 2024)
Running on 33 partitions of processors
Abort(1) on node 3 (rank 3 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 3
Abort(1) on node 6 (rank 6 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 6
Abort(1) on node 10 (rank 10 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 10
Abort(1) on node 11 (rank 11 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 11
Abort(1) on node 13 (rank 13 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 13
Abort(1) on node 23 (rank 23 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 23
Abort(1) on node 0 (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
Abort(1) on node 1 (rank 1 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1
Abort(1) on node 2 (rank 2 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 2
Abort(1) on node 4 (rank 4 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 4
Abort(1) on node 5 (rank 5 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 5
Abort(1) on node 7 (rank 7 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 7
Abort(1) on node 8 (rank 8 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 8
Abort(1) on node 9 (rank 9 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 9
Abort(1) on node 12 (rank 12 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 12
Abort(1) on node 14 (rank 14 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 14
Abort(1) on node 15 (rank 15 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 15
Abort(1) on node 16 (rank 16 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 16
Abort(1) on node 17 (rank 17 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 17
Abort(1) on node 18 (rank 18 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 18
Abort(1) on node 19 (rank 19 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 19
Abort(1) on node 20 (rank 20 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 20
Abort(1) on node 21 (rank 21 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 21
Abort(1) on node 22 (rank 22 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 22
Abort(1) on node 24 (rank 24 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 24
Abort(1) on node 26 (rank 26 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 26
Abort(1) on node 27 (rank 27 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 27
Abort(1) on node 28 (rank 28 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 28
Abort(1) on node 29 (rank 29 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 29
Abort(1) on node 30 (rank 30 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 30
Abort(1) on node 31 (rank 31 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 31
Abort(1) on node 32 (rank 32 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 32
###################################################
Thank you very much for your prompt reply and guidance. The details of the calculation are already shown in another reply. Many thanks.
The input in the other post is useless. I cannot reproduce your issue by re-running it, because I don’t have your data file or potential file, and since this is a multi-partition run and you have not provided the output, I cannot tell remotely where exactly your calculation is stopping.
Try to create a minimal input that contains only the minimum number of commands needed to reproduce the error, and specifically a version that does not require a multi-partition run, for example by modifying one of the example inputs in the bench folder or in the examples folders (like melt or peptide). If using NEB is required, then adapt one of the neb examples.
I am sorry, but I am a new user and I cannot upload files. There are no output lines in the log.lammps files (log.lammps.0, …, log.lammps.32); they are empty. I have tried the NEB example provided on GitHub (lammps/examples/neb at develop · lammps/lammps · GitHub), and that calculation runs successfully.
I just tried changing the line “neb 1.0e-10 1.0e-4 1e+6 1e+6 1000 final coords.final” to “neb 0.0000000001 0.0001 1000000 1000000 1000 final coords.final”, and then the calculation completes. But it is quite strange that I can run a simple minimization like “min_style cg” followed by “minimize 1e-12 1e-12 10000 10000”; there the scientific notation does not affect the molecular statics simulation.
Dear professor, I have just emailed you all the files of my calculation.
I am using the latest LAMMPS version.
“Loading mpi/oneAPI/2022.1
Loading requirement: oneAPI/2022.1
LAMMPS (29 Aug 2024)”
This is incorrect. While the first two numbers represent the stopping tolerances for energy and maximum force (the same as for minimize, as mentioned below), the two arguments where you use 1e+6 are integers (they represent numbers of timesteps) and thus must not be written in a floating-point format, which is what scientific notation is. Before we added the check to LAMMPS, the C library call used to convert these numbers to integers would produce a 1 without any warning or error, which is clearly not the desired behavior. Now LAMMPS checks whether integer arguments are actually written as integers.
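To illustrate the old behavior, here is a minimal C++ sketch. This is not the actual LAMMPS source code, just the generic C library behavior it relied on: strtol() stops at the first character that is not part of an integer, so “1e+6” silently becomes 1, and only a check that the whole string was consumed exposes the invalid input.
######################################################################
// Illustration only: how a plain C-library integer conversion
// mishandles scientific notation such as "1e+6".
#include <cstdio>
#include <cstdlib>

int main() {
    const char *arg = "1e+6";

    // strtol() reads only the leading "1" and stops at the 'e',
    // returning 1 without reporting an error.
    char *end = nullptr;
    long value = std::strtol(arg, &end, 10);
    std::printf("strtol(\"%s\") -> %ld (unparsed rest: \"%s\")\n", arg, value, end);

    // Stricter validation: if the conversion did not consume the
    // whole string, the argument is not a valid integer.
    if (*end != '\0')
        std::printf("\"%s\" is not a valid integer argument\n", arg);
    return 0;
}
######################################################################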
That is because the two 1e-12 numbers represent the stopping tolerances for the total energy and the maximum force, both of which are floating-point numbers, so the scientific notation is acceptable there and will be converted correctly, while the timestep-related numbers (10000) are correctly written as valid integers.
There is a known workaround for using scientific notation in an integer context, provided the numbers are not too large: if you put a so-called “immediate variable evaluation” in place of the plain number, LAMMPS will convert the floating-point number to an integer as part of the variable evaluation, which happens in the pre-processing step before the neb command is parsed (assuming, of course, that the number can be represented as an integer).
So the following should work:
neb 1.0e-10 1.0e-4 $(1e+6) $(1e+6) 1000 final coords.final
Because this will be converted during preprocessing to
neb 1.0e-10 1.0e-4 1000000 1000000 1000 final coords.final
This “hack” should work for numbers up to about 1.0 × 10^20.
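As a small illustration of why this works (again a generic C++ sketch, not LAMMPS code): the text is first parsed as a floating-point number and only then converted to an integer, which succeeds as long as the value can be represented exactly and fits into the integer type used for step counts.
######################################################################
// Illustration only: parse as a double first, then convert to an
// integer, similar in spirit to what $(1e+6) achieves during
// pre-processing.
#include <cstdio>
#include <cstdlib>
#include <cstdint>

int main() {
    // "1e+6" parsed as a double, then converted: yields 1000000.
    double d = std::strtod("1e+6", nullptr);
    int64_t n = static_cast<int64_t>(d);
    std::printf("1e+6 as integer: %lld\n", static_cast<long long>(n));
    return 0;
}
######################################################################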
Please do NOT do this until I (or somebody else) specifically asks you for it.
It is considered rude to bombard people with files they don’t want or need to see.
The better approach is to upload them to a service such as Google Drive, OneDrive, Dropbox, or similar, and then provide a link to it so that people can choose whether to download them or not.
In any case, your explanation of what you did was sufficient to resolve the issue and to confirm that this is a user input error and not a LAMMPS error.
Got it. Thanks. It won’t happen again.
I appreciate your guidance and feedback. The problem is now solved. Thank you very much, and I am really sorry for bothering you with the zip file attached to the email. I learned a lot, both about the correct use of LAMMPS and about the way to communicate here.