I have encountered an issue in my simulation where the log file for the thermo output is missing some line breaks.
In my setup, the first run uses fix nve, followed by a run with fix nvt. In the nve section, the first missing line break appears at step 300 and then every 1800 steps after that; in the nvt section, the first missing line break occurs at step 500 and likewise every 1800 steps thereafter.
This causes a problem when I try to use Python tools to plot the data, as the missing line breaks result in errors.
Could you please tell me how I can fix this problem? I cannot upload the files as attachments since I am a new user, but I can provide more details or the original files if needed.
Hello Simon,
Thank you very much for your quick reply!
I think it is a similar issue. I will try the solution proposed in that post.
Hope it will work.
I tried the proposed solution from that post by adding thermo_modify flush yes to my input file. However, this seems to have removed the line breaks from the thermo output in the log file entirely.
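For reference, the relevant part of my input now looks roughly like this (the thermo interval and custom columns here are placeholders rather than my exact settings):

thermo 100
thermo_style custom step temp press pe ke etotal
thermo_modify flush yes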
I am using the pre-compiled executable downloaded from the LAMMPS official website. This problem seems to happen only when I use this particular input file. I did not see the same problem with another input file on the same cluster.
Any further suggestions would be greatly appreciated!
Hello,
Thanks very much for the reply!
Please find the attached input file and log file. The log file is not complete because I started a new simulation.
This log does not show any use of thermo_modify flush yes.
Your post is missing the data file and the ReaxFF potential file, so it is not possible to reproduce your calculation.
This is contrary to any expectation and observation. Those executables are used by many people and nobody has reported a behavior like this. The most likely explanation (same as for the situation in the topic you were referred to) is a flaw in the setup, configuration, or implementation of the (networked?) file system you are running on. There is next to nothing that can be done on the LAMMPS side about such issues. What you did not mention yet is using the -nb command line flag, which should turn off any I/O buffering in the C library stdio implementation that LAMMPS uses for file I/O.
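For example, with the pre-compiled executable the flag is simply added to the command line like this (the input file name is just a placeholder):

lmp -nb -in in.lammps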
Another potential issue could be that you are running the non-MPI executable with srun/mpirun/mpiexec in parallel. That would be a mistake: you would then have multiple processes all writing to the same files, which can have all kinds of unexpected consequences. The pre-compiled Linux executables only support parallelization via threads, either with OPENMP using OMP_NUM_THREADS=## and -sf omp, or with KOKKOS using OMP_NUM_THREADS=## and -kokkos on t ## -sf kk, where "##" stands for the number of threads (for ReaxFF the latter mode is highly recommended).
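As an illustration (thread count and input file name are just examples), the two threaded modes would be launched like this:

OMP_NUM_THREADS=4 lmp -sf omp -in in.lammps
OMP_NUM_THREADS=4 lmp -kokkos on t 4 -sf kk -in in.lammps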
Thank you very much for your comments and guidance!
Sorry for the confusion; I had uploaded my previous input and log files. If thermo_modify flush yes is added to the input file, the line breaks of the thermo output are all absent, as shown in this log file: log.lammps (7.9 KB).
Thanks again for your help. The issue was indeed related to my use of mpirun for parallel execution. This is already mentioned on the doc page; sorry I did not notice it before. Now I am using the OpenMP settings, and it works alright.
However, when I tried to use KOKKOS acceleration with env OMP_NUM_THREADS=16 lmp -kokkos on t 16 -sf kk -in in.lammps, I got the following error message:
LAMMPS (29 Aug 2024 - Update 1)
KOKKOS mode with Kokkos version 4.3.1 is enabled (src/KOKKOS/kokkos.cpp:72)
ERROR: Multiple CPU threads are requested but Kokkos has not been compiled using a threading-enabled backend (src/KOKKOS/kokkos.cpp:197)
Last command: (unknown)
It seems this is because KOKKOS is not compiled on the cluster. Could you please tell me whether this means I need to download the source code of LAMMPS and compile KOKKOS in the directory ${LAMMPSDIR}/src/KOKKOS/?
Using mpirun or mpiexec or srun requires that LAMMPS has been compiled with MPI support. Since there are several different MPI libraries that are compatible only at the source code level, it is not possible to provide a portable MPI-parallel binary. Thus it is a mistake to use mpirun or mpiexec with the pre-compiled executable, since those will just run multiple identical simulations concurrently in the same folder.
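To illustrate (the file name is just an example): with the serial pre-compiled executable, a command like

mpirun -np 4 lmp -in in.lammps

does not start one simulation on 4 processors but 4 independent serial simulations that all write to the same log and output files at the same time, which is exactly the kind of situation that can garble the thermo output.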
There is nothing "being compiled" on the cluster. You are using pre-compiled executables. In this case, it looks like the OpenMP support for KOKKOS has been omitted from that package. That should be remedied soon.
You should be doing two things:
talk to somebody who has experience with your local cluster and get some training/understanding of how the different kinds of parallelism work and how you need to compile and run parallel applications for them.
compile LAMMPS from source (see the sketch below). The most efficient way to parallelize LAMMPS is MPI parallelization (see the documentation), and running efficiently with MPI on a cluster requires compiling LAMMPS locally from source code. Depending on what your input is, what kind of hardware you are running on, and what overall performance you expect and need, compiling with KOKKOS may not be needed.
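As a rough sketch only (the exact package selection depends on your input: PKG_REAXFF is assumed here because your input apparently uses ReaxFF, and the KOKKOS options are only needed if you want the -sf kk acceleration):

git clone -b stable https://github.com/lammps/lammps.git
cd lammps
mkdir build
cd build
cmake ../cmake -D BUILD_MPI=yes -D PKG_REAXFF=yes -D PKG_KOKKOS=yes -D Kokkos_ENABLE_OPENMP=yes
cmake --build . -j 8
mpirun -np 16 ./lmp -in in.lammps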
Thank you very much for your detailed explanations!
I now have a better understanding that the pre-compiled executable does not require any extra libraries or packages to be compiled.
Previously, when I started learning LAMMPS, I compiled it locally with only a few packages included. However, I now realize I should obtain a deeper understanding of how different parallelization methods work. At that time, I had only tried MPI and Intel-based parallelization.
Having recently returned to LAMMPS, I see the advantage of using the pre-compiled executable: it allows me to try new features without the immediate need to handle compilation. I plan to use the pre-compiled executable for initial tests and, if those are successful, eventually compile LAMMPS from source.
I'm wondering if there is an easy way to check which types of parallelism are supported by the pre-compiled executable. I used lmp -help in the command line, and it displayed the following information:
Yes, lmp -help is the way. It will show as many of the compilation settings as the LAMMPS developers have figured out how to display. Some are missing, but those would require a lot of effort to add.
Yes, that is one of the motivations to provide those. The other is to have a correctly compiled reference in case there are issues that require confirmation before investigating further.
You may also consider using the GUI version, since that makes experimentation even easier and includes options for instant visualization and for monitoring and plotting of thermo data: 8.6.3. Using LAMMPS-GUI (LAMMPS documentation)
To add to @akohlmey's detailed answer, a quick and easy way is also to check the number of MPI ranks used in your simulation, which is indicated in the log file header.
Both your log file headers indicate:
Reading data file ...
orthogonal box = (0 0 0) to (48.050272 83.225513 65.581266)
1 by 1 by 1 MPI processor grid
reading atoms ...
30000 atoms
read_data CPU = 0.317 seconds
This indicates that the simulation writing to the file was using only one MPI process. If using 16 MPI processes, the product of the three MPI grid numbers must be 16. If using multi-replica methods such as parallel tempering, you should also take into account the number of simulations running in parallel. The number of OMP threads is also indicated in the header.
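For example, a run started with 16 MPI processes would instead show something like

2 by 4 by 2 MPI processor grid

(the exact decomposition depends on the box shape), while the 1 by 1 by 1 grid above means a single MPI process was used.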