Thermo Output Missing Line Breaks in LAMMPS Log File

Hello LAMMPS users,

I have encountered an issue in my simulation where the thermo output in the log file is missing some line breaks.

In my setup, fix nve is run first, followed by fix nvt. For the nve section, the first missing line break appears at step 300 and then every 1800 steps after that. For the nvt section, the first missing line break occurs at step 500 and, similarly, one line break is missing every subsequent 1800 steps.
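Schematically, the relevant part of the input looks like the sketch below (the fix IDs, group name, thermostat parameters, and run lengths are placeholders, not the actual values from my file):

```
fix             1 all nve
run             20000
unfix           1

fix             2 all nvt temp 300.0 300.0 100.0
run             20000
```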

This causes a problem when I try to use Python tools to plot the data, as the missing line breaks result in errors.

Could you please tell me how I can fix this problem? I cannot upload the attached files since I am a new user. I can provide more details or original files if needed.

Hello,

Does your issue resemble the one discussed here?

If so, you might consider trying the solution proposed by @akohlmey.

Simon


Hello Simon,
Thank you very much for your quick reply!
I think it is a similar issue. I will try the solution proposed in that post.
I hope it works.

Hello Simon,

I tried the solution proposed in that post by adding thermo_modify flush yes to my input file. However, this seems to have resulted in the complete absence of line breaks in the thermo output in the log file.

  Step          Temp          TotEng         Press     
         0   10            -4298152.1      297960.9             100   165.86836     -4298163.5     -9068.9888            200   175.98555     -4298152.6     -20827.329            300   12.120287     -4298120.6     -19592.66             400   159.79953     -4298134.7     -21581.879            500   186.71937     -4298124.1     -22518.775            600   15.893855     -4298093.8     -20448.835            700   152.66135     -4298109.4     -15994.69             800   196.29848     -4298097.8     -18903.742            900   20.828081     -4298069.3     -18533.638           1000   144.52276     -4298086.6     -16262.992           1100   205.22177     -4298074.2     -19865.128           1200   26.342074     -4298046.6     -18080.256           1300   136.10502     -4298064       -15558.561           1400   212.6433      -4298050.8     -22192.174           1500   32.561202     -4298025.2     -20084.663           1600   127.46348     -4298043.5     -18671.823           1700   219.41321     -4298029.6     -20214.147           1800   39.994219     -4298004.5     -19801.283           1900   118.45303     -4298023.2     -15936.39            2000   225.62305     -4298008.2     -21376.627           2100   47.943471     -4297985.3     -18000.88            2200   109.67844     -4298005.1     -16369.708           2300   230.04486     -4297989.5     -25690.288           2400   56.681343     -4297967.6     -21906.934           2500   100.86031     -4297986.7     -17763.57            2600   233.70914     -4297969.7     -15677.985           2700   65.818773     -4297948.2     -22079.926           2800   92.659969     -4297967.6     -15208.216           2900   236.17988     -4297950.1     -22301.937           3000   76.325915     -4297930.4     -19450.273           3100   84.492582     -4297950.5     -13877.422           3200   237.27521     -4297931.8     -22835.096           3300   87.150377     -4297913       -15527.461           3400   77.75255      -4297932.2     -20180.332           3500   237.86476     -4297912.6     -21179.867           3600   97.909727     -4297895.4     -20479.597           3700   70.367693     -4297914.9     -13051.487           3800   236.80684     -4297894       -27720.972           3900   109.52448     -4297878.3     -23234.689           4000   64.192869     -4297897.8     -9501.2369 
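For reference, the thermo-related settings in my input are roughly as follows (the interval and columns are inferred from the output above; the exact lines in my file may differ slightly):

```
thermo          100
thermo_style    custom step temp etotal press
thermo_modify   flush yes
```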

I am using the pre-compiled executable downloaded from the LAMMPS official website. This problem seems to happen only when I use this particular input file. I did not see the same problem with another input file on the same cluster.

Any further suggestions would be greatly appreciated!

Please post the input file here. If you cannot upload it, you can paste the raw input text and use backticks to enable verbatim formatting, like below:

```
${verbatim formatting, tada!}
```

Hello,
Thanks very much for the reply!
Please find attached the input file and log file. The log file is not complete because I have since started a new simulation.

readdata.in.lammps (1.8 KB)
log.lammps (8.7 KB)

The line breaks are missing from the thermo output at steps 300, 2100, and 3900 in the log file.
Any suggestions would be greatly appreciated!

This log does not show any use of thermo_modify flush yes.

Your post is missing the data and ReaxFF potential file, so it is not possible to reproduce your calculation.

This is contrary to any expectation and observation. Those executables are used by many people and nobody has reported a behavior like this. The most likely explanation (same as for the situation in the topic you were referred to) is a flaw in the setup, configuration, or implementation of the (networked?) file system you are running on. There is next to nothing that can be done on the LAMMPS side about such issues. What you didn't mention yet was using the -nb command-line flag, which should turn off any I/O buffering in the C-library implementation of the stdio library that LAMMPS uses for file I/O.

Another potential issue could be that you are running the non-MPI executable with srun/mpirun/mpiexec in parallel. That would be a mistake, and then you would have multiple processes all writing to the same files, which can have all kinds of unexpected consequences. The precompiled Linux executables only support parallelization via threads, either with OPENMP using OMP_NUM_THREADS=## and -sf omp, or with KOKKOS using OMP_NUM_THREADS=## and -kokkos on t ## -sf kk, where '##' stands for the number of threads (for ReaxFF the latter mode is highly recommended).
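For example, assuming the precompiled serial executable is called lmp, the input file is in.lammps, and 8 threads are wanted (all placeholders for your own setup):

```
# OPENMP package with 8 OpenMP threads
OMP_NUM_THREADS=8 lmp -sf omp -in in.lammps

# KOKKOS package with 8 OpenMP threads (recommended for ReaxFF)
OMP_NUM_THREADS=8 lmp -kokkos on t 8 -sf kk -in in.lammps

# add -nb to either command to turn off buffered file I/O, e.g.
OMP_NUM_THREADS=8 lmp -nb -sf omp -in in.lammps
```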


Dear Axel,

Thank you very much for your comments and guidance!

Sorry for the confusion; I uploaded my previous input and log files. If thermo_modify flush yes is added to the input file, the line breaks in the thermo output are all absent, as shown in this log file:
log.lammps (7.9 KB).

Thanks again for your help. The issue was indeed related to my use of mpirun for parallel execution. This is already mentioned on the doc page; sorry I did not notice it before. Now I am using the OpenMP settings, and it works fine.

However, when I tried to use KOKKOS acceleration with env OMP_NUM_THREADS=16 lmp -kokkos on t 16 -sf kk -in in.lammps, I got the following error message:

LAMMPS (29 Aug 2024 - Update 1)
KOKKOS mode with Kokkos version 4.3.1 is enabled (src/KOKKOS/kokkos.cpp:72)
ERROR: Multiple CPU threads are requested but Kokkos has not been compiled using a threading-enabled backend (src/KOKKOS/kokkos.cpp:197)
Last command: (unknown)

It seems this is because KOKKOS is not compiled on the cluster. Could you please tell me whether this means I need to download the LAMMPS source code and compile KOKKOS in the directory ${LAMMPSDIR}/src/KOKKOS/?

Thank you very much for your time and assistance!

Using mpirun or mpiexec or srun requires that LAMMPS has been compiled with MPI support. Since there are several different MPI libraries that are compatible only at the source code level, it is not possible to create a portable MPI-parallel binary. Thus it is a mistake to use mpirun or mpiexec, since those will result in running multiple identical simulations concurrently in the same folder.

There is nothing "being compiled" on the cluster. You are using pre-compiled executables. In this case, it looks like the OpenMP support for KOKKOS has been omitted from that package. That should be remedied soon.

You should be doing two things:

  1. Talk to somebody who has experience using your local cluster and get some training in, and understanding of, how different kinds of parallelism work and how you need to compile and run parallel applications for those.
  2. Compile LAMMPS from source. The most efficient parallelization of LAMMPS is MPI parallelization (see the documentation), and running efficiently with MPI on a cluster requires compiling LAMMPS locally from source code (see the example after this list). Depending on what your input is, what kind of hardware you are running on, and what kind of overall performance you expect and need, compiling with KOKKOS may not be needed.
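Once a locally built, MPI-enabled executable is available, a typical launch would look roughly like this (the executable name, input file name, and rank count are placeholders; your cluster may require srun or a batch script instead):

```
# run with 16 MPI ranks
mpirun -np 16 lmp -in in.lammps
```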

Dear Axel,

Thank you very much for your detailed explanations!

I now have a better understanding that the pre-compiled executable does not require any extra libraries or packages to be compiled.

Previously, when I started learning LAMMPS, I compiled it locally with only a few packages included. However, I now realize I should obtain a deeper understanding of how different parallelization methods work. At that time, I had only tried MPI and Intel-based parallelization.

Having recently returned to LAMMPS, I see the advantage of using the pre-compiled executable: it allows me to try new features without immediately having to handle compilation. I plan to use the pre-compiled executable for initial tests and, if those are successful, eventually compile LAMMPS from source.

I'm wondering if there is an easy way to check which types of parallelism are supported by the pre-compiled executable. I used lmp -help in the command line, and it displayed the following information:

KOKKOS package API: Serial
KOKKOS package precision: double
Kokkos library version: 4.3.1
OPENMP package API: OpenMP
OPENMP package precision: double
OpenMP standard: OpenMP 4.5
INTEL package API: OpenMP
INTEL package precision: single mixed double
INTEL package SIMD: not enabled

Does this output mean that the executable only supports OpenMP-based parallelization if it explicitly states XXX package API: OpenMP?

Thank you very much for your time and support!

Yes.

You can also see that MPI is not supported from this:

MPI v1.0: LAMMPS MPI STUBS for LAMMPS version 29 Aug 2024

Otherwise, it would show which MPI implementation and version is in use, for example:

MPI v4.0: MPICH Version:      4.1.2
MPICH Release date: Wed Jun  7 15:22:45 CDT 2023
MPICH ABI:          15:1:3

Yes, lmp -help is the way. It will show as much of the compilation settings as the LAMMPS developers have figured out how to display. Some are missing, but those would require a lot of effort to add.

Yes, that is one of the motivations to provide those. The other is to have a correctly compiled reference in case there are issues that require confirmation before investigating further.

You may also consider using the GUI version, since that makes experimentation even easier and includes options for instant visualization and for monitoring and plotting of thermo data. 8.6.3. Using LAMMPS-GUI — LAMMPS documentation

To add to @akohlmey's detailed answer, another quick and easy way is to check the number of MPI ranks used in your simulation, which is indicated in the log file header.

Both your log file headers indicate:

Reading data file ...
  orthogonal box = (0 0 0) to (48.050272 83.225513 65.581266)
  1 by 1 by 1 MPI processor grid
  reading atoms ...
  30000 atoms
  read_data CPU = 0.317 seconds

This indicates that the simulation writing to the file was using only one MPI process. If you had been running with 16 MPI processes, the product of the three grid numbers would be 16. If using multi-replica methods such as parallel tempering, you should also account for the number of partitions running in parallel. The number of OpenMP threads is also indicated in the header.
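A quick way to check this in an existing log file (assuming the default log file name log.lammps) is:

```
# print the processor-grid line; the product of the three numbers
# is the number of MPI ranks that wrote this log file
grep "MPI processor grid" log.lammps
```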

Dear Axel,

Thank you very much for your detailed reply!

I very much appreciate all your help and time. Your comments are extremely valuable.

Thanks again!

Hello,

Thank you very much for the information!