How to debug a LAMMPS input script that generates no log or output files

I generated a data file and an input file using EMC for a LAMMPS T_g simulation, which I would like to run on a cluster.

I am trying to run a single simulation just to test things out.

However, the simulation just runs forever on the HPC and never generates an output file or a log.lammps file that I could use to debug, because I have to cancel the job after it has run for an unreasonably long time.

I have no idea what to do to solve the issue in this scenario, since there’s literally nothing for me to go on. Is there anything in a LAMMPS input file that could cause this behaviour? I am pretty new to LAMMPS, but every error I have encountered so far just aborts the simulation and prints a handy little error message in a log file or to the output console.
What kind of debugging protocols can I use when there is no output at all?

RunJob.sh (600 Bytes)
vlmp.data (1.6 MB)
vlmp.in (4.1 KB)
vlmp.params (16.7 KB)

It is often easier to first run a test in serial on a local machine, even if it is slow.
Since you generate the data file yourself, you could also generate a smaller version for testing.
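
For example, a quick serial test could look something like this (assuming a serial lmp binary is on your PATH and the vlmp.* files from above are in the current directory):

lmp -in vlmp.in -log log.local_test    # run in serial; -log keeps the test log separate

Any error message then shows up directly in your terminal.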

This is often due to running in parallel, where output and log files are block buffered for performance reasons. So you may need up to 8192 bytes of output before anything is committed to disk.
You can (temporarily) work around this with the “-nb” command-line option to LAMMPS, which tries to turn buffering off, but the MPI library takes precedence and might buffer anyway.
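
For example (just a sketch; your MPI launcher and task count will differ):

mpirun -np 8 lmp -in vlmp.in -nb    # -nb / -nonbuf turns off LAMMPS-side buffering; MPI may still buffer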

As mentioned above, the best way to go about this is to run locally. If you are uncomfortable compiling LAMMPS, you can download pre-compiled non-MPI binaries from GitHub. They are good for debugging such cases and easy to use (just unpack and then use the “lmp” binary).
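
Roughly like this (the archive and folder names below are placeholders; use whatever the release page actually provides):

tar -xzvf lammps-linux-x86_64-*.tar.gz    # unpack the downloaded static package (name may differ)
./lammps-static/bin/lmp -in vlmp.in       # path to the lmp binary inside the unpacked folder may differ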

1 Like

This is the expected behavior. After an error LAMMPS is not in a shape to continue.

Another thought: you may want to try out LAMMPS-GUI. It should not crash on errors (if it does, it is most likely a bug and you should let me know), and it can report the line of the input that was last processed. You can then quickly open the corresponding documentation from the manual with a right click on the failing command and by selecting the corresponding option from the context menu.

1 Like

You can also use the command-line option -echo both, which will print the last command that it failed on.
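
For example (file names are placeholders):

lmp -in vlmp.in -echo both -log log.debug    # echoes every input line as it is read, to both screen and log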

1 Like

here’s my 2 :coin:

if you are submitting a debug job to a cluster using SLURM, then using the --unbuffered option for srun and the -nonbuf option for lmp:

sbatch --account=foobar --ntasks-per-node=40 --partition=debug --time=1:00:00 --wrap "srun --unbuffered lmp -in quux.in -nonbuf" --nodes=4

will disable buffering so you won't miss errors or warnings that didn't get flushed to output when your job crashed.

the --partition=debug option is specific to the cluster I'm working with: jobs are limited to 4 nodes and 3 hours, but they run on reserved debug nodes with very short waiting times. consult your local cluster documentation and support staff if necessary.

remember to turn off the --unbuffered and -nonbuf options once you are running your job in production, otherwise the constant unbuffered I/O will destroy any computational performance.
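
in case you prefer a batch script over --wrap, roughly the same thing would look like this (the account and partition names are just the placeholders from the one-liner above):

#!/bin/bash
#SBATCH --account=foobar            # placeholder account
#SBATCH --partition=debug           # cluster-specific debug partition
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=40
#SBATCH --time=1:00:00

srun --unbuffered lmp -in quux.in -nonbuf    # unbuffered output, for debugging only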

2 Likes

I am assuming you haven’t done the following since you have not given any info otherwise (if you have, apologies, but we have to assume you’re new to things unless proven otherwise):

  • Are you sure you need to set those OMPI settings manually?
  • Are you sure you should mpirun instead of srun?
  • Are you sure you can submit a SLURM script and rely on post-script variables $1 and $2 behaving predictably? (It might be doable – still, I’ve never tried to use those.)
  • Have you checked your SLURM --output and --error files? Those should preserve all terminal output, including MPI crash messages and LAMMPS crash messages like PPPM out of range. If those aren't being written, or have gone somewhere unexpected, that's an issue with either the cluster or how you're using it, not LAMMPS. (A minimal example of setting these explicitly is sketched right after this list.)
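
For reference, a minimal way to set these explicitly in the job script (the file name pattern is just an example; %j expands to the job ID):

#SBATCH --output=slurm-%j.out    # stdout of the job, including LAMMPS screen output
#SBATCH --error=slurm-%j.err     # stderr, including MPI and LAMMPS crash messages
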
1 Like

This is literally one of the most useful things I have found all summer, and it probably saved me hours and hours of agony. Thanks so much for the fast response!
It was indeed something to do with the buffer and numerous errors were going unseen.

@srtee Thank you for assuming my ignorance. I am indeed very new :slight_smile:

For the first 3 bullets, I am not sure at all. I am trying to use settings similar to the example simulation that my HPC administrators sent me. They also sent me the code at the bottom of the SLURM input script and told me to include it for restarting jobs. It doesn't seem to make any difference whether I include it or not, though.

For the last point, yes, I did check for those. They weren’t being written to, likely due to some buffer issue as @akohlmey mentioned.
I was able to chase down all the errors in my simulation and also figured out that the LAMMPS installation on my HPC was missing some stuff. Since then, I have been migrated to a new cluster, built my own LAMMPS there, and fixed all the buffer settings for debugging. I am now able to see the output, error, and log files, and I have a simulation that I know runs correctly on my local LAMMPS.

Therefore, I believe my issue has something to do with LAMMPS restarting for no reason, or maybe with the LAMMPS/MPI interaction?

Here is an example of a simulation with all of its files that runs on my local machine but behaves weirdly with mpirun
VLmp Cluster.zip (949.6 KB)
You can see from the output file that it reads the data something like 120 times and keeps going back to old time steps over and over. It does finish, but it takes the same amount of time as running locally (because of the restarts/backtracking, I guess?).

I have another example with an extremely simple input script that used to run correctly on the old LAMMPS installation but now behaves the same way, which makes me suspect that the problem is not my input script.
LAMMPS-test 15-Aug.zip (4.6 MB)

Any suggestions for how to proceed?
(It’s hard to tell if this is a different issue or the same issue as before, so let me know if I should make a new thread)

It is a different issue.

When you are running LAMMPS, always pay attention to the log. There is lots of information that can confirm if things are done correctly.

For example, your cluster log file has:

LAMMPS (27 Jun 2024 - Development - patch_27Jun2024-612-gaa0b6c47c2)
  using 4 OpenMP thread(s) per MPI task
Reading data file ...
  orthogonal box = (0 0 0) to (39.753715 39.753715 39.753715)
  1 by 1 by 1 MPI processor grid

But all output is repeated 4 times. That means you are not initializing MPI correctly: you are running 4 independent copies of LAMMPS, each with 1 MPI task and 4 OpenMP threads.

Your job script requests:

#SBATCH --ntasks=60
#SBATCH --cpus-per-task=1     

But also has:

export OMP_NUM_THREADS=4

and

mpirun -n 60 lmp -i vlmp.in

This is blatantly inconsistent and a general big mess. You need to properly learn how to submit and run batch jobs and to make the settings consistent. Most of this is outside the scope of this forum.

Some points:

  • with SLURM you usually use srun instead of mpirun/mpiexec
  • you don't need to specify the number of MPI tasks and OpenMP threads on the command line, since you already did that with the resource request. At most you need to set the OpenMP environment variables and process affinity
  • you are explicitly telling OpenMPI to bind processes to cores, but you also request OpenMP threads. That will slow you down, since all the OpenMP threads of a task are then confined to a single CPU core, which is confirmed by the CPU usage being < 100%. With 4 threads it should be well over 100%.
  • you are set up to use OpenMP threads, but you are not using any multi-thread-enabled styles in LAMMPS
  • your locally run job is the same kind of mess, only worse, since it runs the same calculation 60 times in full.

You need help from someone local who is familiar with compiling and running parallel jobs with MPI and with running jobs on a SLURM cluster.
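
To give just a rough idea (not a recipe for your specific cluster, and leaving out any account/partition/module lines your site requires), a consistent pure-MPI version of such a job script could look something like this:

#!/bin/bash
#SBATCH --ntasks=60
#SBATCH --cpus-per-task=1
#SBATCH --output=slurm-%j.out
#SBATCH --error=slurm-%j.err

export OMP_NUM_THREADS=1    # no OpenMP threading, since no multi-thread styles are used
srun lmp -in vlmp.in        # let SLURM launch the 60 MPI tasks requested above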

1 Like

You can get a very brief overview of using MPI and OpenMP here: density functional theory - What factors could cause a calculation to run successfully on a laptop but encounter issues on an HPC system? - Matter Modeling Stack Exchange

1 Like