Reading an external file in a loop

Hi all,

I want to write an input script to achieve the following:
My system consists of two atom types, A and B, and my initial structure file contains only A atoms. In each iteration I want to replace one of the A atoms with a B atom and then perform some actions. The atom IDs of the A atoms that I intend to replace with B are saved in a file named “gb.dat”, which contains a single column of numbers. The following is the input script I have written:

variable         site file gb.dat
variable         segid equal next(site)

label            loop
variable         a loop 1 73996

read_data        minimized.data extra/atom/types 1
set              atom ${segid} type 2

clear
next             a
jump             SELF loop

This input script currently has two issues. First, it only reads the even-numbered lines from gb.dat, and I suspect the problem lies with next(site). Second, I need to replace a total of 73996 A atoms (which means gb.dat contains that many lines), and I was wondering whether LAMMPS can determine the number of lines in gb.dat automatically, so that I don’t have to change it manually every time.

Thanks for any help,
jiang

You can correct and simplify your input like this:

variable         site file gb.dat

label            loop

read_data        minimized.data extra/atom/types 1
set              atom ${site} type 2

clear
next             site
jump             SELF loop
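
This also takes care of your second question: since site is a file-style variable, next reads one more line of gb.dat on every pass through the loop, and once the file is exhausted it deletes the variable and skips the following jump command, so the loop terminates on its own without you having to know how many lines gb.dat contains.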

Thanks, it really works!!

Hi Axel,

Previously I ran this input on the CPU and it worked fine. However, when I switched to the GPU, it threw an error during the 69th loop iteration:

- Using acceleration for eam/alloy:
-  with 8 proc(s) per device.
--------------------------------------------------------------------------
Device 0: Tesla V100-SXM2-32GB, 80 CUs, 29/32 GB, 1.5 GHZ (Mixed Precision)
--------------------------------------------------------------------------

Initializing Device and compiling on process 0...Done.
Initializing Device 0 on core 0...Done.
Initializing Device 0 on core 1...Done.
Initializing Device 0 on core 2...Cuda driver error 709 in call at file 'geryon/nvd_texture.h' in line 84.
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[warn] Epoll MOD(1) on fd 35 failed.  Old events were 6; read change was 0 (none); write change was 2 (del): Bad file descriptor
[warn] Epoll MOD(4) on fd 35 failed.  Old events were 6; read change was 2 (del); write change was 0 (none): Bad file descriptor

I think it may be related to OpenMPI or to the pair potential setting? This is what I am using:

pair_style eam/alloy/gpu
mpirun -np 8 lmp_mpi -sf gpu -pk gpu 1 -in solute.in

How can I fix this problem? I have already searched the mailing list for related issues, but did not find a solution that applies to my case.

What LAMMPS version are you using?

lammps/3Mar20

I am checking whether the problem is caused by running out of memory (OOM). Maybe one GPU card with 32 GB is not enough for my system of 500,000 atoms.

I assume this is for the same kind of input with a loop where you use read_data and clear to initialize new systems.

There are known problems with older versions of LAMMPS when trying to re-initialize the GPU support in such loops. This is a very complex issue, since several moving parts within the GPU package are involved. There have been gradual improvements over the years, so I would recommend installing the latest feature release, 28 March 2023, and trying again.

Thanks for your suggestion. If I manage to fix this problem in some other way, without updating LAMMPS, I will report back.

32 GB is plenty, and you won’t run out of GPU memory. I just ran the EAM benchmark input from the bench folder with -v x 3 -v y 3 -v z 3, which generates 864000 atoms. I have a simple Intel i5 CPU with a built-in GPU, and it runs this without blinking an eye. My desktop has 32 GB of RAM shared between CPU and GPU. The GPU package reports a usage of 180 MB per process (I used 4), which corresponds to less than 1 GB in total.

No, upgrading LAMMPS is the only way to improve on this kind of situation.

Indeed, I tracked the memory usage during the computation, and 32 GB is sufficient. It seems I have resolved the issue, as I have now gone past the previous iteration count without any crashes. My solution was to set

export OMP_NUM_THREADS=8

By default it was set to 1; I compiled LAMMPS with OpenMPI. I hope this will be helpful for others encountering the same problem. LAMMPS is powerful software, and I should examine these environment variables more carefully. Thank you very much for the timely assistance.

This is NOT a solution; it probably just bypasses the crash by chance due to a change in memory usage patterns. You can still have corrupted memory, and thus incorrect results, it is just that the CUDA runtime does not detect it with your hack.

For anybody using the GPU package with loops, the LAMMPS developers strongly urge updating to a LAMMPS feature release of 28 March 2023 or later, due to the bug fixes included in that version. In fact, we have been working on identifying and fixing bugs triggered by such loops for several years. This is non-trivial and thus a slow process.

The only clean way to avoid this issue is to avoid the loop in the input script and instead write a program that will generate many input scripts (one per loop iteration) and then run them individually with mpirun.
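
In case it is useful, here is a minimal sketch of such a generator in Python, assuming the same file names as above (gb.dat, minimized.data); the solute_*.in naming scheme and the per-iteration commands in the template are placeholders you would adapt:

#!/usr/bin/env python3
# Minimal sketch: write one LAMMPS input script per atom id listed in gb.dat,
# so each substitution runs as a fresh, independent LAMMPS/GPU job.
# The template commands and the output file names are placeholders.

template = """read_data        minimized.data extra/atom/types 1
set              atom {atom_id} type 2
# ... per-iteration commands go here ...
"""

with open("gb.dat") as f:
    atom_ids = [line.split()[0] for line in f if line.strip()]

for i, atom_id in enumerate(atom_ids):
    with open(f"solute_{i:05d}.in", "w") as out:
        out.write(template.format(atom_id=atom_id))

print(f"wrote {len(atom_ids)} input scripts")

Each generated script can then be run as its own job, e.g. mpirun -np 8 lmp_mpi -sf gpu -pk gpu 1 -in solute_00000.in, so the GPU support is initialized exactly once per run instead of being re-initialized inside a loop.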

Thank you very much for your understanding and suggestions. I will try your clean approach.