Running in parallel when using loops in input file

Hi there, I just rebuilt LAMMPS with MPI support and I’m playing around with it a bit. I’m using the following commands to run my script on our department’s mini cluster:

export OMP_NUM_THREADS=4
srun -N 5 --ntasks-per-node=3 lmp_mpi -in SBE_CuAg111_CuAdatom.in &

I have two loops in my script: the inner loop runs 500 times and the outer loop 101 times. Each outer-loop iteration writes a data file containing 500 lines of data (one line from each of the 500 inner-loop iterations).

With my current run command, each file contains almost 8000 lines of data, instead of the expected 500. I’m assuming this is caused by the threading. Am I better off just setting the threads to 1?

I appreciate any help!

Monica

You are assuming wrong. Threading has no impact on I/O, and since you are not using any suffixes on the command line, you are unlikely to be using the thread support in LAMMPS anyway.
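(For reference: to actually use the OpenMP-threaded styles, they have to be requested explicitly, e.g. with the omp suffix and package flags on the command line. The thread count below is just illustrative; whether this is worthwhile depends on your hardware and pair style.)

```
export OMP_NUM_THREADS=4
srun -N 5 --ntasks-per-node=3 lmp_mpi -sf omp -pk omp 4 -in SBE_CuAg111_CuAdatom.in
```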

Yeah, I just realized that threading is not causing the problem when I switched back to 1 thread: my files now have 10,000 lines each. When running this script in serial the output is fine, 500 lines per output file. LAMMPS and Linux are still very new to me, so I’m making a lot of silly mistakes out of ignorance.

label loop2
variable CCT1 loop 0 100
variable CCT equal ${CCT1}/100
variable Conc equal ${CCT}*100
variable ao index 3.615 3.621 # I only added 2 ao's for troubleshooting purposes

label loop1100
variable LP1100 loop 1 500


print "${xcor} ${ycor} ${No} ${Uperfect} ${Usurface} ${Conc} ${Esurface}" append SBE_Cu_${Conc}_Ag_(111)Cu.txt screen no

next LP1100
jump SELF loop1100

next CCT1
next ao
jump SELF loop2

And then I run this using

srun -N 5 --ntasks-per-node=3 lmp_mpi -in SBE_CuAg111_CuAdatom.in &

You need to check the log.lammps file to see whether LAMMPS is actually running in parallel or whether you are just running multiple serial runs with the same input.

How to do this correctly is not really a LAMMPS problem; it requires correct use of the MPI launch commands and may also require loading the proper environment module or similar. This is all very specific to the machine you are running on, so my recommendation is to talk to the user support for that machine, or to somebody with more experience than you in running in parallel on it.
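One quick check (the numbers below are illustrative, not from your run): the timing summary at the end of each run in log.lammps reports how many MPI processes the run actually used. A properly launched parallel run prints something like

```
Loop time of 12.3 on 15 procs for 500 steps with 1372 atoms
```

whereas N independent serial copies of the same input each report "on 1 procs" and all append to the same output files, which would explain the inflated line counts.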

Ok, I got the script to run and output the correct number of lines in the txt files. Now I have a new problem. The inside loop does the following:

clears the system
creates the simulation box
minimizes
computes the initial PE (PEi)
adds an atom at a random coordinate on the surface
minimizes again
computes the final PE (PEf)
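The steps above can be sketched as a LAMMPS input fragment. This is only a sketch: the EAM potential file name (CuAg.eam.alloy), box dimensions, adatom height, random seeds, and minimize tolerances are assumptions, not taken from the original script:

```
clear
units metal
lattice fcc 3.615
region box block 0 10 0 10 0 10
create_box 2 box
create_atoms 1 box
pair_style eam/alloy
pair_coeff * * CuAg.eam.alloy Cu Ag       # hypothetical potential file
minimize 1.0e-6 1.0e-8 1000 10000
variable PEi equal $(pe)                  # snapshot current PE as a constant
variable xcor equal random(0,10,74581)    # random surface coordinates
variable ycor equal random(0,10,93817)
create_atoms 1 single ${xcor} ${ycor} 10.5
minimize 1.0e-6 1.0e-8 1000 10000
variable PEf equal $(pe)
```

The $(...) immediate evaluation freezes the potential energy at that point, so PEi and PEf do not silently track the current state when they are printed later.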

All PEi outputs are consistent, but the PEf outputs are not as expected. Is this due to partitioning? Currently I don’t have any commands in my script that specify partitioning, since I’m still trying to understand it. Is there any advice you can lend me on how to partition this system?

(You mentioned before that the Python code included in the script can be replaced with equal-style variables, but I haven’t gotten around to doing that yet.)
SBE_CuAg111_CuAdatom.in (3.6 KB)

For the sake of future LAMMPS users who come across a similar problem and search the forum, it would be helpful if you posted how you got it to run as expected and what the actual issue was.

If you have a new problem, you should create a new topic with a suitable subject line describing it. This also helps others with similar problems learn from it later. After all, that is part of the purpose of a public forum: the advice given is not only for you but for the whole community.