Dear all:
I have come across a small problem while writing a real-time visualization script for my LAMMPS runs. I followed the example in the LAMMPS source tree, like this:
# ubuntu 18.04, python 3.8
# lammps (30 Oct 2019)
from mpi4py import MPI
from lammps import lammps
import time

infile = 'in.melt'  # example input script name; use your own
nfreq = 10          # thermo output / run chunk interval (steps)
nsteps = 100        # total number of steps to run

me = MPI.COMM_WORLD.Get_rank()
nprocs = MPI.COMM_WORLD.Get_size()

lmp = lammps()
lmp.file(infile)
lmp.command(f'thermo {nfreq}')

# 0-step run to initialize the system and get an initial value
lmp.command('run 0 pre yes post no')
value = lmp.get_thermo('temp')
ntimestep = 0
xaxis = [ntimestep]
yaxis = [value]

if me == 0:
    print('start execution')

# advance the simulation in chunks of nfreq steps and collect data
while ntimestep < nsteps:
    lmp.command(f'run {nfreq} pre no post no')
    ntimestep += nfreq
    value = lmp.get_thermo('temp')
    xaxis.append(ntimestep)
    yaxis.append(value)
    if me == 0:
        print(xaxis[-1], yaxis[-1])
    time.sleep(10)

# final 0-step run to trigger the post-run output
lmp.command('run 0 pre no post yes')
My expectation was that every time a new run command starts, the thermo output is printed once. When I run it with "python3 test.py" it works, but when I run it in parallel with "mpirun -np x python3 test.py" it does not: nothing is printed until all run commands have finished, and then all the output appears at once:
Step Temp E_pair E_mol TotEng Press
70 0.95902458 -2.1166806 0 -1.1587216 5.2854327
80 0.9681728 -2.1243507 0 -1.1572536 5.2783599
Loop time of 0.000994325 on 4 procs for 10 steps with 900 atoms
Step Temp E_pair E_mol TotEng Press
80 0.9681728 -2.1243507 0 -1.1572536 5.2783599
90 1.0031444 -2.158601 0 -1.1565712 5.1192585
Loop time of 0.00101799 on 4 procs for 10 steps with 900 atoms
Step Temp E_pair E_mol TotEng Press
90 1.0031444 -2.158601 0 -1.1565712 5.1192585
100 0.98477194 -2.1394564 0 -1.1557786 5.2407334
Loop time of 0.000956833 on 4 procs for 10 steps with 900 atoms
Step Temp E_pair E_mol TotEng Press
100 0.98477194 -2.1394564 0 -1.1557786 5.2407334
Loop time of 7.689e-06 on 4 procs for 0 steps with 900 atoms
0.0% CPU use with 4 MPI tasks x 1 OpenMP threads
This is expected behavior and has nothing to do with LAMMPS; it is a "feature" of your MPI library.
When running in parallel, your LAMMPS executable (or Python script) is no longer connected to the terminal console, but to the mpirun command through a pipe. The standard behavior of the C I/O library is that output to a console is line buffered, while all other output is block buffered, and a block is typically 4 kB. So running under mpirun makes your screen output block buffered.
Some MPI libraries will, in some cases, use tricks to change this back, but in general that is undesirable: synchronizing output on a per-line basis makes things much less efficient, particularly for parallel execution. You will still get the complete output.
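If the goal is just to see the script's own per-step printout promptly (LAMMPS's thermo output will stay block buffered either way), one workaround is to flush the Python side explicitly. A minimal sketch, assuming the rank-0 print from the script above:

# flush=True pushes this line through the pipe immediately,
# instead of waiting for a 4 kB buffer to fill
if me == 0:
    print(xaxis[-1], yaxis[-1], flush=True)

Alternatively, launching the script as "mpirun -np x python3 -u test.py" disables buffering for the Python interpreter's own stdout.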
Sorry to bother you again. Is there any possibility to create a parallel LAMMPS instance from a serial Python script? In other words, I want to start my script with just "python client.py", have it launch a simulation equivalent to "mpirun -np x", and get back an instance like "lmp = lammps()" for further data processing.
I looked through many forums and the mpi4py manual; it seems mpi4py's dynamic process management could achieve this, but it looks rather complex. So I hesitantly ask you again for a simpler solution!
No. Anything making use of MPI must be launched by mpirun. You have to use MPI programming inside the Python script to have it skip the parts that you don't want to execute on all MPI ranks. All calls to the LAMMPS module have to be made by all MPI processes.
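A minimal sketch of that pattern, assuming mpi4py and a hypothetical input file in.melt: every rank makes the same calls into the LAMMPS module, and only the rank-dependent parts are guarded by a rank check.

from mpi4py import MPI
from lammps import lammps

me = MPI.COMM_WORLD.Get_rank()

# these calls must be executed by ALL MPI ranks
lmp = lammps()
lmp.file('in.melt')            # hypothetical input file
lmp.command('run 100')
temp = lmp.get_thermo('temp')

# rank-dependent work (printing, plotting, ...) only on rank 0
if me == 0:
    print('temperature:', temp)

The script still has to be launched with mpirun, e.g. "mpirun -np 4 python3 script.py".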
The only simpler solution would be to not use MPI parallelism at all, but OpenMP threading only, through the KOKKOS, USER-OMP, or USER-INTEL packages. That, however, requires changes to the LAMMPS input or commands, and may require setting package command flags/arguments to enable the package and/or the desired number of threads.
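As an illustration of that route, a sketch of a thread-only setup with the USER-OMP package, launched by a plain "python3 client.py"; the input file name and thread count are example values:

from lammps import lammps

# -sf omp applies the /omp suffix to supported styles;
# -pk omp 4 enables the USER-OMP package with 4 threads (example value)
lmp = lammps(cmdargs=['-sf', 'omp', '-pk', 'omp', '4'])
lmp.file('in.melt')   # hypothetical input file
lmp.command('run 1000')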
You are asking for something “complex”, so there are no “simple” ways to do it.