Problems with fix print

Hello all,

I was testing the fix print option and noticed that there seems to be a problem with the formatting of the written files.

My input script looks like this:

# BF Slag MD Simulation/Pure Systems/SiO2/SiO2_1500_atoms - LAMMPS Input Script

# This script performs NPT simulation for SiO2 system at 5000K for 10 ps
#***********************************************************************

units real 					# Set units format 
boundary p p p  				# Set periodic boundary conditions
atom_style full 				# Molecular Style including charge
pair_style buck/coul/long 8 10  		# Long-range Coulombic and short-range Buckingham potential
kspace_style pppm 1e-4  			# For long range forces

#***********************************************************************

# Simulation variables
#***********************************************************************

variable time_step equal 1.0 								# 1 timestep is 1.0 fs
variable run_time equal  10								# Specify run time for simulation in ps		
variable run_steps equal 1000.0*${run_time}/${time_step}				# Total no. of steps in simulation run

variable th_freq equal   10000								# Frequency for calculating thermodynamic property		
variable restart_freq equal ${run_steps}/1                      			# Frequency for creating restart files


variable T equal 5000 									# Temperature of system in K

# Reading and Writing
#***********************************************************************

read_data SiO2_1500_atoms.data		 # Read atom data from data file

log SiO2_1500_atoms_5000K.log 		 # Write log file for output

# Reset timestep and Write restart commands
#***********************************************************************

reset_timestep 0 									# This resets the timestep to 0.

restart ${restart_freq} SiO2_1500_atoms_5000K 						# Write a restart file every ${restart_freq} steps


# Specify pair wise coefficient for short range forces
#***********************************************************************

pair_coeff 1 1 1843388098006.800 0.046 580.902 	     # Si-Si
pair_coeff 1 2 1157587.838 	 0.161 1067.657      # Si-O
pair_coeff 2 2 149032.853        0.276 1962.280	     # O-O	

# Initialize velocities
#***********************************************************************

velocity all create ${T} 12345 dist gaussian mom yes

# Specify timestep
#***********************************************************************

timestep ${time_step}

# Thermostatting and barostatting the system
#***********************************************************************

fix 	NPT all npt temp ${T} ${T} 100 iso 1.0 1.0 1000 

# Property Calculation & Dumping
#***********************************************************************

thermo ${th_freq}


variable T equal temp
variable P equal press
variable rho equal density

compute 1 all ke
compute 2 all pe


fix 1 all print 1000 """{"timestep": $(step), "pe": $(pe), "ke": $(ke), "temp": $T, "Pressure": $P, "Density": ${rho}}""" title "" file output.json screen no  


# Output file writing
#***********************************************************************

thermo_style custom step cpu cpuremain spcpu ke pe etotal temp press density
thermo_modify flush yes

# Run Details
#***********************************************************************
run ${run_steps}

When the output.json file is written, the first 3 or 4 lines are printed as expected, with one output per line, but after that we run into problems. See the attached output file to understand what I mean.

I am using the LAMMPS 23-Jun-2022 version.

Can anyone tell me where I am going wrong and how to resolve the error?

Thanks in advance.
output.json (1.8 KB)
in.SiO2_1500_atoms_5000K (3.1 KB)
log.lammps (2.1 KB)
SiO2_1500_atoms_5000K.log (4.5 KB)

What platform are you running on? What is your exact command line?

Your input deck is too complex and convoluted.
Can you reproduce the same issue by adding the fix print line to one of the LAMMPS example inputs (e.g. melt or peptide), with more frequent output (like every 1 or 10 steps) and an increased number of timesteps, and then post it here?

That would make it much easier to debug.
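
For example, a quick test could look something like this (a sketch based on the bundled melt example; the input shipped with your LAMMPS version may differ slightly):

# 3d Lennard-Jones melt with a fix print line added for testing
units lj
atom_style atomic

lattice fcc 0.8442
region box block 0 10 0 10 0 10
create_box 1 box
create_atoms 1 box
mass 1 1.0

velocity all create 3.0 87287 loop geom

pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5

neighbor 0.3 bin
neigh_modify every 20 delay 0 check no

fix 1 all nve

# frequent output to trigger the formatting problem quickly
fix 2 all print 10 """{"timestep": $(step), "pe": $(pe), "ke": $(ke)}""" file output.json screen no

thermo 50
run 2500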

I am running it on Red Hat Linux.

#!/bin/sh 
#PBS -N SiO2
#PBS -q workq
#PBS -l nodes=6:ppn=40
#PBS -l walltime=336:00:00
#PBS -mea
#PBS -r n
#PBS -V
######### 
module load mpiexec 
module load mpich 
module load openmpi 
module load gcc 
##########
cd $PBS_O_WORKDIR  
mpirun -np 240 /rnd_hpc_data/users/sg806515/lammps/lammps-23Jun2022/build/lmp_mpi -in in.SiO2_1500_atoms_5000K

This is what the script I use to submit my job looks like.

Yes, I will add the fix print line to the example scripts and post them here; please wait.

This doesn’t make sense. You need just one MPI library.

Please look at your log file. It says,

using 40 OpenMP thread(s) per MPI task

and

  1 by 1 by 1 MPI processor grid

While your command line has:

mpirun -np 240

which should result in something like

  8 by 5 by 6 MPI processor grid

This means that your simulations are a massive waste of resources and must be horribly slow: you are running a whopping 240 independent copies of the simulation, each with 40 threads, all running the exact same input and writing the exact same output. That you get a corrupted file from fix print is only natural, and what you see is rather modest compared to what could happen when 240 programs try to write to the same file at the same time.

What you need to do is:

  • load only one MPI library (in particular, the one that was used to compile your LAMMPS executable) and no mpiexec module, since you don’t use it (see the sketch below).
  • set the environment variable OMP_NUM_THREADS to 1.
  • check your log file output more carefully to confirm that it matches the parallel setup you intend to use.
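
A corrected submission script could look something like this (a sketch, assuming your LAMMPS was built against OpenMPI and that one node has 40 cores; module names vary from cluster to cluster):

#!/bin/sh
#PBS -N SiO2
#PBS -q workq
#PBS -l nodes=1:ppn=40
#PBS -l walltime=336:00:00
#PBS -mea
#PBS -r n
#PBS -V
#########
# load only the MPI library that was used to build lmp_mpi
module load openmpi
module load gcc
##########
# one thread per MPI task
export OMP_NUM_THREADS=1
cd $PBS_O_WORKDIR
# one MPI task per requested core: 1 node x 40 cores = 40 tasks
mpirun -np 40 /rnd_hpc_data/users/sg806515/lammps/lammps-23Jun2022/build/lmp_mpi -in in.SiO2_1500_atoms_5000K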

This is a case of PEBCAC.

Thank you for your input; this is the first time I am writing a PBS script of my own.

I was about to ask a question about MPI tasks and threads.

I will implement the changes.

@akohlmey

Please pardon me for my ignorance and help me.

Say I want to run my job on a 3 by 3 by 3 processor grid. When I use the following PBS script:

#!/bin/sh 
#PBS -N SiO2
#PBS -q workq
#PBS -l select=1:ncpus=27
#PBS -l walltime=336:00:00
#PBS -mea
#PBS -r n
#PBS -V
######### 

export OMP_NUM_THREADS=1

cd $PBS_O_WORKDIR
  
mpirun -np 27 /rnd_hpc_data/users/sg806515/lammps/lammps-23Jun2022/build/lmp_mpi -in in.SiO2_1500_atoms_5000K

LAMMPS says that specified processors != physical processors

What changes do I need to make to the PBS script?

That is a question for your HPC support folks and the documentation they have made available for how to use their systems. This is not a message from LAMMPS but from the MPI library.
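
For reference, if you specifically want a 3 by 3 by 3 grid inside LAMMPS, that is requested with the processors command, and the product must equal the number of MPI ranks that mpirun starts (a minimal sketch, assuming the job really has 27 cores available):

# in the LAMMPS input script: force a fixed 3x3x3 processor grid
# (3*3*3 = 27, so mpirun must start exactly 27 MPI tasks)
processors 3 3 3

How to make the batch system actually provide those 27 cores is, again, a question for your cluster documentation.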

Okay, thank you.