fix ave/time and ave/spatial related issue

Hi all,

In my input script I've used fix ave/time and fix ave/spatial several times, so they should generate the files and write the data to them. I have tested this input file on my Linux box with a 3d processor grid of 4 2 2 and it works fine, writing the computed data every 200 steps (as I defined). The problem arises when I submit the job to a Linux server running with a 3d processor grid of 20 2 2 (processors 20 2 2). The terminal confirms that it ran 2000 steps, but the files generated by ave/time and ave/spatial contain nothing except the header lines. LAMMPS is writing the log file regularly, every 500 steps (thermo 500), as I use thermo_modify flush yes. Will the files be written eventually, or is there some problem related to the processor grid, i.e. are the processors getting confused?

Please share your experience to help me out.

P.S.: I'm simulating 40000000 atoms with system dimension 3622362362

Forgot to mention: I'm using the LAMMPS 26 May 2013 distribution.

Could we have a look at your input script?

It's the sample script shared by Oscar Gurrero, which I've modified only for my system; I have not added any extra fixes to it. The only addition is: processors 20 2 2

in.lammps (7.29 KB)

Fix ave/time and ave/spatial each perform an fflush() every time they write to a file, i.e. every Nfreq steps. So it's not a buffering issue.
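For reference, the kind of averaging commands being discussed would look roughly like this in a 2013-era LAMMPS input script (a sketch only; the compute ID, bin width, values, and file names are placeholders, not the poster's actual settings):

```
# Nevery Nrepeat Nfreq = 10 20 200: sample every 10 steps,
# average 20 samples, write to the file every 200 steps
compute  myT all temp
fix      1 all ave/time    10 20 200 c_myT file ave_time.out
fix      2 all ave/spatial 10 20 200 z lower 0.05 vx file ave_spatial.out units reduced
```

With Nfreq = 200, a new averaged line should be flushed to each output file every 200 steps of the run.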


Dear Steve,

Thanks for your reply. But we notice that the MD run has completed 5000 steps and is still running and writing the log file, yet it is not writing to the files generated by ave/spatial and ave/time. I have set Nfreq = 200. What could the problem be, when the same script, tested on my Linux box, successfully writes every 200 steps?

Could the cause be that my restart file was generated using a 22 2 2 processor grid, but I'm now doing another run from that restart file (read_restart) using a 20 2 2 processor grid?
At the beginning of the run I got these warnings:
WARNING: Restart file used different # of processors (…/read_restart.cpp:517)
WARNING: Restart file used different 3d processor grid (…/read_restart.cpp:530)

Can that have an effect?
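The restart sequence described above would look roughly like this (a sketch; the restart file name is hypothetical, and note that the processors command must appear before read_restart since it has to be set before the simulation box is defined):

```
# new grid for this run; differs from the 22 2 2 grid
# that wrote the restart file, hence the two warnings
processors   20 2 2
read_restart restart.equil
```

LAMMPS handles a changed processor grid on restart automatically; the warnings are informational.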

I can't see how that would affect fix ave/time and ave/spatial output.
I don't know why it isn't working on your system.


Dear Steve and Axel,

I wonder why this problem occurs here; I have checked it against all the possible causes I could think of.
On the Linux server with the 20 2 2 MPI grid, the files are written for a small system (30000 atoms) with the same input script. But with the same input file, the same server, and the same LAMMPS (26 May 2013), only the system size being different (400000000 atoms) on the 20 2 2 MPI grid, the files generated by ave/time and ave/spatial contain nothing but the two header lines, even after a considerable run (5000 steps, Nfreq = 200 for both ave/time and ave/spatial).

Since the script is correct for a comparatively small system, I think it should also run and write for a larger system volume. Please suggest what I should troubleshoot for this uncommon problem.

Thanking you

Swain Paul

Do the files appear in full when the run ends, whether
it’s a short or long run?


It's a long run of 100000 steps. I had not checked whether the files are complete after a short run, so to verify that I terminated the job from the terminal and saw that nothing had been written. I'm checking the point you raised and will get back to you.

The file size is also not increasing during the run, even after 5000 steps with Nfreq = 200.

Thank you.

Dear steve,
I have run the test you suggested: I ran the job for 600 steps with Nfreq = 200, but the files are still not written at all. The log file, the trajectory (dump) file, and the final restart file are all complete, and I have checked that they are all correct. It's really confusing and I've tried a lot; please help me out with your suggestions.


Swain Paul

The problem still exists; no file is written by ave/time or ave/spatial. I have also tested starting from a data file (read_data) as well as from a restart file (read_restart), but no change in the scenario. Could you please suggest what I need to cross-check?

Could you modify the bench/in.lj example with the absolute minimal changes that would make the issue reproducible elsewhere? I.e. just add the minimum number of computes/fixes that are needed, and figure out the number of time steps and number of atoms needed to go from working to non-working.

If you'd then post that modified input, that would make it immensely easier to debug this.
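A minimally modified bench/in.lj along the lines requested might look like this (a sketch; the in.lj body is reproduced approximately from the 2013 bench directory, and the added compute/fix lines and output file names are placeholders, not a confirmed reproducer):

```
# 3d Lennard-Jones melt (approximate bench/in.lj)
variable     x index 4
variable     y index 4
variable     z index 4
variable     xx equal 20*$x
variable     yy equal 20*$y
variable     zz equal 20*$z

units        lj
atom_style   atomic
lattice      fcc 0.8442
region       box block 0 ${xx} 0 ${yy} 0 ${zz}
create_box   1 box
create_atoms 1 box
mass         1 1.0
velocity     all create 1.44 87287 loop geom
pair_style   lj/cut 2.5
pair_coeff   1 1 1.0 1.0 2.5
neighbor     0.3 bin
neigh_modify delay 0 every 20 check no
fix          1 all nve

# added lines: same Nevery/Nrepeat/Nfreq pattern as the failing script
compute      myT all temp
fix          2 all ave/time    10 20 200 c_myT file ave_time.test
fix          3 all ave/spatial 10 20 200 z lower 0.05 density/number file ave_spatial.test units reduced

run          600
```

Increasing the x/y/z index variables scales up the atom count, so the same script can be used to find the size at which the output files stop being written.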