chunk/atom bin/3d

Dear LAMMPS users, and thank you for your help.

I used the following script to extract information, but I ran into this error:

#
mpirun noticed that process rank 1 with PID 3439 on node VAIO exited on
signal 9 (Killed).
--------------------------------------------------------------------------
6 total processes killed (some possibly by mpirun during cleanup)
#

this is not an error message, but just mpirun reporting that something
went wrong and that it stopped all other processes.

I think it is related to "chunk/atom bin/3d", because the error appears once
the run reaches the "fix ave/chunk" command. (It appears only for the
"bin/3d" style; "bin/2d" prints information correctly.)

that is unlikely. both the compute and the fix will parse fine, if they are
syntactically correct, and will not do much until you actually start the run.

My LAMMPS version is LAMMPS (14 May 2016-ICMS).

My script:
#%
fix sheet Cgraphene setforce 0.0 0.0 0.0

fix fixnvtf fluidbox nvt temp 280 298.13 100.0

fix fixshake fluidbox shake 0.0001 20 10 b 1 2 3 a 2 1 t 1 2 3 4 5 m 1 2 3 4 5

timestep 2.0

compute dens fluidbox chunk/atom bin/3d x 0.0 0.5 y 0.0 0.5 z 0.0 0.5 compress yes units box

fix dens fluidbox ave/chunk 100 5 6500 dens density/number density/mass file den.ave-chunk title1 "My output values"

dump 3 all xyz 6500 dump3_end.*.xyz
run 1000
#%

this input is incomplete and thus pretty much useless for debugging. i can
add the compute chunk/atom bin/3d and fix ave/chunk commands without much of
a problem to the rhodo benchmark input and run the resulting input deck.

without the complete input deck at hand to reproduce and diagnose the
issue, it will be difficult to make any recommendations. the symptom
of bin/2d working and bin/3d failing could be an indication of running
out of memory, but without actually seeing the input and the
corresponding output, as well as knowing exactly how many processors
were used for mpirun, this is pure speculation.

axel.

thank you so much, Axel.

I run my simulation with 8 processors on an F223/VAIO laptop, with this command: (mpirun -np 8 lammps-daily < input.in)

I can't attach all my data here; SourceForge doesn't allow me to send my files.

This is my input file: (this is a continuation of an equilibration run)

how much RAM does that laptop have? 8GB? more? less?
do you also run into problems when using 4 processors?

please convert the restart file to a data file, compress it with gzip
or bzip2, upload it to a cloud service like google drive or dropbox,
and provide a link to it.

also, in order to check whether you are simply running out of memory,
it would be sufficient to see the log file.
it would be even better if you'd insert the command "info config"
right before the run statement.
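
a minimal sketch of that placement, using the same delimiters as the script above (the run length is just the value from the script):

#%
info config      # report configuration details right before the run starts
run 1000
#%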

axel.

> how much RAM does that laptop have? 8GB? more? less?

My laptop has 4GB RAM.

> do you also run into problems when using 4 processors?

Yes, I do; the same result shows up with 4 processors as well.

> please convert the restart file to a data file, compress it with gzip
> or bzip2, upload it to a cloud service like google drive or dropbox,
> and provide a link to it.

files.zip
<https://drive.google.com/file/d/0BwhSnAUq3QtJeFRCZXZ2dEE3T3JfZ1RNNy1mVTJ1OUFzY2Vv/view?usp=drive_web>

thanks for the files. you are *definitely* running out of memory.
given the box size and the choice of units, your grid spacing is *far* too
fine: with a 0.5 Å grid spacing you have a 160x160x400 point 3d mesh with
over 10 million bins, and you need a few hundred bytes of storage for each
bin. this way you've exhausted your available memory with the 3d binning
alone. please also note that the resulting output file will be equally
massive.

just use a wider grid spacing and your memory use will drop massively, e.g.
doubling the grid spacing to 1 Å will reduce the memory needed by a factor
of 8.
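
the arithmetic can be sketched in a few lines of python (the per-bin byte count is an assumed order of magnitude, and the 80x80x200 Å box size is inferred from the 160x160x400 mesh at 0.5 Å spacing):

```python
# Rough estimate of the memory needed by compute chunk/atom bin/3d.
# Assumptions (for illustration only): the box is 80 x 80 x 200 Angstroms,
# inferred from the 160x160x400 mesh at 0.5 A spacing mentioned above, and
# each bin costs on the order of ~200 bytes (a guess, not a LAMMPS figure).

def bin_count(lx, ly, lz, spacing):
    """Number of 3d bins for an lx * ly * lz box at the given grid spacing."""
    return int(lx / spacing) * int(ly / spacing) * int(lz / spacing)

BYTES_PER_BIN = 200  # assumed order of magnitude

fine = bin_count(80.0, 80.0, 200.0, 0.5)    # 160 * 160 * 400 bins
coarse = bin_count(80.0, 80.0, 200.0, 1.0)  # 80 * 80 * 200 bins

print(fine)                                    # 10240000 -> over 10 million bins
print(round(fine * BYTES_PER_BIN / 2**30, 2)) # ~1.91 GiB, for the binning alone
print(fine // coarse)                          # 8 -> doubling spacing cuts memory 8x
```

on a 4 GB laptop, nearly 2 GiB for the binning alone (per MPI rank, before any other allocations) is enough to trigger the out-of-memory killer.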

axel.

BTW: to further reduce memory usage, you should consider the "bound" option of compute chunk/atom. by reducing the grid to the volume -40:40 in x and y and 0:80 in z, you are covering pretty much all of the moving atoms.
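
a sketch of what that compute line could look like, adapted from the script above (the 1.0 Å grid spacing and the bin origins at the lower bounds are illustrative choices, not prescribed values):

#%
compute dens fluidbox chunk/atom bin/3d x -40.0 1.0 y -40.0 1.0 z 0.0 1.0 &
        bound x -40 40 bound y -40 40 bound z 0 80 units box
#%

atoms outside the bounded region are then simply left out of the binning.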

axel.

Thank you so much for your complete answer.

> BTW: to further reduce memory usage, you should consider the "bound" option
> of compute chunk/atom. by reducing the grid to the volume -40:40 in x and y
> and 0:80 in z, you are covering pretty much all of the moving atoms.

I assumed that by using the "compress" option the binning domain would be reduced, and that a minimum of memory would be used as well.

no. on the contrary, it increases the memory use, as it requires an
additional hash table to keep track of the unique chunk ids.

for your case, you should not use "compress yes" anyway, as you need to
have the same chunk id assigned to the same voxel, regardless of whether
there is an atom in it or not.

axel.