[lammps-users] dump file per atom

I disagree. I have the same issue.

Defining a group is not a general solution: programmatically defining a group for each atom is not straightforward. Besides, LAMMPS has a limit of 32 group IDs (is this limit artificial?), which is essentially nothing for even the smallest of simulations.

I don’t want LAMMPS to compute the correlations on the fly. I want the full history for inspection and for further processing. Also, the correlation improves the longer the time series is. If LAMMPS stored the history for a sizable number of atoms, it would run out of memory.

At the same time, reading a whole dump file into memory can exceed a machine’s capacity, and extracting a single per-atom vector from the dump file takes too long.

Perhaps the dump command’s file argument could accept a wildcard for outputting per-atom data; it makes sense for each per-atom vector to have its own file. Still, that would generate a large amount of redundant dump-file formatting.

In the long term, I think LAMMPS should just stick with standard scientific file formats like HDF5, especially for outputting large data sets.

For now, I’m using the dump command’s * wildcard to write a file for each timestep. Then I’ll assemble a database frame by frame. Maybe I’ll share it when I’m done.
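In case it helps anyone attempting the same, here is a minimal sketch of the frame-by-frame assembly. It assumes the standard LAMMPS text dump format with an "ITEM: ATOMS id x y z" header; the file names and the tiny generated example frames are made up for illustration, not taken from any real run.

```python
import glob
import os
import tempfile

# Hypothetical example frames in the standard LAMMPS text dump format
# (two atoms over two timesteps), written only so this sketch is runnable.
FRAME_TEMPLATE = """ITEM: TIMESTEP
{step}
ITEM: NUMBER OF ATOMS
2
ITEM: BOX BOUNDS pp pp pp
0.0 10.0
0.0 10.0
0.0 10.0
ITEM: ATOMS id x y z
1 {x1} 0.0 0.0
2 {x2} 0.0 0.0
"""

def parse_frame(path):
    """Parse one per-timestep dump file -> (timestep, {atom_id: (x, y, z)})."""
    with open(path) as f:
        lines = f.read().splitlines()
    step = int(lines[1])
    natoms = int(lines[3])
    # The atom lines follow the "ITEM: ATOMS id x y z" header; this sketch
    # assumes the fixed 9-line preamble above, whereas a robust parser would
    # locate the header instead of hard-coding its position.
    coords = {}
    for line in lines[9:9 + natoms]:
        aid, x, y, z = line.split()
        coords[int(aid)] = (float(x), float(y), float(z))
    return step, coords

def assemble(paths):
    """Merge per-timestep frames into one time series per atom id."""
    series = {}
    frames = sorted((parse_frame(p) for p in paths), key=lambda t: t[0])
    for step, coords in frames:
        for aid, xyz in coords.items():
            series.setdefault(aid, []).append((step, *xyz))
    return series

with tempfile.TemporaryDirectory() as tmp:
    # Write two fake per-timestep dump files, as produced by "dump ... dump.*".
    for step, x1, x2 in [(0, 0.0, 1.0), (100, 0.5, 1.5)]:
        with open(os.path.join(tmp, f"dump.{step}"), "w") as f:
            f.write(FRAME_TEMPLATE.format(step=step, x1=x1, x2=x2))
    series = assemble(glob.glob(os.path.join(tmp, "dump.*")))

print(series[1])  # full history for atom 1: [(0, 0.0, 0.0, 0.0), (100, 0.5, 0.0, 0.0)]
```

Only one frame is ever held in memory at a time, so this scales to long runs; swapping the in-memory dict for an on-disk store would be the next step for very large atom counts.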

> […]
>
> In the long term, I think lammps should just stick with standard scientific file formats like hdf5 especially for outputting large data sets.

lammps is open source, so you are free to write an hdf5-based dump class
and contribute it to the package. if you don’t want to do it yourself, you can
always hire somebody to do it for you.

people will implement what they need and what is useful to them.

axel.

I don’t think LAMMPS can/should write a dump file per atom or
store coords for many atoms for many past timesteps. The former
would require too many open files (Linux has a modest limit), and
the latter would explode memory as you indicate.

It is pretty simple to write a post-processing script that scans
a dump file and splits it into numerous smaller files (e.g. one per atom).
You don’t need to read the entire dump file into memory to do
that. So I suggest you handle this as a post-processing task.
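A sketch of such a streaming split, under the same assumptions as above (standard text dump format, "id" as the first column of "ITEM: ATOMS" lines; all file names are hypothetical). It reads one line at a time and appends to per-atom files, keeping only a bounded LRU cache of open file handles, so it neither holds frames in memory nor runs into the open-files limit Steve mentions:

```python
import os
import tempfile
from collections import OrderedDict

class AtomFileWriter:
    """Append lines to one file per atom, capping simultaneously open files."""
    def __init__(self, outdir, max_open=64):
        self.outdir = outdir
        self.max_open = max_open
        self.handles = OrderedDict()  # atom_id -> open file, in LRU order

    def write(self, atom_id, line):
        fh = self.handles.pop(atom_id, None)
        if fh is None:
            if len(self.handles) >= self.max_open:
                _, oldest = self.handles.popitem(last=False)  # evict LRU handle
                oldest.close()
            fh = open(os.path.join(self.outdir, f"atom.{atom_id}"), "a")
        self.handles[atom_id] = fh  # re-insert as most recently used
        fh.write(line + "\n")

    def close(self):
        for fh in self.handles.values():
            fh.close()
        self.handles.clear()

def split_dump(dump_path, outdir, max_open=64):
    """Stream a text dump file; write '<step> <atom line>' records per atom."""
    writer = AtomFileWriter(outdir, max_open)
    step, in_atoms = None, False
    with open(dump_path) as f:
        for line in f:
            line = line.rstrip("\n")
            if line.startswith("ITEM: TIMESTEP"):
                in_atoms, step = False, "next"
            elif step == "next":
                step = line.strip()  # the line after ITEM: TIMESTEP
            elif line.startswith("ITEM: ATOMS"):
                in_atoms = True
            elif line.startswith("ITEM:"):
                in_atoms = False  # box bounds, atom counts, etc.
            elif in_atoms:
                atom_id = line.split()[0]  # assumes 'id' is the first column
                writer.write(atom_id, f"{step} {line}")
    writer.close()

# Runnable demo on a hypothetical two-frame, two-atom dump file.
DUMP = """ITEM: TIMESTEP
0
ITEM: NUMBER OF ATOMS
2
ITEM: BOX BOUNDS pp pp pp
0.0 10.0
0.0 10.0
0.0 10.0
ITEM: ATOMS id x y z
1 0.0 0.0 0.0
2 1.0 0.0 0.0
ITEM: TIMESTEP
100
ITEM: NUMBER OF ATOMS
2
ITEM: BOX BOUNDS pp pp pp
0.0 10.0
0.0 10.0
0.0 10.0
ITEM: ATOMS id x y z
1 0.5 0.0 0.0
2 1.5 0.0 0.0
"""

tmp = tempfile.mkdtemp()
dump_path = os.path.join(tmp, "dump.all")
with open(dump_path, "w") as f:
    f.write(DUMP)
split_dump(dump_path, tmp, max_open=1)  # max_open=1 exercises the LRU eviction
with open(os.path.join(tmp, "atom.1")) as f:
    atom1 = f.read().splitlines()
```

Because the per-atom files are opened in append mode, the script can also be run over a series of dump files in timestep order and the histories simply accumulate.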

Steve