I would like to know if it is possible to group atoms in a way that persists across simulation restarts. I was previously using “compute chunk/atom” to spatially bin my simulation after thermalization, but now I need to bin before thermalization, while the atoms are still perfectly ordered. The problem is that I checkpoint the simulation after bringing it up to temperature and lose the chunk info on restart. I tried my old workflow, but the chunk IDs don’t persist across restarts with write_data/read_data, and while they are recorded in my dump files, I don’t see a way to read them back in.
I think the most robust way to recover my previous workflow is to create an atom type for each of the chunk IDs I was working with previously and then use “compute chunk/atom” with the type keyword instead. Is this a sensible approach? Is there a better solution I am not aware of?
I think there is a better solution: write the chunk info to a dump file, modify that file so it can be used by an atomfile-style variable, and then use that file/variable to define the chunks after the restart.
Here is an example of how to set this up:
compute cchunk all chunk/atom bin/1d x lower 2.0 nchunk once
run 0 post no                                        # trigger the compute so chunk IDs are assigned
write_dump all custom dump.chunks id c_cchunk
print "$(atoms)" file atomfile.chunks                # first line of the atomfile: the atom count
shell tail -$(atoms) dump.chunks >> atomfile.chunks  # append the atom-ID / chunk-ID lines
(Note: this requires a unix-like machine, since “tail” is a shell command. The purpose is to create a file whose first line is the number of atoms, followed by the last natoms lines of the dump file, which contain the atom-ID / chunk-ID info.)
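For reference, the resulting atomfile.chunks should end up looking something like the sketch below (shown for a hypothetical 4-atom system with two chunks; the actual IDs and values will of course depend on your system). The first line is the atom count, and each subsequent line is an atom ID followed by its chunk ID, matching the format expected by an atomfile-style variable:

```
4
1 1
2 1
3 2
4 2
```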
And here is an example of how to use that data:
variable chunks atomfile atomfile.chunks
compute cold all chunk/atom v_chunks
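The re-created chunks can then be consumed like any other chunk definition, for example by fix ave/chunk. The fix ID, averaging parameters, and output file name below are illustrative placeholders, not part of the original workflow:

```
# per-chunk temperature profile, averaged over 5 samples every 1000 steps
fix tprofile all ave/chunk 100 5 1000 cold temp file temp.profile
```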
This does look much cleaner to implement than adding several extra atom types, and it is doing exactly what I needed in my small test system. Thanks for sharing your expertise.
For those who find this post later: I also had to implement the “fix store/state” solution from this post to successfully write the dump.
This won’t be necessary with the current LAMMPS development code and the next feature release.