I’ve been working on a new compute for the past few weeks that may be of interest to the community. Its purpose is to read per-chunk values such as XCM/VCM/FCM from a global array and convert them into per-atom vectors.
As I mentioned in my previous emails, such a compute turned out to be necessary in order to hold the COMs of multiple chunks stationary while still allowing dynamics of the constituent atoms.
I’ve written a compute (expandchunk/atom) to do this, with my limited knowledge of the LAMMPS source code; it is attached to this email. The syntax is as follows:
compute cc1 all chunk/atom molecule # generates chunk IDs (part of the standard distribution)
compute myChunkFCOM all fcm/chunk cc1 # generates a global array of size nchunk x 3 (fcm/chunk is new, attached here)
compute chF all expandchunk/atom cc1 c_myChunkFCOM c_myChunkFCOM c_myChunkFCOM # reads the columns of the global chunk array and converts them into per-atom vectors (expandchunk/atom is new, attached here)
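To make the intended mapping concrete, here is a minimal NumPy sketch (not the attached C++ code, and the function name is my own) of what the expansion does: each atom looks up the row of the global nchunk x 3 array belonging to its chunk, yielding a per-atom 3-vector.

```python
import numpy as np

def expand_chunk_to_atoms(chunk_id, per_chunk):
    """Broadcast per-chunk rows to a per-atom array via chunk IDs.

    chunk_id  : per-atom chunk IDs, 1-based as assigned by chunk/atom
    per_chunk : nchunk x 3 global array (e.g. from fcm/chunk)
    """
    return per_chunk[chunk_id - 1]  # shift 1-based IDs to 0-based rows

# Toy example: 5 atoms distributed over 3 chunks
chunk_id = np.array([1, 1, 2, 3, 2])
fcm = np.array([[0.1, 0.0, 0.0],
                [0.0, 0.2, 0.0],
                [0.0, 0.0, 0.3]])  # nchunk x 3 global array

per_atom = expand_chunk_to_atoms(chunk_id, fcm)
print(per_atom.shape)  # (5, 3): one 3-vector per atom
```

In the actual compute this lookup runs over the local atoms on each MPI rank, which is where the parallel memory issue described below arises.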
This compute works well for small systems, but in parallel I run into memory problems, particularly when I transition from one PHASE to the next, as explained in the following section.
compute_fcm_chunk.cpp (6.9 KB)
compute_fcm_chunk.h (1.76 KB)
compute_expand_chunk_atom.cpp (10.5 KB)
compute_expand_chunk_atom.h (1.68 KB)
fix_addvelocity.cpp (10.6 KB)
fix_addvelocity.h (2.18 KB)