Running LAMMPS as a client code to perform Grand Canonical Monte Carlo moves, with VASP as a server code

Dear Axel and Steve,

I am still working on modifying fix_gcmc to perform GCMC with LAMMPS while calculating the energy with VASP. I realized I have to modify the attempt_atomic_deletion_full() function to communicate the correct atomic coordinates to VASP. I’ve done that, essentially, as follows (a simplified sketch follows the list):

· Choose a random atom

· Delete the atom and calculate the energy of the new configuration

· If the new configuration is not accepted, create an atom of the same type at the same coordinates to restore the original configuration
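
In simplified form, the control flow is the toy sketch below (standalone C++, not the attached fix code; energy_from_vasp() and acceptance_probability() are just placeholders for the client/server energy call and the GCMC acceptance criterion, and the real routine works on the LAMMPS per-atom arrays instead of a std::vector):

#include <cstdlib>
#include <vector>

struct AtomData { int type; double x, y, z; };

// placeholder for the energy returned by VASP through the client/server interface
static double energy_from_vasp(const std::vector<AtomData> &atoms) {
  return -1.0 * static_cast<double>(atoms.size());             // dummy value
}

// placeholder for the GCMC deletion acceptance probability
static double acceptance_probability(double e_before, double e_after, std::size_t n_before) {
  (void)e_before; (void)e_after; (void)n_before;
  return 0.5;                                                   // dummy value
}

static double uniform_random() {                                // uniform in [0,1)
  return static_cast<double>(std::rand()) / (static_cast<double>(RAND_MAX) + 1.0);
}

void attempt_atomic_deletion_sketch(std::vector<AtomData> &atoms, double energy_before) {
  if (atoms.empty()) return;

  // 1. choose a random atom and remember its type and coordinates
  int i = static_cast<int>(uniform_random() * atoms.size());
  AtomData saved = atoms[i];

  // 2. delete it and get the energy of the trial configuration from the server
  atoms.erase(atoms.begin() + i);
  double energy_after = energy_from_vasp(atoms);

  // 3. if the move is rejected, re-create an atom of the same type at the same
  //    coordinates to restore the original configuration
  double prob = acceptance_probability(energy_before, energy_after, atoms.size() + 1);
  if (uniform_random() >= prob)
    atoms.insert(atoms.begin() + i, saved);
}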

However, I occasionally run into a type error. Sometimes when a deletion is attempted, the atom type that is communicated to VASP is a large, seemingly random negative number instead of the correct type, so VASP cannot create the POSCAR and the simulation fails. My test system is a hydrogen atom in a 10x10x10 Ang^3 box, and only hydrogen atoms are exchanged.

I think this error happens when deletion of the first atom is attempted, because when GCMC tries to delete the 2nd atom, the program successfully rejects the attempt and continues by restoring the original configuration. But I am not sure why… am I supposed to be communicating something to all the processors (with MPI_Allreduce, maybe), along the lines of the snippet below?
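
Something like this standalone pattern is what I have in mind (just to illustrate the idea, not my fix code; the flag names are made up):

#include <mpi.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // e.g. 1 on the rank that owns the chosen atom, 0 on all other ranks
  int success_local = (rank == 0) ? 1 : 0;

  // combine the per-rank flags so that every rank sees the same decision
  int success_all = 0;
  MPI_Allreduce(&success_local, &success_all, 1, MPI_INT, MPI_MAX, MPI_COMM_WORLD);

  // from here on, success_all is identical on every rank
  MPI_Finalize();
  return 0;
}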

I have attached the modified attempt_atomic_deletion_full() code here.

Thank you in advance for your help!

Regards,

Vrindaa

attempt_atomic_deletion_full.cpp (1.71 KB)

As I already pointed out, with the kind of changes you are making, you are on your own to figure out how to correct any mistakes. We may be able to give some advice on how specific existing core functionality in LAMMPS works, but not on exactly how to realize the details of your modifications.

The only advice that I can offer at this point is that the symptoms you describe hint at memory corruption, i.e., accessing data that isn’t allocated or reading/writing beyond what was allocated. One tool that helps identify such issues is the memcheck tool from valgrind. Please note that when using it, you are best off using MPICH as your MPI library, as OpenMPI causes many false positives due to some internal code design choices. It won’t save you from having to understand the underlying issue, but it can give you useful hints on where to look.
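
For example, something like

  mpirun -np 2 valgrind --track-origins=yes ./lmp_mpi -in in.test

(with the executable name and input file adjusted to your setup) will report invalid reads and writes and uses of uninitialized memory as they happen, which usually helps narrow down where the corruption originates.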

Axel.