Dear LAMMPS developers,
I am running the latest version of LAMMPS (8 Feb 2023) on an Ubuntu workstation with 40 physical cores, and I would really appreciate it if you could help me resolve the issues listed below.
I am using the following styles to model a perovskite material:
special_bonds amber
pair_style hybrid buck/coul/long 15.0 15.0 lj/cut/coul/long 15.0 15.0
pair_modify shift yes mix arithmetic
bond_style harmonic
angle_style harmonic
dihedral_style opls
kspace_style pppm 0.0001
- I want to speed up the run by using verlet/split, but with the following commands I get an error:
mpirun -np 40 --bind-to socket --map-by socket "./lmp" -in in.init
.
.
.
package omp 1 neigh yes
suffix omp
partition yes 1 processors 5 3 2
partition yes 2 processors 5 1 2
processors * * * part 1 2 multiple
ERROR: Specified processors != physical processors (src/comm.cpp:420)
I couldn’t figure out what I’m doing wrong here.
- I am using fully periodic boundary conditions to simulate a very thin slab of material, with the free surfaces normal to the y direction, and I insert two vacuum regions in that direction to separate the top and bottom surfaces from their periodic images. According to the documentation, this is a bad idea, and I might need to use “kspace_modify slab 3.0”. So my questions are:
** Is the direction for this command strictly z? Should I change the coordinates of my sample from y to z for that?
** The documentation says “p p f” should be used with this command. A fixed boundary condition along the nonperiodic direction would be very problematic if I want to compress or stretch the slab. Is there any way to have a shrink-wrapped boundary condition here?
Thanks!
Do you have any indication that your simulation would be helped by this? Since you are running with only 40 MPI ranks at most, I seriously doubt it. You could get the same effect with less waste by running 20 MPI ranks with 2 OpenMP threads each, or 10 MPI ranks with 4 threads each, since you are running on a single node.
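As a minimal sketch of such a hybrid launch (the executable and input file names are taken from your command line; the binding options may need adjusting for your MPI installation):
export OMP_NUM_THREADS=2
mpirun -np 20 --bind-to socket --map-by socket ./lmp -sf omp -pk omp 2 -in in.init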
Almost all the commands you quote contradict what the documentation says you have to do for running with verlet/split, so it is not surprising at all that you get errors. In part it looks like you are making up settings that bear no relation to the documented behavior, but rather reflect what you want those commands to do. Software doesn’t work like that.
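For reference, the documented pattern for verlet/split looks roughly like the following sketch (the 32/8 split is only illustrative; the kspace partition must be an integer fraction, 1/2, 1/3, 1/4, ..., of the pair partition):
mpirun -np 40 ./lmp -partition 32 8 -in in.init
and in the input script:
processors * * * part 1 2 multiple
run_style verlet/split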
Yes.
That makes no sense at all. If you want to compress or stretch a system in the direction where you have free surfaces, you cannot do that by changing the box anyway. You have to either push the atoms together or apart by adding forces to the atoms at and near the surfaces, or, in the case of compression, you may use fix indent or a fix wall with a moving position. At any rate, with the slab configuration and the Poisson solver used to decouple the periodic images, the distance between the periodic images of the slab must be preserved, or else your forces will be bogus because the Poisson solver will no longer converge.
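For illustration only, a setup along those lines might look like this, assuming the slab normal is along z (the wall parameters, starting position, and ramp rate are placeholders, not recommendations):
boundary p p f
kspace_style pppm 0.0001
kspace_modify slab 3.0
# compress the slab with a moving repulsive wall instead of changing the box
variable zwall equal 40.0-0.0001*step   # placeholder start position and rate
fix push all wall/lj93 zhi v_zwall 1.0 1.0 2.5 units box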
Thanks a lot Axel for the reply!
I just wanted to test verlet/split on a small sample, but I do not understand how to use the appropriate commands. Could you please tell me what commands I need to use for verlet/split? I read the documentation several times, but I mess it up every time I launch the simulation.
For the second comment, I was hoping to create a rough wall with a shrink-wrapped boundary condition and avoid wasting cores on the vacuum regions. But as you said, that is not an option, and I have to allow for a large volume of vacuum when I want to stretch the slab, so that the atoms cannot cross the fixed boundary during the tensile test, am I correct? (Might fix balance help here?)
What you have to do is described in the documentation. That is what I would have to tell you. I don’t see how repeating the documentation here would make a difference.
You are mixing and conflating many things that are too much of a hassle for me to disentangle.
Most of all you seem to have a “condition” that I like to call “premature optimization”. You should be first and foremost concerned about getting correct results rather than about wasting resources. At the same time, your difficulty extracting useful information from the documentation, and your eagerness to use advanced methods before being competent with the basics of how LAMMPS operates and sets up systems, make it extremely difficult to provide meaningful advice. The advice I can give would be very specific and technical, but since you are struggling with rather simple things, there is no chance that it would be useful, and thus you would be wasting both my time and yours.
My recommendation thus remains: use MPI plus OpenMP and don’t worry about an exotic feature that is only meaningful for very large-scale calculations. If the domain decomposition worries you, there are two things you have to keep in mind. First, for PPPM it doesn’t matter whether you are processing vacuum or not; you still have to compute the field and process the grid points (PPPM is a grid-based method and uses a uniform grid throughout the entire box), so fix balance cannot help you with that. Second, the simple approach of using
processors * * 2
will work just fine if you make certain your slab is properly centered. Again, this will be sufficient for all but the most extreme cases and also works well with MPI+OpenMP.
Many thanks, Axel!
I will definitely follow your suggestion.