Rotate doesn't work

I want to create a Z=0 plane, but can't do it with the following script:

units si
atom_style atomic

boundary p p p

variable x equal 1e-05
variable y equal 1e-05
variable z equal 1e-05

region box block 0 ${x} 0 ${y} 0 ${z}
create_box 1 box

lattice     hcp 1
# create plane
create_atoms 1 region box

group gac_floor region box

displace_atoms gac_floor rotate 0 0 0 1 0 0 90

shell mkdir vtk
dump 1 all vtk 1 vtk/dump*.vtu id type

The create_atoms command creates a Y=0 plane. Why isn't the rotate command rotating it 90 degrees about the (1,0,0) axis through the point (0,0,0), onto the Z=0 plane?

Hmmm, no, it creates a single atom:

LAMMPS (23 Jun 2022 - Update 1)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
Created orthogonal box = (0 0 0) to (1e-05 1e-05 1e-05)
1 by 1 by 1 MPI processor grid
Lattice spacing in x,y,z = 1 1.7320508 1.6329932
Created 1 atoms

No, the create_atoms command fills the entire simulation box. Since the lattice spacing you provided is huge compared to the dimensions of the simulation cell, only the atom at the origin is created. That atom does get rotated, but since it is a single point and you rotate around the origin, nothing changes.

So to create a plane of atoms, you need to define a separate region for it and choose a lattice constant suitable for your units and box dimensions.
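For example (a sketch; the region name, slab thickness, and lattice constant below are illustrative, not tested values), a single layer at Z=0 can be built by restricting create_atoms to a thin slab region instead of the whole box:

lattice      hcp 1e-07                                       # spacing small enough to fit many atoms in the box
region       floor block 0 1e-05 0 1e-05 0 1e-08 units box   # thin slab at the bottom of the box
create_atoms 1 region floor                                  # fill only the slab, giving a Z=0 layer

This avoids the rotate step entirely, since the atoms are created in the plane you want.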

Looks like I hadn't noticed that my scripts were not writing new VTK outputs. Which leads me to ask: why is this script not writing VTK files?

units si
atom_style atomic

boundary p p f

variable x equal 1e-05
variable y equal 1e-05
variable z equal 1e-05

region box block 0 ${x} 0 ${y} 0 ${z}
create_box 1 box

lattice     hcp 1e-07
# create plane
create_atoms 1 region box

group           gac   type 1               # assign type 1 atoms to gac group

dump du1 all vtk 1 vtk/dump*.vtu id type   # this seems to not be working now

It seems to be creating multiple atoms now, likely because the lattice constant was adjusted (thank you @akohlmey!)

LAMMPS (29 Oct 2020)
Created orthogonal box = (0.0000000 0.0000000 0.0000000) to (1.0000000e-05 1.0000000e-05 1.0000000e-05)
  1 by 1 by 1 MPI processor grid
Lattice spacing in x,y,z = 1.0000000e-07 1.7320508e-07 1.6329932e-07
Created 1420700 atoms
  create_atoms CPU = 0.260 seconds
1420700 atoms in group gac
Total wall time: 0:00:00
LAMMPS (29 Oct 2020)
Created orthogonal box = (0.0000000 0.0000000 0.0000000) to (1.0000000e-05 1.0000000e-05 1.0000000e-05)
  1 by 1 by 1 MPI processor grid
Lattice spacing in x,y,z = 1.0000000e-07 1.7320508e-07 1.6329932e-07
Created 1420700 atoms
  create_atoms CPU = 0.262 seconds
1420700 atoms in group gac
Total wall time: 0:00:00
LAMMPS (29 Oct 2020)
Created orthogonal box = (0.0000000 0.0000000 0.0000000) to (1.0000000e-05 1.0000000e-05 1.0000000e-05)
  1 by 1 by 1 MPI processor grid
Lattice spacing in x,y,z = 1.0000000e-07 1.7320508e-07 1.6329932e-07
Created 1420700 atoms
  create_atoms CPU = 0.261 seconds
1420700 atoms in group gac
Total wall time: 0:00:00
LAMMPS (29 Oct 2020)
Created orthogonal box = (0.0000000 0.0000000 0.0000000) to (1.0000000e-05 1.0000000e-05 1.0000000e-05)
  1 by 1 by 1 MPI processor grid
Lattice spacing in x,y,z = 1.0000000e-07 1.7320508e-07 1.6329932e-07
Created 1420700 atoms
  create_atoms CPU = 0.290 seconds
1420700 atoms in group gac
Total wall time: 0:00:00

Not sure. What platform are you running on?

For simplicity, I would switch for now to a simpler, human-readable dump format that common molecular visualization tools can read. I would also make the system much smaller.
It is better to solve one problem at a time: first figure out how to build the geometry you want, then worry about a bigger system and a different dump file format.
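For example (the file name and output interval here are illustrative), the plain-text atom dump style produces trajectory files that tools like OVITO or VMD can read directly:

dump d1 all atom 100 dump.lammpstrj    # plain-text dump every 100 steps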

There are more problems here:

  • you are using a LAMMPS version that is over two years old. If you run into any bugs, nobody will want to debug them until you can confirm they still exist in the current version. You also won't get the benefit of the improvements and bugfixes added to LAMMPS over the last two years.
  • you are getting the same output multiple times. That happens when you use mpirun/mpiexec with an executable that was not compiled with MPI support, or that links to a different MPI library than the one your mpirun/mpiexec command comes from.

You’re not getting any dump files because you’re not running any MD steps with the “run” command.
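Even a zero-step run is enough to trigger dump output at step 0. A minimal sketch (the pair style, cutoff, and mass values here are placeholders, assuming you don't need interactions yet):

mass       1 1.0               # a mass must be set before "run" can execute
pair_style zero 1.0e-6         # placeholder pair style: computes no forces
pair_coeff * *
dump       du1 all vtk 1 vtk/dump*.vtu id type
run        0                   # a zero-step run still writes dump output at step 0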

@Michael_Jacobs you are right: I was missing the run command needed to get the VTK outputs, besides some other noob mistakes that the LAMMPS developers warned me about. Thank you all for the very helpful input!