segmentation fault and building regions

I am running LAMMPS 3-March-2020 on a machine with Ubuntu 18.04 and 8 cores.

I’m trying to create a simulation box that contains one or two spheres. I’ve been able to get LAMMPS to run with the following script sequence for a single sphere. The atoms inside and outside the sphere are physically the same, but I have given them different types so I can follow their drift, and the two regions have different densities.

units lj
atom_style atomic
boundary p p p

lattice sc 0.9
region theregion block 0 20 0 20 0 20
region regionin sphere 10 10 10 7
region regionout sphere 10 10 10 7 side out
region regionsection block 0 20 0 20 9 11
create_box 2 theregion
create_atoms 1 region regionin ratio 1.0 23423
create_atoms 2 region regionout ratio 0.8 23049
mass 1 1.0
mass 2 1.0

Please note that you are running with 16 MPI ranks (even though you have only 8 cores) and 16 OpenMP threads per rank. That is a total of 256 concurrent calculation threads.
Does the segfault also happen when using only 1 MPI rank and 1 OpenMP thread, or with 8 MPI ranks and 1 OpenMP thread?
The segmentation fault seems to be related to the thread library, so it is possible that you are running out of stack space.
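For example, assuming the executable is called lmp_mpi and the input file is in.spheres (placeholder names), the two test cases could be launched as:

export OMP_NUM_THREADS=1
mpiexec -np 1 lmp_mpi -in in.spheres
mpiexec -np 8 lmp_mpi -in in.spheres

If the stack turns out to be the problem, raising the limit with ulimit -s unlimited in the same shell before launching may also be worth a try.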

axel.

I just tried various combinations. If I run mpiexec with -np set to anything other than 1 and OMP_NUM_THREADS unset, I get a segfault. It runs (and so does the two-sphere case) with OMP_NUM_THREADS set to either 1 or 16.

[As a side question: I’m not used to running with MPI except for LAMMPS. From earlier correspondence I had the impression that OpenMP was a poor substitute for MPI, but in this case I apparently can’t request more than 1 MPI rank.]

This appears to be an issue with your local environment or with how you compile, link, or install LAMMPS.
I cannot reproduce it with the latest patch release (24 August 2020, released just today) on my development machine.

I recommend you recompile LAMMPS with debug information included and then try to obtain more meaningful stack traces.
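A minimal sketch of how that could look with a CMake build and gdb (the build directory, the executable name lmp, and the input file in.spheres are placeholders for your own setup):

cd lammps/build
cmake -D CMAKE_BUILD_TYPE=Debug ../cmake    # reconfigure with debug symbols
make -j 8
gdb --args ./lmp -in in.spheres             # type "run" at the gdb prompt, then "bt" after the crash

The backtrace printed by bt should show in which function the segfault happens.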

For inputs like the one you provided, parallel computing isn’t going to make much of a difference in the first place. Using more tasks than physical cores rarely gives you much of an advantage and often results in additional overhead that slows you down.
You won’t be taking advantage of OpenMP with the quoted command line, even if support for it is compiled into the executable, so launching as many threads as you are is just a waste.
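For reference, a command line that would actually use OpenMP looks roughly like the following, assuming the executable was built with the USER-OMP package (lmp_mpi and in.spheres are again placeholders):

export OMP_NUM_THREADS=2
mpiexec -np 4 lmp_mpi -sf omp -pk omp 2 -in in.spheres

Here 4 MPI ranks times 2 OpenMP threads matches your 8 physical cores; for a system of this size, plain MPI with up to 8 ranks and OMP_NUM_THREADS=1 is likely the simpler choice.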

axel.