Job termination using REPLICA command temper/grem

Hello LAMMPS developers, I have the following problem when running the example for temper/grem (inputs in the directory ~/lammps/examples/PACKAGES/grem/lj-temper/). When I run:
mpirun -np 4 ~/lammps/build/lmp -p 4x1 -in in.gREM-temper

LAMMPS (29 Aug 2024)
Running on 4 partitions of processors
Abort(1) on node 1 (rank 1 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1

I don't know what happened, because when I run the in.melt example on 4 CPUs there is no problem:

mpirun -np 4 ~/lammps/build/lmp -in in.melt
LAMMPS (29 Aug 2024 - Update 1)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
Lattice spacing in x,y,z = 1.6795962 1.6795962 1.6795962
Created orthogonal box = (0 0 0) to (16.795962 16.795962 16.795962)
1 by 2 by 2 MPI processor grid
Created 4000 atoms
using lattice units in orthogonal box = (0 0 0) to (16.795962 16.795962 16.795962)
create_atoms CPU = 0.001 seconds
Generated 0 of 0 mixed pair_coeff terms from geometric mixing rule
Neighbor list info …
update: every = 20 steps, delay = 0 steps, check = no
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d
bin: standard
Setting up Verlet run …
Unit style : lj
Current step : 0
Time step : 0.005
Per MPI rank memory allocation (min/avg/max) = 2.706 | 2.706 | 2.706 Mbytes
Step Temp E_pair E_mol TotEng Press
0 3 -6.7733681 0 -2.2744931 -3.7033504
50 1.6842865 -4.8082494 0 -2.2824513 5.5666131
100 1.6712577 -4.7875609 0 -2.281301 5.6613913
150 1.6444751 -4.7471034 0 -2.2810074 5.8614211
200 1.6471542 -4.7509053 0 -2.2807916 5.8805431
250 1.6645597 -4.7774327 0 -2.2812174 5.7526089
Loop time of 0.140831 on 4 procs for 250 steps with 4000 atoms

Performance: 766876.510 tau/day, 1775.177 timesteps/s, 7.101 Matom-step/s
99.0% CPU use with 4 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total

Pair | 0.065507 | 0.093171 | 0.10357 | 5.3 | 66.16
Neigh | 0.013328 | 0.016726 | 0.018237 | 1.5 | 11.88
Comm | 0.01486 | 0.027464 | 0.059723 | 11.3 | 19.50
Output | 0.00021964 | 0.00025251 | 0.00031458 | 0.0 | 0.18
Modify | 0.0015173 | 0.0019865 | 0.0022271 | 0.6 | 1.41
Other | | 0.00123 | | | 0.87

Nlocal: 1000 ave 1008 max 987 min
Histogram: 1 0 0 0 0 0 1 0 1 1
Nghost: 2711.25 ave 2728 max 2693 min
Histogram: 1 0 0 0 0 2 0 0 0 1
Neighs: 37947 ave 38966 max 37338 min
Histogram: 1 1 0 1 0 0 0 0 0 1

Total # of neighbors = 151788
Ave neighs/atom = 37.947
Neighbor list builds = 12
Dangerous builds not checked
Total wall time: 0:00:00

I built LAMMPS from the stable version using the following command:
cmake -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DCMAKE_Fortran_COMPILER=ifx -D PKG_KSPACE=yes -D PKG_REPLICA=yes …/cmake

I hope you can help me.

Thanks,
Samuel

Well, we don't know what happened either, since you did not provide the output from the screen.# files, specifically screen.1. You may have to use the -nb flag when running LAMMPS in case those files are empty.
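For example, a sketch of the same run with output buffering turned off (-nb is the short form of -nonbuf; with -p 4x1, each partition writes its own screen.N and log.lammps.N file in the working directory):

mpirun -np 4 ~/lammps/build/lmp -p 4x1 -nb -in in.gREM-temper

The error message from the failing replica should then show up in screen.1 (or log.lammps.1).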

I have no problem running this input deck on either the stable 29 Aug 2024 version or the current 4 Feb 2025 version.

Thanks for your suggestion. There was an extra package that needed to be installed; now it works.

I'm sorry for this misunderstanding.

This is a very cryptic statement. Can you elaborate a bit more, so that others who run into the same issue, or who search for solutions to similar problems, can learn what they would need to do and how they can diagnose it?

Thanks in advance.

Of course. Basically, when I built LAMMPS I forgot the MOLECULE and RIGID packages, and these inputs need those packages. After adding them it works (see the command below).
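For reference, the configure command now looks like this (a sketch assuming the same compilers as before and a build directory inside the LAMMPS source tree, so the CMake scripts are at ../cmake; only the two package flags for MOLECULE and RIGID are added to my original command):

cmake -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DCMAKE_Fortran_COMPILER=ifx -D PKG_KSPACE=yes -D PKG_REPLICA=yes -D PKG_MOLECULE=yes -D PKG_RIGID=yes ../cmake
cmake --build .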

Thanks,
Samuel