No response after read_data

Hello everyone,
Recently I updated my Manjaro system and restarted it, and I found that when I try to run LAMMPS with MPI, it gives no response after read_data. It looks like this:

LAMMPS (23 Jun 2022)
Reading data file ...
  orthogonal box = (3.9775 3.9775 -2) to (44.8775 44.8775 62)
  2 by 2 by 4 MPI processor grid
  reading atoms ...
  2234 atoms
  read_data CPU = 0.095 seconds

Then I recompiled LAMMPS, but it didn't help. What should I do now?

Does this happen for all data files or just this one?
Can you try some of the examples bundled with LAMMPS and report back?
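
For example, a quick parallel test with one of the bundled inputs (assuming the standard examples/melt directory that ships with the LAMMPS source, and the same lmp_mpi binary) might look like:

cd lammps/examples/melt
mpirun -np 4 lmp_mpi -in in.melt

If that also hangs, the problem is probably not specific to your own input.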

This is a blind guess, but since you said "… when I was trying to run lammps with mpi", maybe you implicitly meant "it is fine if I run lammps without mpi". And since the problem did not go away after recompiling LAMMPS, I think it's possible that the system update broke the MPI installation. I would first check whether mpirun can correctly execute a simple MPI program, and whether the versions of mpirun and the MPI library match (see the sketch below). The next step might be to install another MPI separately and see if LAMMPS can run with that.

Ignore what I said if the problem persists when run in serial.
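
A minimal sanity check of the MPI installation (a sketch assuming OpenMPI, with mpicc and mpirun on the PATH; hello_mpi.c is just an example file name) could be:

# check which mpirun is picked up and its version
which mpirun
mpirun --version

# build and run a trivial MPI program
cat > hello_mpi.c << 'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
mpicc hello_mpi.c -o hello_mpi
mpirun -np 4 ./hello_mpi

If that hangs or prints errors, the MPI installation itself is the problem, independent of LAMMPS.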

Does the program hang, or die? Sometimes a segmentation fault can kill a program without any output (you would need gdb to track it down), but I think that is more typical of serial runs; a hang is more common in parallel. Like others said, we need more info.
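
If it does turn out to be a crash rather than a hang, a serial run under gdb (a sketch; in.test is a placeholder for your input script) can give a backtrace:

gdb --args lmp_mpi -nocite -in in.test
# at the gdb prompt: type "run", and after the crash type "bt" to print the backtrace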

It happens for all my scripts. I run LAMMPS as:

mpirun -np 16 --mca opal_warn_on_missing_libcuda 0 lmp_mpi -nocite -in

I checked with the top command; it has been running overnight, but I got nothing.

I reinstalled the whole LAMMPS directory, and the problem is still the same. Please give me any advice to get LAMMPS working again. The last thing I want to do is reinstall the whole Manjaro system…
Thanks.

It is hanging there without any output. If I run it in serial, everything just works fine.

The problem is gone when running in serial. Then I reinstalled LAMMPS entirely, but the problem is still there…
If the system MPI is broken, I will try to build my own MPI library to test with.
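
A rough sketch of how that can be done, assuming an OpenMPI release tarball downloaded from open-mpi.org (the version number and install prefix are just examples):

# unpack, configure into a private prefix, build and install
tar xjf openmpi-4.1.4.tar.bz2
cd openmpi-4.1.4
./configure --prefix=$HOME/opt/openmpi
make -j$(nproc)
make install

# make the new MPI visible ahead of the system one
export PATH=$HOME/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=$HOME/opt/openmpi/lib:$LD_LIBRARY_PATH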

The problem seems to be related to the system MPI. I fixed it by compiling LAMMPS with my own MPI library. But I don't know why I couldn't fix it by recompiling LAMMPS with the system MPI.
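
For reference, a sketch of pointing the LAMMPS CMake build at a non-system MPI (assuming the custom OpenMPI was installed under $HOME/opt/openmpi as above):

cd lammps
mkdir -p build && cd build
cmake ../cmake \
  -D BUILD_MPI=yes \
  -D MPI_CXX_COMPILER=$HOME/opt/openmpi/bin/mpicxx
make -j$(nproc)

The resulting lmp binary should then be launched with the matching mpirun from $HOME/opt/openmpi/bin.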

The initial straightforward suspicion is that your system’s “default” MPI is broken. All the best working out why!
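
On Manjaro it may also be worth checking whether the packaged MPI was updated along with the rest of the system, and reinstalling it (assuming the package is named openmpi):

pacman -Qi openmpi     # show the installed version and install date
sudo pacman -S openmpi # reinstall the package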

Please try adding --mca btl vader,self to your mpirun command.
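
For context, this restricts Open MPI's point-to-point transports to the shared-memory (vader) and loopback (self) components, which can work around a broken or misconfigured network transport on a single node. Applied to the command above (in.test is just a placeholder for the input script), it would look like:

mpirun -np 16 --mca btl vader,self --mca opal_warn_on_missing_libcuda 0 lmp_mpi -nocite -in in.test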