Problem running LAMMPS on a cluster server

Dear LAMMPS users,

Recently, I ran into a problem running LAMMPS on our cluster server. I would highly appreciate it if you could take a look at the problem explained below and let me know how I can fix it.

I have a LAMMPS script (you can find it in the attachment), and when I run it on the cluster server, LAMMPS does not use the full CPU capacity of the node, as you can see in the output below. This makes my simulation take much longer than it normally would.

“”"
Performance: 0.587 ns/day, 40.911 hours/ns, 6.790 timesteps/s
47.8% CPU use with 16 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total

Input_lmp.in (9.61 KB)

I think this is first and foremost something that you need to contact the user support of your cluster about.
The most likely scenario is that other processes that should not be there are running on the same node and consuming resources, which would force your calculation to use swap.
Another possible explanation is that you are running across multiple nodes without a fully functional high-speed interconnect, as the time spent in “Comm” is unusually high.
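If you can get a shell on the compute node while the job is running, a quick way to check both points is to look at per-core load, swap usage, and which processes are actually consuming CPU. A minimal Python sketch of such a check (assuming the psutil package is available on the node; the 2-second window and 5% threshold are just illustrative choices):

```python
# Quick node-load check, assuming the psutil package is installed on the
# compute node. Run it on the same node as the LAMMPS job (e.g. from an
# interactive shell) to see whether stray processes are eating CPU time.
import psutil

# Prime per-process CPU counters so the next reading covers a real interval.
procs = []
for p in psutil.process_iter(['pid', 'name', 'username']):
    try:
        p.cpu_percent(None)
        procs.append(p)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

# Blocking 2-second window: per-core utilization. With 16 busy MPI ranks you
# would expect 16 cores near 100%; much lower values suggest oversubscription
# or swapping.
per_core = psutil.cpu_percent(interval=2.0, percpu=True)
print("per-core CPU %:", per_core)
print("swap used: %.1f%%" % psutil.swap_memory().percent)

# Processes that consumed noticeable CPU over the same window
# (the 5% threshold is arbitrary).
for p in procs:
    try:
        usage = p.cpu_percent(None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue
    if usage > 5.0:
        print(f"{usage:6.1f}%  pid={p.info['pid']:>7}  {p.info['name']}  ({p.info['username']})")
```

If you see significantly more busy processes than your 16 MPI ranks, or non-zero swap usage, that would support the first explanation above.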

I don’t see anything in your input that would be causing such a slowdown otherwise.

Axel.