Dear LAMMPS Users,
Recently, I ran into a problem running LAMMPS on a cluster server. I would greatly appreciate it if you could take a look at the problem described below and let me know how I can fix it.
I have a LAMMPS script (attached). When I run it on the cluster, as the log output below shows, LAMMPS does not use the full CPU capacity of the node, which makes the simulation take much longer than it normally would:
Performance: 0.587 ns/day, 40.911 hours/ns, 6.790 timesteps/s
47.8% CPU use with 16 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
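For context, this is roughly how I launch the job (a minimal sketch assuming the standard mpirun launcher and an executable named lmp; my actual batch script may differ):

    # 16 MPI ranks, no OpenMP threads, matching the log line above;
    # explicit core binding can affect CPU utilization on shared nodes.
    mpirun -np 16 --bind-to core lmp -in Input_lmp.in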
Attachment: Input_lmp.in (9.61 KB)