Are there any published/reported performance numbers of LAMMPS on an NVIDIA H100 GPU? We are specifically interested in the expected performance increase of the H100 over an A100.
A100 data is available here for a variety of cases and GPU counts:
I don’t have any test results for this, but as a reference, the peak FLOPS of the H100 (SXM) is roughly 3.4x that of the A100, and its memory bandwidth is roughly 2x (List of Nvidia graphics processing units - Wikipedia). So the theoretical speedup for a typical MD simulation should fall somewhere between those two ratios, provided the problem is large enough and there are no other bottlenecks (such as CPU–GPU data transfer or thermal throttling).
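To make the "somewhere in between" estimate concrete, here is a back-of-envelope sketch. It uses only the peak-spec ratios quoted above (~3.4x FLOPS, ~2x bandwidth), not measured LAMMPS numbers, and the interpolation by "fraction compute-bound" is a crude assumption of mine, not a roofline analysis:

```python
# Rough H100-vs-A100 speedup bounds from the peak-spec ratios quoted above.
# These are assumptions for illustration, not measured LAMMPS results.
flops_ratio = 3.4  # ~3.4x peak FLOPS: ceiling for a fully compute-bound kernel
bw_ratio = 2.0     # ~2x memory bandwidth: ceiling for a fully memory-bound kernel

def expected_speedup(compute_fraction: float) -> float:
    """Crude linear interpolation between the two ceilings.

    compute_fraction = 0.0 -> fully memory-bound, 1.0 -> fully compute-bound.
    """
    return bw_ratio + compute_fraction * (flops_ratio - bw_ratio)

for f in (0.0, 0.5, 1.0):
    print(f"fraction compute-bound {f:.1f}: ~{expected_speedup(f):.1f}x")
# -> between ~2.0x and ~3.4x, e.g. ~2.7x at the midpoint
```

In practice the realized fraction depends on the pair style, precision mode, and system size, so benchmarking your own input deck is the only reliable answer.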