[lammps-users] Running LAMMPS in a GPU


Has anyone ported/run LAMMPS on a GPU? I've heard that GPUs are much
faster than CPUs because they are designed to handle many more threads.
NVIDIA also has a C-like programming language called CUDA, and on CUDA's
website we can see lots of papers on MD simulations on GPUs.


Hi Jan-Michael

Interesting question. In fact I started working with CUDA some weeks ago (right now for parameter fitting of MD potentials), and I am thinking of trying to port LAMMPS via CUDA as soon as that is running. There are several problems, though. While it is easy enough to do the force calculation on the GPU while keeping the rest of LAMMPS as it is, the memory bandwidth does not allow transferring the position and velocity data to the CPU each timestep. That would cause problems with most of the "fix"es in LAMMPS, but I am quite sure this can be solved. If there are others thinking about such modifications, maybe we could discuss some ideas and exchange our experience.

Best regards

-------- Original Message --------


We here at Sandia have also thought a bit about doing MD on GPUs. To
my knowledge, no one has ported LAMMPS to run on GPUs --- I think it
would be difficult if not impossible. Porting portions of LAMMPS, on
the other hand, would be a more reasonable thing to do. And there may
be enough interest among LAMMPS users to make a joint attempt at such
a thing.

There's a group at Iowa State that has worked a considerable amount on
doing MD on GPUs and comparing vs LAMMPS. I'll CC Josh Anderson, who
has done most of this work at Iowa State. Josh may wish to comment
further and/or point us to his work on the topic.




At the U.S. Army Research Lab we have a project looking at using GPUs and other architectures for high performance computing. As part of that project I have ported the Lennard-Jones potential from LAMMPS to CUDA. We are currently working on porting other portions of LAMMPS to obtain better performance.

If a working group of sorts is put together for LAMMPS on GPUs we would be interested in this.

Brian J. Henz
U.S. Army Research Laboratory
Advanced Computing and Computational Sciences Division
APG, MD 21005
Phone: 410-278-6531
Fax: 410-278-4983
email: [email protected]

I agree with Paul that porting all of LAMMPS would be nearly impossible. Porting parts is possible, but the potential speedups will be limited. For this reason and others, our work on GPU MD started from the ground up, but we designed it in the spirit of LAMMPS so that all of LAMMPS's configurability could be built into our code. Using a single 8800 GTX graphics card, we have achieved performance equivalent to LAMMPS running on ~32 Opteron processor cores on a fast cluster with InfiniBand (http://andrew.ait.iastate.edu/HPC/lightning/description.html).

I've got a Tesla D870 sitting on my desk right now and have plans to split the computations across its two GPUs to double that performance :) Using 4 GPUs in a single workstation is also a possibility.

For those who are interested in our work, our code is available open source: http://www.ameslab.gov/hoomd The current version is mostly a demo, but we have a very solid architecture in place. With a modest amount of work (I expect to complete it ~June-July), HOOMD will have a full-fledged scripting system modeled after LAMMPS's, and our software will be capable of performing any Lennard-Jones type particle simulation that LAMMPS can. Other short-range force fields are easy to add, and we are thinking about how to implement electrostatics on the GPU.

We have published a paper on our work titled "General purpose molecular dynamics simulations fully implemented on graphics processing units" in the Journal of Computational Physics. Here is the DOI link: http://dx.doi.org/10.1016/j.jcp.2008.01.047

I'm happy to answer any questions that anyone might have.