I agree with Paul that porting all of LAMMPS would be nearly impossible. Porting parts is possible, but the potential speedups would be limited. For this reason and others, our work on GPU MD started from the ground up, though we designed it in the spirit of LAMMPS so that all of LAMMPS's configurability could be built into our code. Using a single 8800 GTX graphics card, we have achieved performance equivalent to LAMMPS running on ~32 Opteron processor cores of a fast InfiniBand cluster (http://andrew.ait.iastate.edu/HPC/lightning/description.html).
I've got a Tesla D870 sitting on my desk right now and have plans to split the computations across its two GPUs to double that performance. Using 4 GPUs in a single workstation is also a possibility.
For those who are interested in our work, our code is available open source: http://www.ameslab.gov/hoomd The current version is mostly a demo, but we have a very solid architecture in place. With a modest amount of work (I expect it to be complete around June-July), HOOMD will have a full-fledged scripting system modeled after LAMMPS's, and our software will be capable of performing any Lennard-Jones type particle simulation that LAMMPS can. Other short-range force fields are easy to add, and we are thinking about how to implement electrostatics on the GPU.
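For anyone unfamiliar with what a "Lennard-Jones type" simulation computes per particle pair, here is a minimal sketch of the standard 12-6 Lennard-Jones potential and force magnitude. This is purely illustrative (plain Python, not HOOMD's actual GPU kernel code; the function name and default parameters are my own):

```python
# Illustrative sketch of the standard 12-6 Lennard-Jones pair interaction.
# This is NOT code from HOOMD or LAMMPS -- just the textbook formulas.
def lj_pair(r, epsilon=1.0, sigma=1.0):
    """Return (potential, force magnitude) at pair separation r.

    V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]
    F(r) = -dV/dr = (24*eps/r)*[2*(sigma/r)^12 - (sigma/r)^6]
    """
    sr6 = (sigma / r) ** 6          # (sigma/r)^6
    potential = 4.0 * epsilon * (sr6 * sr6 - sr6)
    force = (24.0 * epsilon / r) * (2.0 * sr6 * sr6 - sr6)
    return potential, force

# Sanity check: at the potential minimum r = 2^(1/6)*sigma,
# the force vanishes and the potential equals -epsilon.
r_min = 2.0 ** (1.0 / 6.0)
V, F = lj_pair(r_min)
```

An MD code evaluates this for every pair within a cutoff radius each timestep, which is why short-range force fields of this form map so naturally onto the GPU.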
We have published a paper on our work titled "General purpose molecular dynamics simulations fully implemented on graphics processing units" in the Journal of Computational Physics. Here is the DOI link: http://dx.doi.org/10.1016/j.jcp.2008.01.047
I'm happy to answer any questions that anyone might have.