Dear support, I'm very happy with LAMMPS because, on my system, it scales
very well up to 512 CPUs, and extending LAMMPS to add new features
is very easy.
I'm interested in trying LAMMPS on a CUDA device. Has anybody done
this kind of simulation on a graphics card?
porting MD codes, particularly existing ones, to GPGPUs is
far from trivial. most GPGPUs support only single precision
floating point math, and to use them efficiently one basically
has to rewrite the MD code from scratch. there are a number
of efforts to implement MD codes on GPGPUs (i should have a
couple of URLs and links to papers somewhere and can dig
them out and send them around if people are interested),
but significant speedups have so far only been achieved for
plain LJ interactions. on top of that, one has to consider
the errors introduced by single precision math (which can
be quite considerable for any non-homogeneous system).
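to make the single precision concern concrete, here is a small,
hypothetical sketch (not taken from any MD code, and the numbers are
made up for illustration): it emulates 32-bit rounding in Python and
shows that once an accumulated total (say, a potential energy sum) is
large enough, small pairwise contributions fall below half a
single-precision ulp and are silently dropped, while a double
precision accumulator keeps them.

```python
import struct

def f32(x):
    """Round a Python float (64-bit) to IEEE-754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# hypothetical numbers: a large running total plus one million
# small pairwise energy contributions
total = 10000.0
contrib = 1.0e-4
n = 1_000_000

acc64 = total         # double precision accumulator
acc32 = f32(total)    # emulated single precision accumulator
for _ in range(n):
    acc64 += contrib
    acc32 = f32(acc32 + contrib)

print(acc64)  # ~10100.0: all contributions accumulated
print(acc32)  # 10000.0: every contribution rounded away in float
```

this is an extreme case, of course, but the same rounding loss
creeps into energy and force accumulation whenever contributions of
very different magnitudes meet, which is exactly what happens in
non-homogeneous systems.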