Hello people,
Has anyone run GPU-accelerated LAMMPS calculations, in particular with the EAM interaction model, on the cylindrical Mac Pro?
Our group needs to buy some hardware for visualisation purposes and eam/gpu number crunching. For the former a Mac Pro would be a good, though somewhat expensive, machine. If the number crunching can be done on the same hardware (Mac Pros have two fairly good AMD cards in them), that would make it a very good solution that kills two birds with one stone. Would anyone have some data on LAMMPS eam/gpu performance on the cylindrical Mac Pro? The systems we'd like to run would be fairly large, millions to low tens of millions of atoms.
One drawback of the Mac Pro would be that the AMD cards in it won't do CUDA. Would anyone have some performance data on how eam/gpu stacks up against eam/cuda for large systems?
greets,
Peter
> One drawback of the Mac Pro would be that the AMD cards in it won't do CUDA. Would anyone have some performance data on how eam/gpu stacks up against eam/cuda for large systems?
The benchmark page on the LAMMPS web site has a section “GPU and USER-CUDA bench …”
which has plots for the EAM benchmark with both packages.
Steve
The GPU package doc page http://lammps.sandia.gov/doc/accelerate_gpu.html
says:
"To use this package, you currently need to have an
NVIDIA GPU and install the NVIDIA Cuda software on
your system"
Is this no longer valid?
M.
There is a Makefile for OpenCL, which seems to imply that it should also work for AMD GPUs, but I have never tried to actually compile it for anything but Nvidia so I am not sure.
> There is a Makefile for OpenCL, which seems to imply that it should also
> work for AMD GPUs, but I have never tried to actually compile it for
> anything but Nvidia so I am not sure.
yes, you can compile the GPU package for OpenCL and that works for
both Nvidia and AMD GPUs, as well as Xeon Phi and supported CPUs via
the Intel OpenCL runtime.
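
(a minimal build sketch, in case it helps; the OpenCL makefile name in
lib/gpu is an assumption here and changes between LAMMPS versions, so
check what is actually shipped and the accelerate_gpu docs first:)

  # build the GPU support library against OpenCL instead of CUDA
  cd lammps/lib/gpu
  make -f Makefile.linux_opencl   # makefile name assumed; use the OpenCL one in your tree
  # then enable the GPU package and build the LAMMPS binary as usual
  cd ../../src
  make yes-gpu
  make mpi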
however, the performance of these various options is a mixed bag. that is
due to two reasons: there are differences in architecture that would
require a somewhat different data model to achieve the best
performance (this is something the KOKKOS library and package try to
address); also, the quality of the underlying driver has an impact.
several years back AMD donated some GPUs to us and we used them to
debug, test and tune the GPU package. it worked particularly well
in double precision, since AMD hardware in general has *much*
better support for double precision operations. however, the AMD GPU
drivers are not as streamlined and optimized as the corresponding
Nvidia drivers, which showed in particular when oversubscribing GPUs,
which is how you can squeeze even more performance out of the GPU
package.
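
(to make "oversubscribing" concrete: it simply means assigning several
MPI ranks to the same GPU, roughly like the sketch below. the -sf/-pk
switches and the package gpu syntax differ between LAMMPS versions, so
check the documentation rather than copying this verbatim:)

  # 8 MPI ranks sharing 2 GPUs, i.e. 4 ranks per GPU
  # (lmp_mpi stands for whatever your LAMMPS binary is called)
  mpirun -np 8 lmp_mpi -sf gpu -pk gpu 2 -in in.eam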
however, the general experience with GPU support in other software
packages is that it is highly recommended to stick with a linux
environment. mac os x often has additional quirks and performance
issues; recently, the yosemite release broke a lot of GPU-accelerated
software in multiple ways. a straightforward linux machine will save
you money and trouble.
the best way to go about it is to request test hardware and compare
prices and capabilities. if you want to use double precision (highly
recommended for very large systems), then AMD hardware is worth
considering seriously. please keep in mind that the major memory
consumer in LAMMPS is the neighbor lists, and constructing them on
GPUs is even more demanding, so you will have to run tests with
properly sized systems to ensure you have enough GPU RAM.
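
(as a sizing sketch: if i remember correctly, the stock EAM benchmark
script in the bench/ directory scales with its x/y/z index variables,
roughly 32000*x*y*z atoms, so something like the line below gives a
~4 million atom test case for probing GPU memory use. treat the
numbers and the binary name as placeholders:)

  # ~4 million atom EAM test on one GPU; increase -var x/y/z to probe memory limits
  mpirun -np 4 lmp_mpi -sf gpu -pk gpu 1 -var x 5 -var y 5 -var z 5 -in bench/in.eam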
axel.