Estimation of required computational capacity

Hi everybody

Could anybody give an estimate of the CPU and GPU capacity required to run 100 ns of a system comprising about 100,000 particles (LJ potential)? My deadline is one week.

Please also let me know about your experience with other huge systems.

Any hint is welcome.

Sincerely

> Hi everybody
>
> Could anybody give an estimate of the CPU and GPU capacity required to
> run 100 ns of a system comprising about 100,000 particles (LJ
> potential)? My deadline is one week.

impossible to say at such a general level. you *have* to do some
benchmarks, if only to figure out the most effective choice for your system.
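as an illustration, here is a minimal benchmark input in the spirit of
the standard LJ melt benchmark, scaled to roughly 100,000 atoms. this is
a sketch only: the density, cutoff, and run length are assumptions that
you should replace with your production settings. save it as, e.g., in.lj:

# 3d lennard-jones melt, ~108,000 atoms (30^3 fcc unit cells, 4 atoms each)
units           lj
atom_style      atomic

lattice         fcc 0.8442
region          box block 0 30 0 30 0 30
create_box      1 box
create_atoms    1 box
mass            1 1.0

velocity        all create 1.44 87287 loop geom

pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5

neighbor        0.3 bin
neigh_modify    delay 0 every 20 check no

fix             1 all nve

# short run: read the timesteps/s number from the "Performance:" line of the log
run             1000

run it once per candidate machine and configuration and compare the
reported timesteps/s.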

resource requirements depend not only on the number of particles and the
choice of potential, but also on the following (a back-of-the-envelope
illustration follows the list):
- density / geometry of the system / load (im)balance
- choice of cutoff
- mass / timestep
- other computations/features used (e.g. rdf or coordination number
computations, complex scripting, etc.)
- CPU hardware and general per-node performance
- network hardware and configuration
- number of nodes per job, number of cores per node, choice of MPI vs.
threading
- kind of GPU hardware, clock rates, memory speed (on both GPU and CPU),
the bus connecting the GPU, and how many bus lanes per GPU are
available for exclusive or shared use
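
to make the arithmetic concrete (every number below is assumed for
illustration, not measured): 100 ns at a 2 fs timestep means 5x10^7
timesteps. at 100 timesteps/s that is 5x10^5 seconds, i.e. about 139
hours or 5.8 days; at 500 timesteps/s it drops to about 28 hours. so
whether a one-week deadline is realistic hinges entirely on the
timesteps/s you measure.

the same benchmark input can be launched in the different parallel modes
from the list above (the binary name "lmp" and the rank/thread/GPU
counts are assumptions; adjust them to your installation):

mpirun -np 16 lmp -in in.lj                    # pure MPI
mpirun -np 4 lmp -sf omp -pk omp 4 -in in.lj   # MPI + 4 OpenMP threads per rank
mpirun -np 4 lmp -sf gpu -pk gpu 1 -in in.lj   # offload the pair computation to 1 GPU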

> Please also let me know about your experience with other huge systems.

100,000 LJ particles is not "huge" for LAMMPS. LAMMPS can handle, if
compiled with suitable options and run across a sufficiently large number
of MPI ranks, many billions of particles.
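
what "suitable options" means here: the integer sizes that cap the atom
and atom-ID counts are selected at compile time. with the cmake build
this looks roughly like the lines below (an assumption about your build
setup; it only matters beyond about 2 billion atoms, so it is irrelevant
for a 100,000-particle system):

# select 64-bit atom IDs and image flags at build time (cmake build assumed)
cmake -D LAMMPS_SIZES=bigbig ../cmake
make -j 8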

some illustrative numbers for LAMMPS scaling capabilities are shown here:
http://lammps.sandia.gov/bench.html
mind you, most of those numbers are for old hardware and specific inputs.

axel.