General queries on LAMMPS capabilities and benchmark study

Dear Dr. Kohlmeyer,
As always, I was expecting a detailed response from you.

...and as (too) many people, you are forgetting to cc: the mailing
list on your reply.

I really appreciate your guidance and suggestion on this.
In line with my previous queries:
1. Based on your knowledge and experience on the users forum, could you
please point me to individuals or groups who have attempted massively
large simulations?

why? you haven't answered my question about what use that information
would be for.

2. I do understand the limits of a user's ability to run simulations on a
large cluster. The real guidance I need is: if I were given the task of
running a million-atom simulation with sufficient computational resources,
could I scale it?

please do your homework. As I mentioned, information about this has
been available on the LAMMPS home page for YEARS. Millions of atoms is
ridiculously small in that context; it has long been possible to run
more than 2 billion particles with the lj/cut pair style. I did this
(just for kicks) on the then rank #1 machine of the TOP500 list, using
about two thirds of its hundreds of thousands of CPU cores (for 15
minutes, since they were closing the machine for maintenance); scaling was just

3. A few references on large simulations would be what I am looking for from
an expert like you.

many people do simulations with millions of particles.

By the way, I went through the publication list, and most of the studies fail
to list the scale of the simulations.

it is extremely easy to infer this information from the model, the
density, and the geometrical dimensions of a simulation.
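To illustrate the point: the atom count is just the number density times the
box volume. A minimal sketch, using the reduced density 0.8442 of the
standard LAMMPS "melt" Lennard-Jones benchmark; the box side length here is
an illustrative value, not one taken from any particular publication:

```python
# Infer simulation size from number density and box dimensions.

def atom_count(density, lx, ly, lz):
    """Number of atoms = number density * box volume."""
    return int(round(density * lx * ly * lz))

# LJ melt at reduced density rho* = 0.8442 in a cubic box of side
# 167.6 sigma comes out to roughly 4 million atoms.
print(atom_count(0.8442, 167.6, 167.6, 167.6))
```

So a paper that reports the model, the density, and the box dimensions has
effectively reported the system size, even if it never states the atom count.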

again, please do your homework. And doing your homework means not just
looking for *exactly* the information you want presented to you on a
silver platter, but also making an effort to reconstruct it, in obvious
ways, from the information that is available.