[lammps-users] Visualization using ParaView or VisIt

Having a number of LAMMPS users on our campus and ample compute resources to generate large data sets, we are now bumping up against our next obstacle: visualizing these large (10 million to 50 million atom) data sets. We have looked at both ParaView and VisIt, but there doesn't seem to be a simple way to read the LAMMPS output data into these apps. Our interest in these two packages in particular is that they can run in parallel across multiple machines, utilizing both CPU and memory, which in turn would allow us to manipulate the data sets in graphical form more effectively (e.g. dynamic cutting planes).

It seems as if a VTK output option might be the best route, but I wanted to poll the user list to see if anyone is in fact using either of these packages with any success to view large-scale atomistic models.

Kind regards,

LAMMPS can write one dump file per processor (put a "%" character
in the dump filename; see the dump command), so if you want multiple
files as input to parallel viz, that's a start. You'll likely have
to do a conversion, but that is typically a post-processing task.
The Pizza.py toolkit has converters from LAMMPS dump files to VTK
format.
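
As a rough illustration of that conversion step (not the Pizza.py
code itself, just the idea), here is a minimal Python sketch that
turns the first snapshot of a text dump into a legacy VTK file that
ParaView or VisIt can read. It assumes the dump was written with
"id type x y z" columns; the file names are placeholders.

#!/usr/bin/env python
# dump2vtk.py: convert the first snapshot of a LAMMPS text dump
# (columns: id type x y z) into a legacy VTK polydata file.
import sys

def read_snapshot(fname):
    # return a list of (id, type, x, y, z) tuples for the first snapshot
    atoms = []
    natoms = 0
    with open(fname) as f:
        line = f.readline()
        while line:
            if line.startswith("ITEM: NUMBER OF ATOMS"):
                natoms = int(f.readline())
            elif line.startswith("ITEM: ATOMS"):
                for _ in range(natoms):
                    tok = f.readline().split()
                    atoms.append((int(tok[0]), int(tok[1]),
                                  float(tok[2]), float(tok[3]),
                                  float(tok[4])))
                break
            line = f.readline()
    return atoms

def write_vtk(fname, atoms):
    # points, one vertex cell per atom, atom type as point data
    with open(fname, "w") as f:
        f.write("# vtk DataFile Version 3.0\n")
        f.write("LAMMPS atoms\nASCII\nDATASET POLYDATA\n")
        f.write("POINTS %d float\n" % len(atoms))
        for a in atoms:
            f.write("%g %g %g\n" % a[2:5])
        f.write("VERTICES %d %d\n" % (len(atoms), 2 * len(atoms)))
        for i in range(len(atoms)):
            f.write("1 %d\n" % i)
        f.write("POINT_DATA %d\n" % len(atoms))
        f.write("SCALARS type int 1\nLOOKUP_TABLE default\n")
        for a in atoms:
            f.write("%d\n" % a[1])

if __name__ == "__main__":
    write_vtk(sys.argv[2], read_snapshot(sys.argv[1]))

Run it as "python dump2vtk.py dump.first out.vtk"; looping over
snapshots or over per-processor files is a straightforward extension.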

Steve

Conversion of any code to parallel takes a few weeks, perhaps longer.
-- Ed Barsis

PS: that's #27 on my quote list (www.sandia.gov/~sjplimp/quotes.html)
- he used to be my boss.

hi jeff,

> Having a number of LAMMPS users on our campus and ample compute
> resources to generate large data sets, we are now bumping up against
> our next obstacle which is the ability to visualize these large (10
> million - 50 million atom) data sets. We have looked at both ParaView
> and VisIt, but there doesn't seem to be a simple way to read the LAMMPS

<shameless plug>
have you looked at VMD?
http://www.ks.uiuc.edu/Research/vmd/

its ability to handle large systems is, for the most part, limited
only by main memory, and the memory requirements are quite modest:
for the initial system information you would need approximately
40 bytes per atom, which works out to about 2GB for a 50 million
atom system. each additional trajectory frame costs another 600MB
(three single-precision coordinates, i.e. 12 bytes, per atom).
</shameless plug>

> output data into these apps. Our interest in these two packages in
> particular is that they can run in parallel across multiple machines
> utilizing both cpu and memory which in turn would allow us to more
> effectively manipulate the data sets in graphical form (e.g. dynamic
> cutting planes, etc...).

i personally prefer not to have to deal with hooking up multiple
machines to do something interactive; this can easily turn into
a big waste of resources.

> It seems as if a vtk output option might be the best option, but
> wanted to poll the user list to see if anyone is in fact using either
> of these packages with any success to view large scale atomistic
> models.

i have given this some thought, since the research in our group is
headed towards similar system sizes, if not larger. my feeling is
that rather than jumping through enormous hoops to handle larger and
larger data sets (how to store them? how to read them efficiently?),
it may be better to start condensing the information about the
system into something more manageable: for example, coarse-grain
before storing, or convert individual positions into a density grid
or something more sophisticated. at the scale of millions of atoms,
you don't really need the atomic detail except for (small) subsets,
and you often don't want the high-frequency motions either (filtering
those out could give *much* more compressible trajectories).
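
to make the density-grid idea concrete, here is a minimal numpy
sketch (grid size, box bounds, and the random stand-in coordinates
are all assumptions, not a recommendation). a 64^3 grid of doubles
is about 2MB per frame no matter how many atoms went into it, and
the result could be written out as a VTK structured-points dataset
for paraview.

import numpy as np

def density_grid(coords, lo, hi, nbins=(64, 64, 64)):
    # histogram (N, 3) positions into cells, normalize by cell volume
    counts, edges = np.histogramdd(coords, bins=nbins,
                                   range=list(zip(lo, hi)))
    cell_vol = np.prod((np.asarray(hi) - np.asarray(lo))
                       / np.asarray(nbins))
    return counts / cell_vol

# stand-in data: 50000 random positions in a 100x100x100 box
coords = np.random.rand(50000, 3) * 100.0
rho = density_grid(coords, (0.0, 0.0, 0.0), (100.0, 100.0, 100.0))
print(rho.shape, rho.max())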

any comments?
if there is somebody out there who would like to discuss these
issues and has similar (practical) needs, i'd be happy to keep
discussing this off-list or in a more suitable forum.

cheers,
   axel.