I read a bit about creating data files, but I couldn't work out how to
generate coordinates for thousands of atoms. For example, I am trying to
model a water-copper nanofluid, and I modified a data file that I found
here. It was the data file of a gold-water nanofluid, and it looks very
successful in VMD, so I changed the mass and the potential of the gold
particles to obtain a copper model; however, that data file contains about
8000 atoms, which is huge for a preliminary analysis and causes very long
runtimes, so I want to obtain a smaller configuration as a data file. What
is the best and most practical way to obtain a data file with the desired
coordinates, i.e. the positions of all the atoms? Thank you for your
attention.
8000 atoms is huge? hmm...
what should i call the simulations that i did a year ago with over
300,000 atoms then?
this has been discussed many times, and there is also a section in the
documentation.
you can either write your own tool (script or program) to generate it,
or use some external builder tool and then convert the resulting file
into a data file. there are different ways to do that, too, with
different levels of ease and sophistication. it would take too much
time and space to discuss them all here, and also, the best choice
depends a lot on the model, the choice of potentials, and your
ability to do scripting/programming.
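To illustrate the first option, here is a minimal Python sketch that writes a data file for `atom_style atomic` with atoms on a simple cubic lattice. The lattice spacing, mass, and file name are illustrative placeholders, not a validated copper model:

```python
def make_data_file(n=5, a=3.615, mass=63.546):
    """Return the text of a minimal LAMMPS data file (atom_style atomic).

    n    -- atoms per box edge (n**3 atoms total)
    a    -- lattice spacing in Angstrom (illustrative value)
    mass -- atomic mass in g/mol (63.546 = copper)
    """
    atoms = [(i * a, j * a, k * a)
             for i in range(n) for j in range(n) for k in range(n)]
    lines = ["Simple cubic test configuration", "",
             f"{len(atoms)} atoms", "1 atom types", "",
             f"0.0 {n * a:.4f} xlo xhi",
             f"0.0 {n * a:.4f} ylo yhi",
             f"0.0 {n * a:.4f} zlo zhi", "",
             "Masses", "", f"1 {mass}", "",
             "Atoms", ""]
    # one line per atom: id type x y z
    for idx, (x, y, z) in enumerate(atoms, start=1):
        lines.append(f"{idx} 1 {x:.6f} {y:.6f} {z:.6f}")
    return "\n".join(lines) + "\n"

with open("small.data", "w") as f:
    f.write(make_data_file())
```

Changing `n` scales the system size directly (n=5 gives 125 atoms), which makes it easy to produce the small preliminary configurations asked about above.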
Indeed, this question comes up a lot. I never tire of replying to
it because it gives me a chance to brag about my own program
(moltemplate), and call attention to others which look good.
Just to give a perspective on the minimum amount of RAM needed to keep one frame of 1.01e10 atoms: if the (x,y,z) coordinates are each represented by a single-precision floating-point number, i.e. 4 bytes in memory (32 bits), then I have (1.01e10) x 3 x 4 = 1.212e11 bytes. Now, if 1 byte = 9.31e-10 GB, then (1.212e11) x (9.31e-10) equals about 113 GB.
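The arithmetic can be sanity-checked in a few lines of Python (here GiB = 2**30 bytes, which is where the 9.31e-10 GB-per-byte factor comes from):

```python
# Memory for one frame of 1.01e10 atoms, x/y/z stored as 4-byte floats.
n_atoms = 1.01e10
bytes_single = n_atoms * 3 * 4     # 3 coordinates, 4 bytes each
gib = bytes_single / 2**30         # 1 GiB = 2**30 bytes = 1/9.31e-10
print(bytes_single, gib)           # 1.212e11 bytes, about 113 GiB
```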
If I recall correctly, the range of a 64-bit signed integer is -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807, i.e. from -(2^63) to 2^63 - 1.
Mind you, coordinates are represented in double-precision floating point. By the time you reach that limit on the number of atoms, you will have massive truncation errors. Already with a billion atoms, you should see a significant dependence on where you place the origin of your box. Axel.
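Both points are easy to verify in Python: the signed 64-bit integer range, and the fact that the spacing between adjacent representable doubles grows with magnitude (which is what causes the truncation errors mentioned above):

```python
import math

# Signed 64-bit integer range: -(2**63) .. 2**63 - 1
int64_min, int64_max = -2**63, 2**63 - 1
print(int64_max)          # 9223372036854775807

# math.ulp(x) is the gap between x and the next representable double;
# it doubles every time the binary exponent increments.
print(math.ulp(1.0))      # 2**-52, about 2.2e-16
print(math.ulp(1.0e9))    # 2**-23, about 1.2e-7 -- much coarser
```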
Wow. This is not an issue I had considered. Definitely beyond my scope of knowledge, in terms of computer science.
Having said that, as with all simulations, experiments and reductions in complexity, it is important to know when a ‘difference’ matters. That is, is the physical process I am interrogating obscured by the difference which is manifested.
I’ll go and speak to computer scientists to get some advice.
I have been preparing my doctoral thesis using MD. For my master's I
worked with a single molecule and a different MD program; this time I
will use LAMMPS for a mixture, not a single molecule.
I am a new user and don't have much experience with it.
If anyone has documentation for a beginner in this system, please send me
files or sites with pictures and videos on this subject.
If you have another system to use for the simulation of a mixture, please help me.
cordially..
not sure if this is really a "computer science" thing. it is more a
topic that people in (high-performance) scientific computing worry
about. i've not met many computer scientists that ever worried about
floating point accuracy. more likely to find somebody worrying about
this in computational physics.
this story is pretty simple and scary. i suggest you first head over to:
with that in mind, you should realize that you have 52 bits (about 15
digits) of precision in the mantissa. so you can fit about 4.5e15
numbers between 1.0 and 2.0. once you reach 2.0 the exponent is
incremented and now you can fit only half as many numbers between 2.0
and 3.0, and beyond 4.0 it is a quarter, and so on. if you follow this
through, you'll see that with a system that extends from 0 to 100
nanometers you lose about 10 bits (or 9 if you go from -50 to 50
nanometers, since the sign bit is a separate bit). for a giant
100-micrometer atomic system your floating-point precision at the
boundaries will have dropped to the level of single-precision math.
and if you were using _single_ precision you would have 3-4 bits left, i.e.
you would compute forces based on a resolution of 0.0625 angstrom.
when simulating such large systems, it would be better to use a
"shifted representation", i.e. represent all coordinates internally
relative to an offset given by the location of each domain in
the total system.