I am trying to find how the dataset in this article was generated. There is a description in the article on the methods used but I’m not a skilled LAMMPS user.
The system is a simulation of 4096 atomic particles (two kinds of particles) in a cube of side length L with periodic boundary conditions, using typical Kob-Andersen parameters. The article says:
“To obtain an equilibrated system at the desired temperature and pressure, we perform several stages of simulations to safely reach a desired state point. First, we equilibrate the system at a starting high-temperature state point (TS = 0.5495, PS = 0.4888) using an isothermal-isobaric ensemble. We then perform a gradual quench to the desired state point (T_target, P_target) using the same method, followed by one final run at the target state point to help preserve the pressure and temperature. We then switch to a canonical ensemble to further stabilize the temperature and pressure and obtain an equilibrated system. We run each part of this equilibration process for at least ten relaxation times.”
I am starting the simulation with randomly placed particles whose velocities follow a Maxwell-Boltzmann distribution, using the commands
region my_region block 0 9.4 0 9.4 0 9.4
create_box 2 my_region
create_atoms 1 random 3277 229609 my_region
create_atoms 2 random 819 691203 my_region
mass 1 1
mass 2 1
velocity all create 0.5495 92561 dist gaussian
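For completeness, the setup above still needs pair interactions before it will run. A minimal sketch, assuming the standard Kob-Andersen 80:20 Lennard-Jones parameters (these epsilon/sigma values are the usual KA choices, not taken from the article):

```
# standard Kob-Andersen LJ parameters (assumed, not from the article):
#   AA: epsilon = 1.0, sigma = 1.0
#   AB: epsilon = 1.5, sigma = 0.8
#   BB: epsilon = 0.5, sigma = 0.88
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0
pair_coeff 1 2 1.5 0.8
pair_coeff 2 2 0.5 0.88
```

Note that the 3277:819 split above is the 80:20 A:B ratio of the Kob-Andersen mixture for N = 4096.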
Is it right to define the box volume for the starting configuration if I first run an npt simulation (isothermal-isobaric ensemble) in which the volume can change?
Why does the article say “We simulate a system of N = 4,096 particles in a cube of side length L with periodic boundary conditions at various temperatures and pressures.” if the volume can change during the npt simulations? Am I missing something?
To run the npt simulation I’m using the command
fix nose all npt temp 0.5495 0.5495 $(100.0*dt) iso 0.4888 0.4888 $(100.0*dt)
I’m using Lennard-Jones units; I don’t know if that is right.
If I understand correctly, the simulations in the article were performed by running a series of npt simulations at different temperatures and then an nvt (canonical ensemble) run. How can I feed the final data from one simulation into another? Should I use the write_restart command?
Is there a way to estimate the relaxation time of the system?
Can I post this same message to the LAMMPS mailing list?
Please note that most of your questions are not really questions about LAMMPS but rather about doing MD simulations in general and not specific to LAMMPS. I will only respond to non-LAMMPS questions in a general manner without going into details. For more detailed discussions you should post in a different category in this forum and (even better) consult with a more experienced person.
You do need to fill some volume with particles, so you need to start with an initial volume. Are you sure the paper started with particles in random positions, and not with a regular lattice of type A atoms in which a random selection is replaced with type B atoms to get the mixture? Details should be given in the publication or obtained from the authors. It is not a good idea to start a simulation with fix npt from a set of random positions, as this produces an initial system with very high potential energy due to overlapping particles and may not run in a stable fashion if the initial volume is too small or some atoms are too close (by chance). So it may be better to follow some “unoverlap” protocol (search the web or the LAMMPS mailing list archives for discussions on how to unoverlap an initial geometry).
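One such unoverlap protocol is the ramped soft-potential push-off described in the LAMMPS pair_style soft documentation; a sketch (the prefactor value and run length here are arbitrary choices, not a recipe from the paper):

```
# push overlapping atoms apart with a soft potential whose
# prefactor is ramped from 0 to 60 over the course of the run
pair_style soft 1.0
pair_coeff * * 0.0
variable prefactor equal ramp(0,60)
fix push all adapt 1 pair soft a * * v_prefactor
fix integrate all nve
run 10000
unfix push
unfix integrate
# then switch to the real pair style and start equilibrating
```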
The correct syntax and some recommendations are in the LAMMPS manual. To reproduce the publication you will have to use the information the authors provide (or contact them about the missing details). Check out the supplemental information.
You can use the fix npt command in an input, run the simulation for a certain number of steps with the run command, then delete the fix npt integrator with the unfix command, create a fix nvt instead, and issue another run command to continue the simulation with a different integrator and thus a different ensemble. Using write_restart to save the intermediate state of a simulation between runs is best practice in case there is a problem continuing the simulation or a typo. Then you don’t have to redo everything but can continue from the restart point.
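The switching procedure just described might look like this in an input script (damping parameters, step counts, and restart filenames are placeholders, not values from the article):

```
# npt stage at the starting state point
fix 1 all npt temp 0.5495 0.5495 $(100.0*dt) iso 0.4888 0.4888 $(100.0*dt)
run 100000
unfix 1
write_restart after_npt.restart

# switch to the canonical ensemble
fix 2 all nvt temp 0.5495 0.5495 $(100.0*dt)
run 100000
write_restart after_nvt.restart
```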
Not really. If you follow the protocol of a publication, the necessary information should be given in the paper or in its supplemental information. Otherwise you need to contact the authors.
Technically, yes, but since you posted here, that would be redundant and not likely to yield additional insight. I for one will only respond to repeated questions once.
I ran minimize 1.0e-4 1.0e-6 1000 1000 and it seems to work.
I don’t know if it is enough to run it just once, but I think it is.
Thanks, I think I will do something like
fix 1 npt temp_1
fix 2 npt temp_2
in order to reach the desired temperature, trying to replicate the procedure described in the article.
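Spelled out, that gradual quench could be a chain of npt runs, each ramping the thermostat and barostat targets to the next state point (the target temperature, pressure, and run lengths below are illustrative, not the article's values):

```
# stage 1: equilibrate at the high-T starting state point
fix 1 all npt temp 0.5495 0.5495 $(100.0*dt) iso 0.4888 0.4888 $(100.0*dt)
run 100000
unfix 1

# stage 2: gradual quench (fix npt ramps from the first
# value to the second over the course of the run)
fix 2 all npt temp 0.5495 0.45 $(100.0*dt) iso 0.4888 0.30 $(100.0*dt)
run 200000
unfix 2

# stage 3: hold at the target state point
fix 3 all npt temp 0.45 0.45 $(100.0*dt) iso 0.30 0.30 $(100.0*dt)
run 100000
unfix 3
```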
How can I easily check the temperature, pressure, and density (number of particles divided by the box volume L^3) of the system at the end of the simulation? Should I get that information from the log file?
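In the meantime, here is a sketch of what I think should print these quantities to the screen and log file during the run (note the density keyword gives mass density, so the variable below computes the number density N/V separately):

```
variable rho equal atoms/vol          # number density N/L^3
thermo_style custom step temp press density v_rho
thermo 1000                           # print every 1000 steps
```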
That will work for most cases to remove the worst close contacts. Some potentials, however, will not work well with very close pairs of atoms and can produce invalid forces (NaN) and then it will not work.
If you create your initial system with the expected/intended density, you should first run with fix nvt, since minimization will drop you into a local minimum, which can still be far from equilibrium. The problem with fix npt is that it may not be as stable and that you may incur significant expansion at the beginning that takes a very long time to undo afterwards. It is much “easier” to expand than to compress.
After a series of fixes, I’m saving my configuration in a plain text file with the command
...fix 5 all npt temp 0.56 0.56 $(100.0*dt) iso 0.17 0.17 $(100.0*dt)
The command is correctly saving positions, velocities, and particle types.
Is there a way to make the command order the particles in the file based on their ID?
I found that dump_modify can print in order, for example with dump_modify mydump sort id.
Is there a way to do that with the write_data command?
I have my particles’ positions and velocities in this file. Can I make another LAMMPS script use only the positions from the file (with the read_data command) but discard the velocities (say I want to randomly generate them with the command velocity all create 0.5495 92561 dist gaussian)?
Would it be enough to run velocity all create 0.5495 92561 dist gaussian after the read_data command to make LAMMPS discard the velocities read from the file?
Please note that it is best practice in forums (and mailing lists) to ask separate questions in separate threads with their own subject lines, so that others who have the same questions can more easily find the discussions and (hopefully) the answers they are looking for.
No. read_data does not support sorting of the content of sections, but that can be easily done after the fact with a smart text editor (pipe the block through “sort -n”) or smart scripting. Unlike trajectory files, data files are seen mostly as an internal storage format.
Yes. As the documentation for the velocity command states, it will assign new velocities according to the selected options.
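For example (the data filename is illustrative):

```
read_data equilibrated.data
# overwrites any velocities read from the data file
velocity all create 0.5495 92561 dist gaussian
```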