[lammps-users] Subsequent simulations in multiple PKA cascade mode

Hi everyone,

I am trying to run multiple simulations in the same box. More specifically, I need to set up the box, randomly choose an atom as the PKA (primary knock-on atom), start a cascade, store the results, randomly choose another PKA, run another cascade, and so on.

I tried with a Python script (it just creates a text input file with a normal LAMMPS input in it, runs the simulation, writes the state with write_data to an i-th data file, and runs the following simulation reading that i-th data file). But I guess there is a smarter way. Done this way, running 1000 subsequent cascades on a supercomputer means submitting 1000 different jobs, so at every new simulation I go back to the end of the queue behind the other users, 1000 times. I’d like to do it as if it were just one big simulation of 1000 cascades that stores results every time a cascade is finished, before the following one starts.

I saw in the manual that the restart, write_restart, and read_restart commands could be helpful, but after reading the documentation I still don’t see how to apply them in this case.

Thank you.

Cheers,

Stefano

three things:

  1. you don’t need a python script to have a loop over multiple simulations. LAMMPS supports loops in its scripting language. you can bundle multiple different simulations into the same input by using the clear command, which will reset pretty much everything in LAMMPS with the notable exception of variables.
  2. nothing stops you from bundling multiple simulations into a single job submission on the supercomputer. details of how to realize it depends on whether you need to run each job in parallel (and the parallelization library/feature) and the batch system.
  3. but even better: you can also use LAMMPS in multi-partition mode and parallelize the native LAMMPS loops of 1) over multiple partitions with the -partition command-line flag and a uloop variable. this lets you choose in detail how parallel the individual runs are (a minimal sketch combining this with the loop from 1) follows below the list).
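
for illustration, a bare-bones input combining 1) and 3) could look like the sketch below. the launch command, partition layout, file names, potential, and number of cascades are all placeholders you would adapt to your setup:

  # launch in multi-partition mode, e.g.:  mpirun -np 32 lmp -partition 8x4 -in in.cascades
  variable   run uloop 1000 pad          # each partition grabs the next unused iteration
  label      loop
  clear                                  # resets nearly everything; variables survive
  units      metal                       # re-issue basic settings after the clear
  atom_style atomic
  read_data  bulk.data                   # hypothetical pre-equilibrated configuration
  pair_style eam/alloy                   # placeholder potential
  pair_coeff * * Fe.eam.alloy Fe         # placeholder potential file
  # ... pick a PKA, give it a high velocity, run the cascade dynamics ...
  write_data cascade_${run}.data
  next       run
  jump       SELF loop                   # skipped automatically once the uloop variable is exhausted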

as for restarting, you can run an initial simulation to equilibrate your system to the point where you want to start the cascade, write out a restart file and then start all simulations from that restart.
however, you would get better statistical sampling if you randomize not just the PKA, but also the initial conditions. this can be done by taking your initial equilibration, then applying displace_atoms with a suitably large random displacement of the atoms, followed by a (shorter) re-equilibration. in general, you have to make sure that your random seeds are chosen differently for each run, which can be achieved by computing them from some node- and loop-iteration-specific property.
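
as an illustration of that restart workflow (the file name and the seed recipe are placeholders, and i assume a loop/uloop variable named "run" as in the sketch above):

  # once, at the end of the equilibration input:
  #   write_restart equil.restart
  #
  # at the top of every cascade iteration (right after the clear):
  read_restart equil.restart
  variable     seed equal 100000+17*${run}   # unique per iteration (and per partition with uloop)
  # ... randomize the starting configuration with displace_atoms and a short
  #     re-equilibration, then pick the PKA and run the cascade ...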

axel.

Dear Axel,

Thank you for the suggestion, I am now using the “variable loop – jump SELF” algorithm.

I still have a question about your last suggestion: by using displace_atoms every time, don’t I lose the microstructural defects I got in the previous cascades?

you have to provide more details of your workflow to give a specific answer to that. i was assuming that you would want to do multiple cascade simulations starting from the same “pristine” restart, not a sequence of cascades each following the previous one. in my assumed workflow, you will get better statistical sampling from doing a (small!) random displacement and a (short) re-equilibration. if you don’t do this, there is an increased risk of seeing patterns emerge from your simulations that are specific to the snapshot you picked, not to the material.
if, on the other hand, you want to do multiple subsequent cascades on the same system, then it would indeed be less desirable to perform the “decorrelation”, but it should not do harm if the displacement is not too large.

the key is to avoid correlations, and thus misinterpretation of the results, arising from the choice of your initial equilibrated geometry.

axel.

You can use a loop to do something like this:

build or read_data for the system

loop:
insert a single atom with high velocity
or pick a single atom and give it high velocity
run dynamics to evolve the cascade
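
A bare-bones input sketch of that loop (file names, seed recipe, PKA velocity, and run length are placeholders; the random-atom pick assumes contiguous atom IDs):

  read_data  system.data                 # build or read the system once

  variable   i loop 1000                 # number of cascades
  label      cascade
  variable   seed equal 12345+137*${i}   # iteration-dependent RNG seed
  variable   pid  equal floor(random(1,atoms,${seed}))   # pick a random atom ID
  group      pka  id ${pid}
  velocity   pka  set 400.0 300.0 200.0 units box   # placeholder; compute from the desired PKA energy and a random direction
  fix        md   all nve                # in practice add fix dt/reset and a boundary thermostat
  run        20000                       # evolve the cascade
  unfix      md
  group      pka  delete
  write_data damage_${i}.data            # store the cumulative damage after each cascade
  next       i
  jump       SELF cascade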

Steve


Thank you, Axel.

The idea is to run multiple cascades to see the cumulative damage. I will most likely run this type of algorithm independently 10-12 times in order to improve the overall statistics.

You’re suggesting that before every new cascade I just randomly displace every atom a little (say 0.1-0.5 times the lattice constant) and then run a re-equilibration (like a fix npt or a minimization)?

yes. at the beginning of each sequence. that will improve the statistical relevance of your data over only randomly picking PKAs. otherwise you will have more strongly correlated results.
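
for concreteness, that decorrelation step could look roughly like this (displacement magnitude, thermostat settings, and run length are placeholders; the seed variable is assumed to differ between sequences):

  displace_atoms all random 0.3 0.3 0.3 ${seed} units box   # roughly 0.1 of a typical lattice constant
  fix   eq all npt temp 300.0 300.0 0.1 iso 0.0 0.0 1.0     # or a minimization, as you say
  run   2000                                                # short re-equilibration
  unfix eq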

I’ve got just one more question. I am running cascades by picking random atoms and giving them a random direction in boxes with p p p boundary conditions. Is that a good way to go, or would it be better to force the cascades to fully evolve within the box (hence directing each PKA toward the box center) and maybe apply a Nosé-Hoover thermostat to the boundaries in order to absorb the collision energy? I mean, which approach is closer to a generic reality in the case of multiple collisions? Thank you.

that is too specific a question for me to give a qualified answer. The details depend on the specifics of what kind of situation you want to model, and I don’t have sufficient practical experience in that area of research; what I can tell you is based on general statistical-mechanical principles. It is probably best to survey the published literature for similar studies and see what people have established as best practices, or, if that is not sufficient, to contact and consult somebody with specific expertise in that area of research. This is also a topic beyond the scope of this mailing list anyway, as it is independent of the MD code in use.

Axel.
