Hello,
I’m running a simulation using “fix deposit,” but I’d like to find an optimal deposition location. That is, rather than depositing at a random position within a region, I’d like to deposit at the location that gives the lowest-energy configuration.
One way I figured I could do this: starting from a given configuration, run ${set_parallelruns} parallel simulations, in each of which one particle is deposited; each simulation then runs for a while and is minimized. I select the parallel simulation that resulted in the lowest energy and repeat the whole procedure for the next deposition.
This is somewhat similar to the “prd” command, except that there is no “event” that decides which of the ${set_parallelruns} simulations all the rest should be synced to; instead, all ${set_parallelruns} simulations run for a given amount of time and then all sync to the one with the lowest energy.
I can do this using 1 partition with the following code:
# snapshot of the starting configuration; every candidate run restarts from it
# (all snapshots are written at timestep 0 so read_dump can always find them)
reset_timestep 0
write_dump all custom traj.restart.0 id type x y z vx vy vz ix iy iz

variable outer_loop_counter loop ${set_depparts}
label outer_loop

# reset the running minimum at the start of each deposition cycle
if "${outer_loop_counter} > 1" then &
  "variable min_energy delete"
variable min_energy equal 1.0E20

variable inner_loop_counter loop ${set_parallelruns}
label inner_loop

# restore the lowest-energy configuration from the previous cycle
read_dump traj.restart.$(v_outer_loop_counter-1) 0 x y z vx vy vz ix iy iz replace no purge yes add yes

# distinct seed per candidate run; with a fixed seed all ${set_parallelruns}
# candidate runs would be identical
fix 1 all langevin 1000 1000 100 $(65348+v_inner_loop_counter)
fix 2 all deposit 1 1 1 $(65348+v_inner_loop_counter) region insertionspace near 3.0 attempt 100
run ${set_deptime}
unfix 1
unfix 2
minimize 1.0e-4 0.0 10000 10000

# keep this candidate's snapshot only if it beats the current minimum;
# reset_timestep 0 keeps every snapshot readable at step 0 (minimize
# advances the timestep counter, so step arithmetic would be unreliable)
if "$(pe) < ${min_energy}" then &
  "variable min_energy delete" &
  "variable min_energy equal $(pe)" &
  "reset_timestep 0" &
  "write_dump all custom traj.restart.${outer_loop_counter} id type x y z vx vy vz ix iy iz"

next inner_loop_counter
jump in.run inner_loop

next outer_loop_counter
jump in.run outer_loop
I’d now like to parallelize the inner loop over ${set_parallelruns} processors, since running the candidate simulations as independent single-processor jobs gives me essentially 100% parallel efficiency, whereas domain-decomposing each inner-loop run across ${set_parallelruns} processors gives me much lower parallel efficiency. Naively, I’d just change inner_loop_counter from a “loop”-style to a “uloop”-style variable, keep everything else as is, and invoke LAMMPS with “-partition ${set_parallelruns}x1”. However, the different inner-loop partitions would finish at different times, so the first partition to finish would begin the next outer cycle using a different restart file than a later partition. Thus, I need to make the partitions wait for each other to finish the inner loop before any of them moves on to the next outer cycle.
Does anyone know a way to do this? It’d be nice if there were a native LAMMPS command I could use, since the only solution I can think of is to use the LAMMPS library interface and do the syncing in my own code.
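For reference, here is roughly what I have in mind for the library-interface route, as a sketch rather than tested code. It assumes LAMMPS is built as a shared library with its Python module and that mpi4py is available; “in.setup” is a placeholder for my actual setup commands (units, potential, the insertionspace region, and so on), and the loop counts and seeds stand in for my real parameters:

# Sketch of the library-interface fallback: one single-processor LAMMPS
# instance per MPI rank (the analogue of -partition Nx1), with plain MPI
# collectives providing the missing synchronization between cycles.
# "in.setup", "insertionspace", and the numeric constants are placeholders.
from mpi4py import MPI
from lammps import lammps

world = MPI.COMM_WORLD
me = world.Get_rank()            # each rank plays the role of one partition

ndepparts = 10                   # stands in for ${set_depparts}
deptime = 1000                   # stands in for ${set_deptime}

lmp = lammps(comm=MPI.COMM_SELF) # independent LAMMPS instance on every rank
lmp.file("in.setup")             # units, potential, region insertionspace, ...

if me == 0:                      # only one rank writes the shared snapshot
    lmp.command("reset_timestep 0")
    lmp.command("write_dump all custom traj.restart.0 "
                "id type x y z vx vy vz ix iy iz")
world.Barrier()

for outer in range(1, ndepparts + 1):
    # every rank restarts from the best snapshot of the previous cycle;
    # all snapshots are written at timestep 0
    lmp.command(f"read_dump traj.restart.{outer - 1} 0 "
                "x y z vx vy vz ix iy iz replace no purge yes add yes")

    seed = 65348 + me + outer * world.Get_size()  # distinct stream per rank/cycle
    lmp.command(f"fix 1 all langevin 1000 1000 100 {seed}")
    lmp.command(f"fix 2 all deposit 1 1 1 {seed} "
                "region insertionspace near 3.0 attempt 100")
    lmp.command(f"run {deptime}")
    lmp.command("unfix 1")
    lmp.command("unfix 2")
    lmp.command("minimize 1.0e-4 0.0 10000 10000")

    # the synchronization point the pure input-script version lacks:
    # no rank proceeds until every rank has reported its energy
    energies = world.allgather(lmp.get_thermo("pe"))
    winner = energies.index(min(energies))

    if me == winner:             # only the lowest-energy rank writes
        lmp.command("reset_timestep 0")
        lmp.command(f"write_dump all custom traj.restart.{outer} "
                    "id type x y z vx vy vz ix iy iz")
    world.Barrier()              # snapshot must exist before the next read

Launched with something like “mpirun -np 8 python deposit_driver.py” (the script name is of course arbitrary), the allgather/Barrier pair does exactly the “wait for everyone” step I can’t express in the input script. But a native input-script solution would still be nicer.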