fix gcmc not supporting full_energy with molecules and MPI. Where does the problem lie?

Hi Anders,

maybe my answer above was a little bit too strict and gave
too little information.

The question here is: can one get physically correct results with only
insertions/removals and translations? If so, using MPI or not shouldn't
change the statistics of the results at all, should it?

Of course you are right, there are plenty of ways to speed up
Monte Carlo simulations with parallelization. What I was objecting
to was the idea that this would somehow be compatible with an MD-like
simultaneous force/position update for the whole system at once.

What can always be done is to distribute the workload of a single
step (distance calculation, energy calculation, etc.) over threads,
e.g. by means of OpenMP.
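
Just to illustrate what I mean (a minimal sketch of my own, not actual
fix gcmc code; the flat coordinate array and the uncut LJ potential are
placeholders): the expensive part of one trial move, the energy sum over
all existing particles, can be spread over threads while the
accept/reject decision itself stays serial.

#include <cstddef>
#include <vector>

// Sketch: thread-parallel energy of ONE trial (e.g. insertion) position.
// Coordinates are packed as x0,y0,z0,x1,y1,z1,...; sigma = epsilon = 1,
// no cutoff, just to keep the example short.
double trial_energy(const std::vector<double> &x, std::size_t n,
                    const double xt[3])
{
    double e = 0.0;
    #pragma omp parallel for reduction(+:e)
    for (long i = 0; i < (long)n; ++i) {
        const double dx = x[3*i]   - xt[0];
        const double dy = x[3*i+1] - xt[1];
        const double dz = x[3*i+2] - xt[2];
        const double r2 = dx*dx + dy*dy + dz*dz;
        const double s6 = 1.0 / (r2*r2*r2);
        e += 4.0 * (s6*s6 - s6);           // Lennard-Jones pair energy
    }
    return e;                              // Metropolis test on e stays serial
}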

Gedankenexperiment: you simulate a tightly packed sphere system
with 1000 spheres on 1000 threads. The ~50% of threads that attempt
a removal would each compute the acceptance probability of that removal,
which, depending on the potential strength (total V positive), should
be in the 99% range for the Metropolis test in each thread - because
each thread "sees" the surroundings of its particle as "tightly packed".
After this step, the density of the system would be at about 50% of
the previous step, which is most likely not a correct step along the
physical phase-space trajectory of this system.
But now the real problems start. In the next step, each of the ~50%
of threads that roll the "insertion" dice will see a lot of holes left
by the erroneous removals and will possibly put new atoms there, without
any chance of knowing what the other insertion threads are up to. The
removal threads in this step will continue to blow out atoms, because
every remaining atom is still tightly packed. And so on. This would be
a wildly fluctuating phase-space trajectory without any meaningful
convergence to equilibrium, IMHO.
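
To make the Gedankenexperiment a bit more concrete, here is a
deliberately naive sketch of what "one independent removal decision per
thread against the same snapshot" amounts to. All names are placeholders
of mine, and the acceptance rule is reduced to a bare Boltzmann factor
(a real GCMC removal would also carry the chemical potential and N/V
prefactor):

#include <cmath>
#include <cstddef>
#include <omp.h>
#include <random>
#include <vector>

// Placeholder: energy change for removing particle i from the shared,
// not-yet-updated snapshot. In a tightly packed, repulsive system this
// is strongly negative, so we just return a stand-in value.
static double dU_remove(const std::vector<double> &, std::size_t)
{
    return -10.0;
}

void naive_parallel_removals(const std::vector<double> &snapshot,
                             std::vector<char> &remove_flag,
                             std::size_t n, double beta)
{
    #pragma omp parallel
    {
        // Per-thread RNG; the seeding scheme is not the point here.
        std::mt19937 rng(12345u + (unsigned)omp_get_thread_num());
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        #pragma omp for
        for (long i = 0; i < (long)n; ++i) {
            const double dU = dU_remove(snapshot, (std::size_t)i);
            // exp(-beta*dU) >> 1 for every thread, so nearly every
            // removal is accepted - none of them knows about the others.
            if (uni(rng) < std::exp(-beta * dU))
                remove_flag[i] = 1;
        }
    }
    // Applying all flags at once produces exactly the kind of unphysical
    // density jump described above.
}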

I didn’t quite understand which part you said was problematic here.
Could you elaborate?

Some years ago, J. A. Anderson, the author of HOOMD, published an
elaborate parallelization scheme based on volume subdivision and
independent cell updates: doi.org/10.1016/j.jcp.2013.07.023
The paper also contains some thoughts on, and warnings about,
possible pitfalls.
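
If I remember the paper correctly, the core idea is a checkerboard-like
decomposition: the box is divided into cells at least as wide as the
interaction range, cells of the same "colour" are never adjacent, and
moves are confined to their cell, so all cells of one colour can be
updated in parallel without seeing each other. Very roughly (my own
sketch in 2D, not the paper's code):

#include <cstddef>

struct Cell {
    // particle data local to this cell would live here
};

// Placeholder for the serial MC work inside one active cell
// (translations, insertions, removals confined to the cell).
static void update_cell_serial(Cell &) { /* ... */ }

// One sweep over a 2D cell grid with even cell counts ncx, ncy.
// 4 colours in 2D (2 per dimension), 8 in 3D.
void checkerboard_sweep(Cell *cells, std::size_t ncx, std::size_t ncy)
{
    for (int colour = 0; colour < 4; ++colour) {
        const std::size_t ox = colour & 1, oy = (colour >> 1) & 1;
        #pragma omp parallel for collapse(2)
        for (long iy = 0; iy < (long)(ncy / 2); ++iy)
            for (long ix = 0; ix < (long)(ncx / 2); ++ix)
                update_cell_serial(cells[(2*iy + oy) * ncx + (2*ix + ox)]);
        // Implicit barrier: the next colour starts only after all cells
        // of this colour are finished.
    }
    // The cell-grid origin is typically shifted between sweeps so that
    // particles are not permanently confined to one cell.
}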

Thanks & Regards

M.