Compiling GULP using Cray compilers on ARCHER2

Dear Julian,

Hope you are well!
I recently compiled GULP 6.0 and 6.3 on ARCHER2 using the Cray compiler wrappers (ftn).

Having compiled the program successfully, I tested it and saw a clear speed-up when going from one core to one full node.

However, when scaling to multiple nodes, GULP's performance degraded for Mott-Littleton defect calculations: a run that takes about 20 seconds on one node takes more than 50 seconds on two nodes. This is true for both versions of GULP.

Is this a known issue? Did I miss anything that needs to be changed in mkgulp to accommodate the MPI implementation on Cray systems?

The relevant programming environment loaded on ARCHER2 was:
PrgEnv-gnu/8.3.3
cray-mpich/8.1.23
The mkgulp file was also slightly altered (adding -fallow-argument-mismatch) for the parallel Cray compilation scheme:
echo 'RUNF90=ftn -fallow-argument-mismatch ' >> makefile
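To spell the procedure out, it was roughly as follows (the Src path and the -m option in the last line are only my assumption about how the parallel build is invoked, so please check the header of your mkgulp script):

  # environment loaded on ARCHER2
  module load PrgEnv-gnu/8.3.3
  module load cray-mpich/8.1.23

  # line changed inside mkgulp so the generated makefile uses ftn
  # together with the gfortran argument-mismatch workaround
  echo 'RUNF90=ftn -fallow-argument-mismatch ' >> makefile

  # assumed invocation of the parallel (MPI) build
  cd Src && ./mkgulp -m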

Happy to provide further details if needed!

Thank you in advance for your time and help.

Best wishes,
Cyril

Hi Cyril,
Sorry for the delayed reply. To comment properly I need more information about your specific job. Parallel scaling depends very much on the number of atoms involved and on which tasks dominate the computation. For example, large systems are dominated by the cubic-scaling Hessian inversion, in which case the parallel performance depends on the maths libraries you've linked against rather than on GULP itself. For small systems other things dominate the cost, though, as always, parallel scaling beyond a single node is unlikely to be efficient once communication starts to dominate.

So without more information I'd describe this as a known feature of parallel computation: you need to test the scaling for your system size and choose the optimum number of nodes. I'd also say that the parallel performance of Mott-Littleton calculations may not be as good as that of standard calculations, because of the more complex nature of the algorithm and the fact that it is less widely used.
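If it helps, a minimal Slurm script for such a scaling test on ARCHER2 might look something like the sketch below (the account, partition, QoS and input-file names are placeholders to adjust for your setup):

  #!/bin/bash
  #SBATCH --job-name=gulp_scaling
  #SBATCH --nodes=2                 # repeat with --nodes=1 to compare timings
  #SBATCH --ntasks-per-node=128     # ARCHER2 has 128 cores per node
  #SBATCH --time=00:20:00
  #SBATCH --partition=standard      # placeholder; use your usual partition
  #SBATCH --qos=standard            # placeholder
  #SBATCH --account=e123            # placeholder project code

  module load PrgEnv-gnu/8.3.3
  module load cray-mpich/8.1.23

  # GULP reads its input on stdin; name the output after the node count
  srun --hint=nomultithread gulp < defect.gin > defect_${SLURM_NNODES}nodes.gout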
Regards,
Julian