Running hiphive on a supercomputer

Dear Developers,

I am trying to run hiphive for a monoclinic cell with P1 symmetry. I am using 50 configurations, but the fitting step in hiphive is very slow: even with smaller cutoff distances and a simple fit method like “least-squares”, it takes quite long (~25 hours)… I am worried that running this with realistic cutoffs may be impossible within the 72-hour walltime limit we have.

Is this expected behavior? Are there ways to make it faster? Your input would be very helpful.

I am using the most recent version of hiphive.

I am using the following batch script.


```bash
#!/bin/bash -l
#PBS -N log-hiphive
#PBS -l nodes=1:ppn=128
#PBS -l walltime=34:00:00
export OMP_NUM_THREADS=4

cd $PBS_O_WORKDIR
python run-fcp.py
```
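For reference, run-fcp.py follows the standard hiphive fitting workflow, roughly like this (a minimal sketch, assuming a recent hiphive where the optimizer comes from trainstation; file names and cutoff values are placeholders, not my actual settings):

```python
from ase.io import read
from hiphive import ClusterSpace, StructureContainer, ForceConstantPotential
from trainstation import Optimizer

# Primitive cell and training structures; the structures must already
# carry 'displacements' and 'forces' arrays (file names are placeholders).
prim = read('prim.extxyz')
structures = read('training_structures.extxyz', index=':')

# One cutoff (in Angstrom) per expansion order, starting at 2nd order
cs = ClusterSpace(prim, [5.0, 4.0])

sc = StructureContainer(cs)
for structure in structures:
    sc.add_structure(structure)

# Plain least-squares fit of the force-constant parameters
opt = Optimizer(sc.get_fit_data(), fit_method='least-squares')
opt.train()

fcp = ForceConstantPotential(cs, opt.parameters)
fcp.write('model.fcp')
```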

What is the shape of your fit matrix? It sounds like you either have a lot of data, a lot of free parameters, or both.
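If `sc` is your StructureContainer, you can check this directly:

```python
A, F = sc.get_fit_data()  # fit matrix and target force vector
print(A.shape, F.shape)   # (n_force_components, n_parameters) and (n_force_components,)
```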

Thanks for the quick reply.

From the structure container, the shapes of the A and F matrices are (27864, 11082) and (27864,),

and the total number of degrees of freedom is 11082.

That’s a lot of parameters. Is it a higher-order model? If so, try reducing the cutoffs for the higher orders and maybe limit the expansion to two- and three-body terms only. Which system is it?
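For example, with per-order cutoffs (the values here are purely illustrative, and `prim` stands in for your primitive structure):

```python
from hiphive import ClusterSpace

# Shorter cutoffs for the higher orders shrink the parameter count quickly
cs = ClusterSpace(prim, [6.0, 3.5])  # 2nd order: 6.0 A, 3rd order: 3.5 A
print(cs)  # the summary includes the total number of degrees of freedom
```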

Thanks. This is monoclinic bilayer ReS2… It basically has no symmetry (P1).
Currently I am only fitting up to 3rd order.

As you suggested, I will play with the number of configurations and the cutoffs to make it work.