Getting MPI to work

Hi Axel,

This may be a basic question, but I’ve been having trouble getting the MPI-enabled commands to work. I’m using the latest beta release, ATAT3_20. Here is what I currently do:

  1. Edit /src/makefile, commenting out the default CXXFLAGS=$(PATCHCXXFLAGS) line and uncommenting the CXXFLAGS=$(PATCHCXXFLAGS) -DSLOWENUMALGO line (as shown in the sketch after this list). I do this because later on in this makefile it says "#make sure to include -DSLOWENUMALGO if compiling mpi version".

  2. I then save this makefile and go back to /atat/.

  3. My system doesn’t have mpiCC, so I change the makefile here to read "MPICXX=mpicxx -DATAT_MPI".

  4. I run ‘make’ and then ‘make mpi’. I can then run mpimmaps and mpigenstr (i.e., the commands are available).

  5. I tested mpigenstr by running it on 1, 2, 4, 8, and 16 cores for one hour each, outputting its results to a data file, str.out. I ran with -n 128 so that it would be unlikely to complete in that time (see the example command after this list).

  6. The results, however, show that MPI has not been set up right: after all 5 jobs timed out at the one-hour mark, I found that all 5 str.out files had the same length (in line count), whereas, at least in my head, the 16-core run should produce more lines than the 8-core run, and so on.
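Concretely, the edits from steps 1 and 3 end up looking roughly like this (quoted from memory, so the surrounding makefile text may not be verbatim):

# in /src/makefile: swap which CXXFLAGS line is active
#CXXFLAGS=$(PATCHCXXFLAGS)
CXXFLAGS=$(PATCHCXXFLAGS) -DSLOWENUMALGO   #make sure to include -DSLOWENUMALGO if compiling mpi version

# in the makefile under /atat/ (where MPICXX is set): use mpicxx instead of mpiCC
MPICXX=mpicxx -DATAT_MPI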
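For reference, the step-5 test on 16 cores was launched roughly like this, with a one-hour wall-time limit per job (I am writing mpirun generically here; the exact launcher invocation depends on the MPI installation, and the other runs only differed in the core count):

mpirun -np 16 mpigenstr -n 128 > str.out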

Thus, could you advise whether I have done something wrong at some point in this process? I would greatly appreciate it. As a follow-up, does mpimmaps parallelize only the "finding best structure" step, or also the fitting of the cluster expansion?

Thank you very much,
Adam

Dear Team,

I am new to ATAT and have it installed. I am having difficulty getting it to work with Slurm, the queuing system we have. I use the following job submission script:
#!/bin/bash
#SBATCH --nodes=2 -p small
#SBATCH --ntasks-per-node=16
#SBATCH --job-name="PtNich2"

cd $SLURM_SUBMIT_DIR   # run from the directory the job was submitted from
module unload openmpi
module load impi gnu
env
##ls -l
#srun -n32 /home/workgroups/dft-u/vasp.5.4.1/vasp_std
maps -d &
pollmach runstruct_vasp srun -n32

#end

It finds the best structures, runs the vasp jobs, and completes them. But I do not know where "maps" is writing its output files, such as "clusters.out", "eci.out", "fit.out", "predstr.out", etc.

Can someone help me direct all "maps" output files to the current directory, i.e. the one from which I submit my job to the queue?

If I run this interactively (meaning without submitting it to the queue) by separately typing (1) maps -d & and (2) pollmach runstruct_vasp &, it works perfectly fine. Unfortunately, I cannot run any jobs interactively on our clusters.

I very much appreciate your help.

Thanking you

with regards
Sumathy

Sumathy: This is more of a queuing-system question. Sometimes, synchronization of node-local filesystems is not quick. Perhaps try the same script with commands other than ATAT’s.
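For example, a bare-bones script along these lines (the job name and file names are just placeholders, not ATAT commands) would show whether files written by a batch job actually end up in the submission directory:

#!/bin/bash
#SBATCH --nodes=1 -p small
#SBATCH --job-name="wdtest"

cd $SLURM_SUBMIT_DIR
pwd > where_am_i.txt        # record the working directory the job actually sees
touch test_file_from_job    # should appear in the submission directory
#end

If those files appear in the directory you submitted from, the queuing side is fine and the problem lies elsewhere.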

ashaw: It may not be necessary to use MPI for structure enumeration. I have recently improved the efficiency of the algorithm (download the latest beta version) and it’s much faster now.
Even with a faster enumeration, though, you have to ask yourself: will you have enough disk space to save all the structures? :wink: