Compilation of Exciting Fluorine(/Oxygen) with HDF5

Dear Exciting Developers,

I have been working through the tutorial for the BRIXS code but need help compiling the Exciting main code with HDF5.

Have I understood correctly that compilation with HDF5 is required in order to store the excitonic eigenvectors in .h5 format?

To build with HDF5 support, I have checked the preprocessor directive used in the code (_HDF5_) and the include and link options for the parallel version of HDF5 available to me:

$ h5pfc -show
>> ftn -I/opt/cray/pe/hdf5-parallel/1.12.2.1/include -Wl,-rpath -Wl,/opt/cray/pe/hdf5-parallel/1.12.2.1/gnu/9.1/lib

I updated the make.inc accordingly with:

MPIF90_OPTS = -DMPI -D_HDF5_
MPI_LIBS = -I/opt/cray/pe/hdf5-parallel/1.12.2.1/include -Wl,-rpath -Wl,/opt/cray/pe/hdf5-parallel/1.12.2.1/gnu/9.1/lib

This leads to an error during compilation:

../../src/src_gw/task_chi0_r.f90:236:77:

  236 |         &               chi0(1,1,iomstart,iq),(/matsizmax,matsizmax,iomstart:iomend/))
      |                                                                             1
Error: Syntax error in array constructor at (1)

I have traced this back to the following _HDF5_ block in src/src_gw/task_chi0_r.f90, at the call on line 235:

230 #ifdef _HDF5_
231         write(cik,'(I4.4)') ik
232         path = "/qpoints/"//trim(adjustl(cik))
233         if (.not.hdf5_exist_group(fgwh5,"/qpoints",cik)) &
234         &  call hdf5_create_group(fgwh5,"/qpoints",cik)
235         call hdf5_write(fgwh5,path,"chi0", &
236         &               chi0(1,1,iomstart,iq),(/matsizmax,matsizmax,iomstart:iomend/))
237 #endif

Unfortunately, I cannot determine the cause of the error, or whether it can be mitigated with an additional compiler flag, so any help would be greatly appreciated.

As a new user of this forum I cannot upload attachments, so I have created a GitHub repository with the output from the make command and my make.inc file. I am compiling with GNU compilers, version gcc/11.2.0.

Finally, as stated in the title of this topic, I was originally working with Exciting version Oxygen, but I have verified that the same problem occurs when compiling Exciting version Fluorine.

Thank you very much, and best regards

Joshua Elliott

Research Scientist
Diamond Light Source
United Kingdom

Hi Joshua,

It’s nothing to do with compiler flags. When _HDF5_ is defined, the block of code you shared is included in the source, and it looks like a bug to me.

This isn’t surprising: we don’t test with HDF5 in our CI, and I doubt this code has been touched in a few years.

(/matsizmax,matsizmax,iomstart:iomend/) is definitely invalid Fortran: an array constructor may only contain scalar values (or implied-do loops), not a subscript triplet like iomstart:iomend. But I can’t actually work out what should be there from an initial inspection 😅
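For illustration only, the call would at least be syntactically valid with scalar extents, perhaps something like this (a guess, untested, and the intended third extent may well be different):

```fortran
! hypothetical repair: pass the number of frequency points as a scalar
! extent instead of the subscript triplet iomstart:iomend
call hdf5_write(fgwh5, path, "chi0", &
&               chi0(1,1,iomstart,iq), &
&               (/matsizmax, matsizmax, iomend-iomstart+1/))
```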

Can you try commenting out lines 231 - 236 in your example? This is in GW and I doubt the RIXS code will use it. If it does, I’ll have to take a closer look.

Cheers,
Alex

Alex,

Thanks for your fast reply!

I understand. Commenting out this block of code has let me compile, and I have also been able to generate the .h5 file required to continue with the RIXS simulation.

So that you are aware of it, I found one other instance in the GW part of the code where the same invalid Fortran syntax has been used:

src/src_gw/task_epsilon.f90 (lines 157-164; the offending call is at lines 162-163):
162 !        call hdf5_write(fgwh5,path,"epsilon", &
163 !        &               epsilon_(1,1,1,iq),(/matsizmax,matsizmax,1:freq%nomeg/))

Best regards

Josh

Awesome, glad to hear!

Thanks for the heads-up. There is some development underway to completely refactor the HDF5 output, at which point I am sure we will catch many of the problems in the current usage in the GW and BSE modules.

Cheers,
Alex

Hello,

Sorry to necro the thread, but I have also had a similar (albeit different) problem with compiling exciting.

My MPIF90 settings in make.inc look like:

MPIF90 = mpif90
MPIF90_OPTS = -DMPI -D_HDF5_
MPI_LIBS = -I/share/pkg.7/hdf5/1.8.21/install/include -L/share/pkg.7/hdf5/1.8.21/install/lib /share/pkg.7/hdf5/1.8.21/install/lib/libhdf5hl_fortran.a /share/pkg.7/hdf5/1.8.21/install/lib/libhdf5_hl.a /share/pkg.7/hdf5/1.8.21/install/lib/libhdf5_fortran.a /share/pkg.7/hdf5/1.8.21/install/lib/libhdf5.a -lsz -lz -ldl -lm -Wl,-rpath -Wl,/share/pkg.7/hdf5/1.8.21/install/lib -lhdf5

Yet I get the compilation error:

mpif90  -O3 -march=native -ffree-line-length-0  -cpp -DXS -DISO -DLIBXC -DMPI -D_HDF5_ -Ifinclude   -c  ../../src/src_gw/mod_hdf5.f90
../../src/src_gw/mod_hdf5.f90:43:12:

         use hdf5
            1
Fatal Error: Can't open module file ‘hdf5.mod’ for reading at (1): No such file or directory
compilation terminated.

Yes, hdf5.mod is present in the expected location:

~/exciting/exciting% ls /share/pkg.7/hdf5/1.8.21/install/include/hdf5*
/share/pkg.7/hdf5/1.8.21/install/include/hdf5.h  /share/pkg.7/hdf5/1.8.21/install/include/hdf5_hl.h  /share/pkg.7/hdf5/1.8.21/install/include/hdf5.mod

(among other modules)

But it’s unclear why it is throwing this error, since I believe I am linking correctly to the hdf5.mod?
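In case it is useful, here is a minimal test program (hypothetical, say test_hdf5.f90) that should compile only if the compiler can actually read hdf5.mod:

```fortran
! minimal check that 'use hdf5' resolves; compile (without linking) via e.g.
!   mpif90 -I/share/pkg.7/hdf5/1.8.21/install/include -c test_hdf5.f90
program test_hdf5
  use hdf5             ! fails at compile time if hdf5.mod cannot be read
  implicit none
  integer :: ierr
  call h5open_f(ierr)  ! initialise the HDF5 Fortran interface
  call h5close_f(ierr)
end program test_hdf5
```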

Any insight? I am very interested in doing these RIXS calculations as well.

Update: this error turned out to be due to my HDF5 installation having been compiled with an older version of gfortran. Since gfortran module (.mod) files are compiler-version-specific, a hdf5.mod produced by an older gfortran cannot be read by a newer one.

I now have a version of HDF5 compiled with gfortran 8.5. The openmpi and hdf5 modules are:

```
1) openmpi/4.1.5   2) hdf5/1.10.10
```

But now I am getting an mpi_bcast error when trying to compile exciting with MPI and SMP:

```bash
mpif90    -O3 -march=broadwell -mtune=intel -ffree-line-length-0  -cpp -DXS -DISO -DLIBXC -fopenmp -DUSEOMP -DMPI -D_HDF5_ -I/share/pkg.8/hdf5/1.10.10/install/include -Ifinclude   -c     ../../src/src_xs/m_putgetexcitons.f90
../../src/src_xs/m_putgetexcitons.f90:427:96:

       call mpi_bcast(vqlmt_,shape(vqlmt_), MPI_DOUBLE_PRECISION,0,mpiglobal%comm,mpiglobal%ierr)
                                                                                                1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
../../src/src_xs/m_putgetexcitons.f90:428:89:

       call mpi_bcast(ngridk_,shape(ngridk_), MPI_INTEGER,0,mpiglobal%comm,mpiglobal%ierr)
                                                                                         1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
../../src/src_xs/m_putgetexcitons.f90:502:87:

       call mpi_bcast(ikmap_,shape(ikmap_), MPI_INTEGER,0,mpiglobal%comm,mpiglobal%ierr)
                                                                                       1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
../../src/src_xs/m_putgetexcitons.f90:503:95:

       call mpi_bcast(ik2ikqmtm_,shape(ik2ikqmtm_), MPI_INTEGER,0,mpiglobal%comm,mpiglobal%ierr)
                                                                                               1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
../../src/src_xs/m_putgetexcitons.f90:504:95:

       call mpi_bcast(ik2ikqmtp_,shape(ik2ikqmtm_), MPI_INTEGER,0,mpiglobal%comm,mpiglobal%ierr)
                                                                                               1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
../../src/src_xs/m_putgetexcitons.f90:505:91:

       call mpi_bcast(kousize_,shape(kousize_), MPI_INTEGER,0,mpiglobal%comm,mpiglobal%ierr)
                                                                                           1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
../../src/src_xs/m_putgetexcitons.f90:506:91:

       call mpi_bcast(koulims_,shape(koulims_), MPI_INTEGER,0,mpiglobal%comm,mpiglobal%ierr)
                                                                                           1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
../../src/src_xs/m_putgetexcitons.f90:507:85:

       call mpi_bcast(smap_,shape(smap_), MPI_INTEGER,0,mpiglobal%comm,mpiglobal%ierr)
                                                                                     1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
../../src/src_xs/m_putgetexcitons.f90:508:93:

       call mpi_bcast(smap_rel_,shape(smap_rel_), MPI_INTEGER,0,mpiglobal%comm,mpiglobal%ierr)
                                                                                             1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
../../src/src_xs/m_putgetexcitons.f90:509:82:

       call mpi_bcast(vkl0_,shape(vkl0_), MPI_REAL,0,mpiglobal%comm,mpiglobal%ierr)
                                                                                  1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
../../src/src_xs/m_putgetexcitons.f90:510:80:

       call mpi_bcast(vkl_,shape(vkl_), MPI_REAL,0,mpiglobal%comm,mpiglobal%ierr)
                                                                                1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
../../src/src_xs/m_putgetexcitons.f90:511:88:

       call mpi_bcast(evalstmp,shape(evalstmp), MPI_REAL,0,mpiglobal%comm,mpiglobal%ierr)
                                                                                        1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
../../src/src_xs/m_putgetexcitons.f90:598:94:

     call mpi_bcast(evals_, shape(evals_), MPI_DOUBLE_COMPLEX,0, mpiglobal%comm,mpiglobal%ierr)
                                                                                              1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
../../src/src_xs/m_putgetexcitons.f90:599:92:

     call mpi_bcast(rvec_, shape(rvec_), MPI_DOUBLE_COMPLEX,0, mpiglobal%comm,mpiglobal%ierr)
                                                                                            1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
../../src/src_xs/m_putgetexcitons.f90:602:89:

     call mpi_bcast(koulims_,shape(koulims_), MPI_INTEGER,0,mpiglobal%comm,mpiglobal%ierr)
                                                                                         1
Error: There is no specific subroutine for the generic ‘mpi_bcast’ at (1)
```
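If it helps narrow things down: as far as I can tell, shape() returns a rank-1 integer array, whereas the count argument of mpi_bcast is a scalar integer, so the generic interface in the mpi module finds no matching specific subroutine. Something like the following would at least match the interface (a guess on my part, untested):

```fortran
! size() returns the total element count as a scalar, which is what the
! count dummy argument of mpi_bcast expects; shape() returns an array
call mpi_bcast(vqlmt_, size(vqlmt_), MPI_DOUBLE_PRECISION, 0, &
               mpiglobal%comm, mpiglobal%ierr)
```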

Any idea how to fix this properly?