Memory Access Error in GULP MPI

Hello,

I’ve been trying to use GULP with MPI support and finally got it compiled. However, when I run with mpirun I get an error message spammed to the terminal:

[80310e306ff2:23940] Read -1, expected 3145728, errno = 1

It does not kill the program; it just keeps printing variations of this error. It looks like GULP is making some call the system does not like. From the message I would guess it is a failed memory read, but I can’t be sure.
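For reference, the errno in that message can be decoded directly; a minimal Python sketch (nothing GULP-specific is assumed here):

```python
import errno
import os

# Decode errno = 1 from the MPI message.
print(errno.errorcode[1])   # 'EPERM'
print(os.strerror(1))       # 'Operation not permitted'
```

So errno = 1 is EPERM, a permissions failure on the read rather than a bad address. With Open MPI this kind of failure is often the shared-memory single-copy transport being blocked (for example inside a container), though that is only a guess here.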

Does anyone know anything about this?

Thanks,
Ethan

Note that I am not redirecting input with the < operator when executing the command, which is what I originally thought was causing the error. I am also able to run the example files fine. My input file uses the keywords opti conv phon prop, and execution breaks in the optimization step.

Without access to your input files I’m afraid it’s impossible to know what’s going on. The error messages are from MPI, so they are hard for GULP to control and down to the specific MPI you are using and how it was compiled. What triggered them may be an issue with the input, so if you can post it then we can take a look to see what the problem is.

I attached my input below. It is 4096 atoms of amorphous silicon with the Stillinger-Weber (SW) potential; my goal is to get the phonon modes. Both the keywords currently in the file and just “phon” give the error. I think with the full keyword set the error is thrown during the optimization, and with just “phon” the issue is in the potential evaluation, but that is all I could figure out.

Thanks!

aSi_4096.gin (132.1 KB)

I think the problems you are having may be due to an error in the input. You’ve specified what look like cartesian coordinates for the atoms, but used “fractional” as the option to input them, and so the structure will be a horrible mess. I can tell that some atoms are on top of each other (from the constraints that would have appeared in your output), which means atom pairs at zero distance, and that is a recipe for disaster and obviously unphysical. If you change “fractional” to “cartesian” things may well work OK.
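To make the distinction concrete, here is a minimal numpy sketch of how the two styles relate (the cell edge and position are made-up numbers, not taken from your file):

```python
import numpy as np

# Hypothetical cubic cell; lattice vectors as rows (Angstrom).
cell = np.diag([43.4, 43.4, 43.4])

# A made-up cartesian position (Angstrom) and its fractional equivalent.
cart = np.array([2.35, 40.10, 17.82])
frac = cart @ np.linalg.inv(cell)   # cartesian -> fractional
print(frac)                         # ~[0.054, 0.924, 0.411]
```

If cartesian numbers are fed in under “fractional”, values like 40.10 get wrapped back into the unit cell, scrambling the structure and stacking atoms on top of each other.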

Well that’s embarrassing, haha. I had a bug in the script I use to generate the .gin file.

Thank you very much!

Unfortunately, that did not fix the issue. The coordinate mix-up was definitely not helping, but I still get the read errors as soon as the optimization starts or the potential gets invoked.

I attached a smaller input file that I know runs correctly when I do not use MPI. I have not been able to get the 4096-atom run to complete single-threaded.

aSi216.gin (10.8 KB)

FWIW, your input runs perfectly in parallel using the Materials Studio executable.

Oh, thanks. That suggests I compiled something wrong, though I can run all of the GULP examples just fine.

Materials Studio is a paid product, correct?

Very much so (it is a commercial product); see BIOVIA Materials Studio - BIOVIA - Dassault systèmes® for more information.

I can confirm that the aSi216 input runs fine in parallel on my Mac, and so I suspect the issue is to do with the MPI build on your system (e.g. not all of the parallel libraries have been built with the same version of MPI, and so they don’t talk to each other correctly).
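One quick way to look for that kind of mismatch is to list the MPI libraries the executable actually links; a small sketch, assuming Linux with ldd on the PATH and a binary named gulp (adjust as needed):

```python
import subprocess
import sys

def mpi_libs(binary="gulp"):
    """Return the MPI-related shared libraries that the binary links against."""
    out = subprocess.run(["ldd", binary], capture_output=True, text=True).stdout
    return [line.strip() for line in out.splitlines() if "mpi" in line.lower()]

if __name__ == "__main__":
    for lib in mpi_libs(sys.argv[1] if len(sys.argv) > 1 else "gulp"):
        print(lib)
```

Every entry should resolve to the same MPI installation as the mpirun you launch with.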

Regarding your 4096-atom run on a single core, you might want to consider the size of the problem. A phonon calculation at k points means a complex dynamical matrix of more than 12,000 x 12,000 (3N x 3N for N = 4096 atoms), so you might want to check whether you have enough memory on your computer to handle this (it will also take some time to diagonalise).
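As a back-of-envelope check (a sketch; diagonalisation workspace will add more on top of this):

```python
# Rough memory estimate for the dynamical matrix of 4096 atoms.
natoms = 4096
n = 3 * natoms                       # 12,288 rows and columns
bytes_per_element = 16               # double-precision complex
gib = n * n * bytes_per_element / 2**30
print(f"{n} x {n} complex matrix ~ {gib:.2f} GiB")   # ~2.25 GiB per copy
```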

Yeah, that’s why I was trying to use MPI. I’ll try recompiling everything on a clean Linux install. Hopefully that fixes things.

Thanks for all the help.