MPI error with OpenMP

Hi All,

I built lammps-11Jan13 with OpenMPI and OpenMP on a MacBook (OS X 10.8.2).

Running LAMMPS with both OpenMPI and OpenMP produced the MPI error below,
while running with OpenMPI alone produced no error.

Is this a bug in LAMMPS?

I attach the input files.

regards,
chibaf (Fumihiro CHIBA)

Using MPI and OpenMP, I executed the following command:
MacBook2009:lammps-err-docs-2012-11-10 chibaf$ time mpirun -x OMP_NUM_THREADS=2 -np 2 lmp_openmpi_11Jan13 -sf omp -in in.fcc111
LAMMPS (11 Jan 2013)
  using 2 OpenMP thread(s) per MPI task
[MacBook2009:12099] *** Process received signal ***
[MacBook2009:12099] Signal: Segmentation fault: 11 (11)
[MacBook2009:12099] Signal code: Address not mapped (1)
[MacBook2009:12099] Failing at address: 0x0
[MacBook2009:12099] [ 0] 2 libsystem_c.dylib 0x00007fff8d30d8ea _sigtramp + 26
[MacBook2009:12099] [ 1] 3 ??? 0x0000000000000000 0x0 + 0
[MacBook2009:12099] [ 2] 4 lmp_openmpi_11Jan13 0x0000000106ed08f1 _ZN9LAMMPS_NS6LAMMPSC2EiPPcP19ompi_communicator_t + 4321
[MacBook2009:12099] [ 3] 5 lmp_openmpi_11Jan13 0x0000000106ed7912 main + 66
[MacBook2009:12099] [ 4] 6 libdyld.dylib 0x00007fff8ece37e1 start + 0
[MacBook2009:12099] *** End of error message ***
[MacBook2009:12100] *** Process received signal ***
[MacBook2009:12100] Signal: Segmentation fault: 11 (11)
[MacBook2009:12100] Signal code: Address not mapped (1)
[MacBook2009:12100] Failing at address: 0x0
[MacBook2009:12100] [ 0] 2 libsystem_c.dylib 0x00007fff8d30d8ea _sigtramp + 26
[MacBook2009:12100] [ 1] 3 ??? 0x0000000000000000 0x0 + 0
[MacBook2009:12100] [ 2] 4 lmp_openmpi_11Jan13 0x000000010f9178f1 _ZN9LAMMPS_NS6LAMMPSC2EiPPcP19ompi_communicator_t + 4321
[MacBook2009:12100] [ 3] 5 lmp_openmpi_11Jan13 0x000000010f91e912 main + 66
[MacBook2009:12100] [ 4] 6 libdyld.dylib 0x00007fff8ece37e1 start + 0
[MacBook2009:12100] *** End of error message ***

data.fcc111 (79.3 KB)

in.fcc111 (1.04 KB)

> Hi All,
>
> I built lammps-11Jan13 with OpenMPI and OpenMP on a MacBook (OS X 10.8.2).
>
> Running LAMMPS with both OpenMPI and OpenMP produced the MPI error below,
> while running with OpenMPI alone produced no error.
>
> Is this a bug in LAMMPS?

Possibly. There have been many changes related to long-range
electrostatics, but nobody has updated the corresponding code in
USER-OMP or provided enough information to make the equivalent,
compatible changes in the /omp styles easy. :-(
For the most part, a compatibility layer is in place that still
keeps things working for the inputs I have available to test, but
there may be other changes that I have not been able to test and,
so far, I have not found the time to reverse engineer what was done
and update USER-OMP accordingly.

There are two things you can try (illustrated below):

- set the Coulomb cutoff explicitly rather than relying on the default, or
- turn off tabulated Coulomb.
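
Something along these lines in your input script. The pair style and
cutoff values here are placeholders, since in.fcc111 itself is not
shown in this thread; substitute whatever styles and cutoffs you
actually use:

pair_style lj/cut/coul/long 10.0 10.0   # second argument sets the coulomb cutoff explicitly
pair_modify table 0                     # disable tabulated coulomb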

> I attach the input files.

Thanks. I'll check them out.

axel.

Hi again,

it turns out the problem is related to Steve's recent change
to allow arbitrary-length input lines.

Please replace the file input.cpp with the version attached
to this e-mail, or apply the change listed below.

axel.

[[email protected]... src]$ git diff
diff --git a/src/input.cpp b/src/input.cpp
index 58c9bb4..e38a939 100644
--- a/src/input.cpp
+++ b/src/input.cpp
@@ -239,6 +239,8 @@ void Input::file(const char *filename)

char *Input::one(const char *single)
{
+ int len = strlen(single);
+ if (maxline-len < 2) reallocate(line,maxline,len+2);
   strcpy(line,single);

   // echo the command unless scanning for label

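For what it's worth, the crash mechanism appears to be an unchecked
strcpy() into a fixed-size buffer: Input::one() copies the command
into `line` (capacity `maxline`), and after the arbitrary-length-input
change nothing guaranteed the buffer was large enough first. The
"Failing at address: 0x0" lines in the backtrace suggest `line` was
still unallocated, and the crash inside the LAMMPS constructor would
be consistent with -sf omp issuing an extra package command through
Input::one() before any input file is read (an inference from the
backtrace, not verified against the 11Jan13 source). A minimal,
self-contained C++ sketch of the grow-before-copy pattern the patch
applies; reallocate() here is a simplified stand-in for the helper in
input.cpp, not its exact implementation:

#include <cstring>
#include <cstdlib>

// Stand-ins for the command buffer state kept in input.cpp.
static char *line = NULL;
static int maxline = 0;

// Simplified stand-in for Input::reallocate(): grow str to at least n bytes.
static void reallocate(char *&str, int &max, int n)
{
  max = n;
  str = (char *) realloc(str, max);
}

// Sketch of the patched Input::one(): grow `line` before copying, so an
// arbitrarily long command can never write past the end of the buffer.
char *one(const char *single)
{
  int len = (int) strlen(single);
  if (maxline - len < 2) reallocate(line, maxline, len + 2);
  strcpy(line, single);   // safe: line now holds at least len+2 bytes
  return line;
}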
input.cpp.gz (8.08 KB)

Hi Axel,

I applied your attached file to lammps-11Jan13 and ran LAMMPS
with OpenMP and MPI again.

The problem is fixed, as shown below.

Thank you.

regards,
chibaf (Fumihiro CHIBA)

-------------------->8----------------------
MacBook2009:lammps-err-docs-2012-11-10 chibaf$ time mpirun -x OMP_NUM_THREADS=2 -np 2 lmp_openmpi_11Jan13 -sf omp -in in.fcc111
LAMMPS (11 Jan 2013)
  using 2 OpenMP thread(s) per MPI task
Reading data file ...
  orthogonal box = (0 0 0) to (26.7974 30.943 26.7974)
  1 by 2 by 1 MPI processor grid
  1536 atoms
512 atoms in group 1
1024 atoms in group 2
Setting atom values ...
  512 settings made for charge
Setting atom values ...
  1024 settings made for charge
PPPM initialization ...
WARNING: System is not charge neutral, net charge = 1843.2 (pppm.cpp:256)
  G vector (1/distance)= 0.323877
  grid = 36 40 36
  stencil order = 5
  estimated absolute RMS force accuracy = 0.000168372
  estimated relative force accuracy = 1.16928e-05
  using double precision FFTs
  3d grid and FFT values/proc = 49923 25920
Hybrid pair style last /omp style morse
Last active /omp style is kspace_style pppm/omp
Setting up run ...
Memory usage per processor = 8.27377 Mbytes
Step Temp E_pair E_mol TotEng Press Volume
       0 200 -50424.764 0 -50385.081 1205026.6 22220.213
      10 2026.1232 -50393.453 0 -49991.441 1149225.3 22241.149
      20 1548.0066 -50302.745 0 -49995.598 1079161 22302.403
      30 1216.2509 -50157.143 0 -49915.822 1026968.3 22401.509
      40 943.7443 -49961.998 0 -49774.746 985017.02 22535.791
      50 721.49552 -49724.829 0 -49581.674 948937.31 22701.221
      60 541.32404 -49456.182 0 -49348.776 916140.83 22891.545
      70 396.76722 -49169.317 0 -49090.593 885450.87 23098.236
      80 282.89656 -48878.309 0 -48822.178 829025.67 23311.641
      90 198.63183 -48595.829 0 -48556.417 803174.03 23522.538
...
-------------------->8----------------------

Will be in the next patch - good catch.

Thanks,
Steve