Segmentation fault with read_data on multiple processors

Hello,

I have run into a problem with the latest version of LAMMPS (25 Jun 2011).
If I run my input script on multiple processors and read in a data file, I get a segmentation fault. If I run it on a single processor reading the data file, or on multiple processors generating the atoms within LAMMPS, it works fine.
The same script works without any errors using a previous LAMMPS version (31 Mar 2011), compiled with the same settings.

I have included the segmentation fault log and my files at the end of this email.
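For reference, the runs were launched roughly as follows (in.polymer is just a placeholder name for the input script below; lmp_hpcc is the executable that appears in the trace):

  mpirun -np 2 ./lmp_hpcc < in.polymer   # 2 processors + read_data  -> segfault
  ./lmp_hpcc < in.polymer                # 1 processor  + read_data  -> works
  mpirun -np 2 ./lmp_hpcc < in.polymer   # 2 processors + create_atoms (read_data commented out) -> works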

Thanks for any help,
Jenni

The segmentation fault I get is the following:

LAMMPS (25 Jun 2011)
Reading data file ...
  orthogonal box = (0 0 0) to (17.5 17.5 17.5)
[dev-intel09:30240] *** Process received signal ***
[dev-intel09:30240] Signal: Segmentation fault (11)
[dev-intel09:30240] Signal code: Address not mapped (1)
[dev-intel09:30240] Failing at address: 0x2aaa89eb0cd8
  1 by 1 by 2 processor grid
[dev-intel09:30240] [ 0] /lib64/libpthread.so.0 [0x2b5bd12c0c00]
[dev-intel09:30240] [ 1] lmp_hpcc(_ZN9LAMMPS_NS8ReadData5atomsEv+0x106) [0x701756]
[dev-intel09:30240] [ 2] lmp_hpcc(_ZN9LAMMPS_NS8ReadData7commandEiPPc+0x1a51) [0x6ff401]
[dev-intel09:30240] [ 3] lmp_hpcc(_ZN9LAMMPS_NS5Input15execute_commandEv+0x1150) [0x5e7a30]
[dev-intel09:30240] [ 4] lmp_hpcc(_ZN9LAMMPS_NS5Input4fileEv+0x2c2) [0x5eb4f2]
[dev-intel09:30240] [ 5] lmp_hpcc(main+0xad) [0x5f6c4d]
[dev-intel09:30240] [ 6] /lib64/libc.so.6(__libc_start_main+0xf4) [0x2b5bd13e9164]
[dev-intel09:30240] [ 7] lmp_hpcc(_ZNSt8ios_base4InitD1Ev+0x39) [0x4623c9]
[dev-intel09:30240] *** End of error message ***

Input script (this shows how I either generate the atoms in LAMMPS or read them from the data file):

# script for bead-spring polymer simulation

# run settings and read input positions
units lj
atom_style atomic
boundary p p p
#lattice fcc 1
#region 1 block 0 10 0 10 0 10
#create_box 50 1
#create_atoms 1 box
read_data data.polymer
mass * 1.0 # atom type, mass

# potentials
pair_style lj/cut 2.5
pair_coeff * * 1.0 1.0

velocity all create v_tempmax 423452 dist gaussian

# initial minimization
thermo_style one
thermo 500000
minimize 1.0e-12 1.0e-6 1000 10000
run 500000

data.polymer:

# file for lammps parameters and positions

          24 atoms
           1 atom types
  0 17.500000 xlo xhi
  0 17.500000 ylo yhi
  0 17.500000 zlo zhi

Atoms

           1 1 0.0000000E+00 0.0000000E+00 0.0000000E+00
           2 1 0.0000000E+00 3.750000 3.750000
           3 1 3.750000 0.0000000E+00 3.750000
           4 1 3.750000 3.750000 0.0000000E+00
           5 1 0.0000000E+00 0.0000000E+00 7.500000
           6 1 0.0000000E+00 3.750000 11.25000
           7 1 3.750000 0.0000000E+00 11.25000
           8 1 3.750000 3.750000 7.500000
           9 1 0.0000000E+00 7.500000 0.0000000E+00
          10 1 0.0000000E+00 11.25000 3.750000
          11 1 3.750000 7.500000 3.750000
          12 1 3.750000 11.25000 0.0000000E+00
          13 1 0.0000000E+00 7.500000 7.500000
          14 1 0.0000000E+00 11.25000 11.25000
          15 1 3.750000 7.500000 11.25000
          16 1 3.750000 11.25000 7.500000
          17 1 7.500000 0.0000000E+00 0.0000000E+00
          18 1 7.500000 3.750000 3.750000
          19 1 11.25000 0.0000000E+00 3.750000
          20 1 11.25000 3.750000 0.0000000E+00
          21 1 7.500000 0.0000000E+00 7.500000
          22 1 7.500000 3.750000 11.25000
          23 1 11.25000 0.0000000E+00 11.25000
          24 1 11.25000 3.750000 7.500000


Sorry, but I cannot reproduce it on my machine; no segfaults regardless of how many processors I use.

What OS and compiler are you using?
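If you are not sure, something like the following will print the relevant information (this assumes an MPI wrapper compiler named mpicxx; adjust to whatever you actually built LAMMPS with):

  uname -a            # operating system / kernel
  mpicxx --version    # underlying C++ compiler and its version
  mpirun --version    # MPI library and its version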

axel.

I do see a glitch that was introduced recently. I'll post a patch today, so please try again after that and see if it fixes your problem.

Steve