Hello,
We use the PGI compilers. First, can anyone comment on LAMMPS performance with PGI vs. gcc 3.4?
Also, the new update will not compile with PGI 6.1 or 7.0. To make it compile (and thus run) you must build with -D_XOPEN_SOURCE=500 -D_ISOC99_SOURCE
If you look in /usr/include/features.h (on RHEL4) you can see what the _XOPEN_SOURCE macro means. The -D_ISOC99_SOURCE problem is PGI-only.
Without -D_XOPEN_SOURCE=500, usleep() will not work in variable.cpp.
Without -D_ISOC99_SOURCE, round() in variable.cpp will be undefined.
It compiles just fine under g++. We run an AMD cluster, so we tend towards the PGI compilers; thus we did not try icpc (the Intel compiler).
We compile like this:
make no-all
make yes-kspace
make yes-manybody
make yes-molecule
make yes-asphere
make yes-dpd
make -j5 linux (parallel make works fine)
I hope this helps others, and that others who know better (I am just an admin, not a user) can comment on PGI vs. GNU compilers for LAMMPS.
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
[email protected]...
(734)936-1985
> Hello,
brock,
> We use the PGI compilers. First, can anyone comment on LAMMPS
> performance with PGI vs. gcc 3.4?
don't use PGI. its c++ compiler is slow and quite broken.
gcc is more reliable and faster. with gcc there is one caveat,
though: it assumes ANSI compliance with aliasing rules by default,
which is not true for all of LAMMPS. please have a look at, e.g.,
Makefile.openmpi for how to deal with that in the most efficient way.
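for reference, the usual way to handle that caveat is to turn off gcc's strict-aliasing assumption in the makefile; a sketch only (the CCFLAGS variable name is from memory of the stock LAMMPS makefiles of that era -- check Makefile.openmpi itself for the exact line):

```makefile
# sketch: tell g++ not to assume ANSI aliasing rules (safe for LAMMPS)
CCFLAGS =	-O2 -fno-strict-aliasing
```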
[...]
It compiles just fine under g++. We run a AMD cluster, so we tend
towards PGI compilers. Thus we did not try icpc (intel) compilers.
it is a myth that PGI is better on AMD. our group has been suffering
through a _lot_ of pain on cray xt3 machines due to PGI compilers being
the only supported fortran 90/95 compilers. with c/c++ we luckily had
the GNU compilers.
actually i'm getting the best timings on AMD machines when using intel
compilers and optimizing for pentium 3. intel by default does not assume
ansi-aliasing compliance, but turning that on will give you a little
extra speedup. newer intel c/c++ compilers support the gcc syntax.
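for concreteness, a sketch of the compile line being described (flag spellings are from memory and should be checked against your compiler's man page; -ansi-alias is the icc switch for turning the aliasing assumption on, and -march=pentium3 uses the gcc-style syntax newer icc accepts; foo.cpp is a placeholder file name):

```shell
# intel c++ compiler on AMD: optimize for pentium 3, enable aliasing
# assumptions (sketch only -- verify flags with "icpc -help")
icpc -O2 -ansi-alias -march=pentium3 -o foo foo.cpp
```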
hope that helps,
axel.
> > Hello,
> brock,
> > We use the PGI compilers. First, can anyone comment on LAMMPS
> > performance with PGI vs. gcc 3.4?
> don't use PGI. its c++ compiler is slow and quite broken.
> gcc is more reliable and faster. with gcc there is one caveat,
> though: it assumes ANSI compliance with aliasing rules by default,
> which is not true for all of LAMMPS. please have a look at, e.g.,
> Makefile.openmpi for how to deal with that in the most efficient way.
Ok, will do.
[...]
> > It compiles just fine under g++. We run an AMD cluster, so we tend
> > towards the PGI compilers; thus we did not try icpc (the Intel compiler).
> it is a myth that PGI is better on AMD. our group has been suffering
> through a _lot_ of pain on cray xt3 machines due to PGI compilers being
> the only supported fortran 90/95 compilers. with c/c++ we luckily had
> the GNU compilers.
Interesting; we do not see the same behavior most of the time, but it varies from code to code. I will admit I was surprised it compiled with pgCC at all.
> actually i'm getting the best timings on AMD machines when using intel
> compilers and optimizing for pentium 3.
Interesting, I have seen this a few times also; most codes floating around here (grad-student one-off codes) do not show this behavior, but it is not uncommon.
> intel by default does not assume
> ansi-aliasing compliance, but turning that on will give you a little
> extra speedup. newer intel c/c++ compilers support the gcc syntax.
I will check with the user about getting a longer-running example of what she is doing and do some timings. Thanks for your thoughts and experiences.
> > > It compiles just fine under g++. We run an AMD cluster, so we tend
> > > towards the PGI compilers; thus we did not try icpc (the Intel compiler).
> >
> > it is a myth that PGI is better on AMD. our group has been suffering
> > through a _lot_ of pain on cray xt3 machines due to PGI compilers being
> > the only supported fortran 90/95 compilers. with c/c++ we luckily had
> > the GNU compilers.
> Interesting; we do not see the same behavior most of the time, but it varies
> from code to code. I will admit I was surprised it compiled with pgCC at all.
true. most "small" codes may be ok, but for any of the "large" package
codes g++ wins a lot. have a look at, e.g., the NAMD wiki.
> > actually i'm getting the best timings on AMD machines when using intel
> > compilers and optimizing for pentium 3.
> Interesting, I have seen this a few times also; most codes floating around
> here (grad-student one-off codes) do not show this behavior, but it is not
> uncommon.
this is particularly true for the very large package codes, and
especially plane-wave pseudopotential density functional codes.
even on intel core2 architecture cpus i'm better off with -march=pentium3
-mtune=core2 (i.e. by disabling SSE and related stuff, except for
BLAS/LAPACK and FFT).
> I will check with the user about getting a longer-running example of what
> she is doing and do some timings. Thanks for your thoughts and experiences.
no problem, you're welcome.
cheers,
axel.