patch for python module

Installing as root:

Sorry, clash of philosophies! I don't have a package manager; I'm on OS X (I use MacPorts if the package exists, but it often doesn't). Generally, I just build and sudo make install everything. I am not used to multiple Pythons; I just use the system Python (which is also an Ubuntu concept, right?).

If users don't have root privileges, they already have a workaround: a ~/bin and ~/lib, or whatever. Just allow people to specify the install prefix with a CLI switch, and stress that you don't need to do it as root if that's problematic.
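
For concreteness, the whole thing could be as small as this hypothetical helper (purely illustrative; the option name, defaults, and file paths are all made up, not the actual install script):

    # hypothetical install helper, not the actual install.py
    import argparse, os, shutil, sysconfig

    parser = argparse.ArgumentParser(description="copy lammps.py and the shared lib")
    parser.add_argument("--prefix", default=sysconfig.get_paths()["purelib"],
                        help="install directory; defaults to site-packages, "
                             "point it at ~/lib or similar to avoid sudo")
    args = parser.parse_args()

    dest = os.path.expanduser(args.prefix)
    os.makedirs(dest, exist_ok=True)
    for f in ("python/lammps.py", "src/liblmp.so"):   # source paths are illustrative
        shutil.copy(f, dest)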

Install locations:

Step back a minute: into site-packages?! First, we're talking about a dynamically linked C/C++ library; that should go in /usr/lib or /usr/local/lib. Second, we install lammps.py (just the Python bit) into Python land, which can be the local or global site-packages without issue, since both are on the Python path already.

This is the standard C lib / Python bindings split: apt package 1, the C lib, goes into /usr/lib or somewhere else on LD_LIBRARY_PATH, and apt package 2, the Python bindings, goes into site-packages. Lo and behold, we can link against the C library and import it in Python without further configuration.
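
To make the split concrete, the Python half can stay a tiny pure-Python shim that just dlopens the C half through the loader's normal search rules. A minimal sketch (not the real lammps.py; the library name is only illustrative):

    from ctypes import CDLL

    # the dynamic loader resolves this name via its usual search path
    # (/usr/lib, /usr/local/lib, LD_LIBRARY_PATH, ...), so the pure-Python
    # module needs no hard-coded path to the C library
    lib = CDLL("liblmp.so")

Everything else in the module is then just thin ctypes wrappers around the C API.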

Local vs Global install:

It depends whether you expect users to rebuild regularly. As a developer, sure, you want some hacky local install (although in my Python development I just install everything to the system, since it takes less than a second and all my random shells pick up the changes). But _users_ of the software just want a library/binary dependency installed once and for all, don't they? The standard (most compatible) use case is to install all modules, build the executable, and dump it in /usr/bin, no? This is what I did the first time, so I could try it out. This is surely how we would build a LAMMPS .deb installer, if there were demand for it. Updating means repeating the (long) build process and the (short) copy process, which isn't significantly longer than just rebuilding.

In any case, making the build directory the install location is highly unorthodox, and it requires people to hack around with system configuration (env vars) anyway.

Copying stub libraries:

In setup_serial.py, you built these into one monolithic library binary. If your new 'make library' task doesn't do this, surely it should? The real MPI version (or any of the other libraries) can still be dynamically linked, and should just work since they're on the library path at build time. This will need separate make tasks for serial and MPI as usual.
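
Roughly, for the two cases (command lines and file names are illustrative, not the actual build):

    cc -c -fPIC mpi_stubs.c                               # dummy single-process MPI
    cc -shared -o liblmp_serial.so lmp_*.o mpi_stubs.o    # stub baked into the one lib
    cc -shared -o liblmp_mpi.so lmp_*.o -lmpi             # MPI build links the real MPI,
                                                          # found on the lib path as usual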

Conclusion:

You are thinking of your 'customers' as developers; I think of them as users. When you treat users like developers, they get confused; when you treat developers like users, they are delighted because it's less work for them. As far as I can see it's not that much extra work to make the install process easier, and it doesn't exclude a more complex setup for those who want it; they just roll their own "install" step.

Joe

joe,

there is a saying "the road to hell is paved with good intentions".

almost everything that you are advocating is something that i have
learned, in over 20 years of administering unix/linux machines, to
avoid as much as possible. i have also learned that it is impossible
to convince people of this when their argument is making things easy.
so i will just stop here and shut up after having stated my continued
disagreement.
i am not in the mood for a flame war today.

have a nice day and good luck,
     axel.

ok - this was a useful discussion - I learned some new
things about libraries and Python

And this comment from Joe is compelling - I did have my
developer hat on:

You are thinking of your 'customers' as developers; I think of them as users.
When you treat users like developers, they get confused; when you treat developers
like users, they are delighted because it's less work for them. As far as I can see it's not
that much extra work to make the install process easier, and it doesn't exclude a more
complex setup for those who want it; they just roll their own "install" step.

So I just released a 20Aug12 patch that I'm hoping
makes everyone happy (ok, probably too much to hope for).

You can now do

% make makeshlib
% make -f Makefile.shlib foo
% make install-python

This relies on any auxiliary LAMMPS lib being built
as a shared lib. The extra libs in LAMMPS itself now all use the -fPIC switch
when they create a lib*.a file, as Axel suggested. I didn't realize that would
make them usable as either static or dynamic libs,
and, as a bonus, that they are then sucked into the master shared LAMMPS
lib, so that there is no issue with the loader needing an
extended LD_LIBRARY_PATH to find them. So this
killed 2 problems with one change. Is there any performance
penalty with libs like Reax, MEAM, etc. being built with -fPIC if you're just
building stand-alone LAMMPS?
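
In other words, the same PIC objects serve both purposes. Roughly, with made-up file names (a sketch, not the actual Makefiles):

    cc -c -fPIC foo.c                            # package object is position-independent
    ar rcs libfoo.a foo.o                        # the lib*.a still works for a static link
    cc -shared -o liblmp.so lmp_*.o -L. -lfoo    # any members the main objects reference
                                                 # get folded into liblmp.so itself

That is why nothing extra has to end up on LD_LIBRARY_PATH.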

This means that after the first two steps, there are just
2 files Python needs to know about: lammps.py and liblmp.so.

You can do the last step, which runs python/install.py and copies
them into site-packages, as sudo if needed. Or you can skip that step and
set the two environment variables yourself.
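
For a bash-style shell that means something like this (the directories are just examples; point them at wherever lammps.py and liblmp.so actually live):

    export PYTHONPATH=$HOME/lammps/python:$PYTHONPATH          # so Python finds lammps.py
    export LD_LIBRARY_PATH=$HOME/lammps/src:$LD_LIBRARY_PATH   # so the loader finds liblmp.so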

Joe seemed to be saying it was not kosher to copy liblmp.so
into site-packages. But I think that many add-on packages
do this, at least on Linux boxes? I've got a few in mine from
other packages. It seems less desirable to
me to copy liblmp.so into some system dir like /usr/local/lib.

Any further suggestions?

Thanks,
Steve

This relies on any auxiliary LAMMPS lib being built
as a shared lib. The extra libs in LAMMPS itself now all use the -fPIC switch
when they create a lib*.a file, as Axel suggested. I didn't realize that would
make them usable as either static or dynamic libs,

actually, many packages/distributions build their lib*.a files
like this. makes things easier.

and, as a bonus, that they are then sucked into the master shared LAMMPS
lib, so that there is no issue with the loader needing an
extended LD_LIBRARY_PATH to find them. So this
killed 2 problems with one change. Is there any performance
penalty with libs like Reax, MEAM, etc. being built with -fPIC if you're just
building stand-alone LAMMPS?

yes. theoretically -fPIC has a performance penalty:
you lose one general purpose register. that impact
is worst on 32-bit x86, where you go from 5 to 4 general
purpose registers. how much performance loss that
amounts to is difficult to tell. could be around 1%,
could be up to 10%. when linux moved from a.out to
ELF to allow easy relocations in shared libraries, the
estimate was an average performance loss of about 5%.
on x86_64 it should be significantly less.
if people worry about these differences, they should
also compile with -fno-rtti -fno-exceptions (incompatible
only with AtC, IIRC) and, for non-PIC compiles, also
use -fomit-frame-pointer. so there is always a tradeoff
between convenience and performance.
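
for example, something along these lines in the machine makefile (illustrative settings only, not a blanket recommendation):

    CCFLAGS = -O2 -fno-rtti -fno-exceptions                          # ok for a PIC/shared build
    # CCFLAGS = -O2 -fno-rtti -fno-exceptions -fomit-frame-pointer   # non-PIC builds only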

i would expect that anybody using the python wrappers
is not looking for absolute performance on the lammps
side, but for the benefits of rapid program design through
python and thus shorter test/development cycles.

[...]

Any further suggestions?

not from me. i hope that we soon see some
great python-based lammps applications.

i am currently hanging out with theoretical physicists
that live firmly in the world of fortran and am now
struggling with teaching their badly scaling fortran codes
some new things without going completely insane over it.

programming predominantly in C/C++ for several years
has most certainly spoiled me rotten.

ciao,
   axel.