Installing as root:
Sorry, clash of philosophies! I don't have a package manager; I'm on OS X (I use MacPorts if the package exists, but it often doesn't). Generally, I just build and sudo make install everything. I'm not used to multiple pythons; I just use the system python (which is also an Ubuntu concept, right?).
If users don't have root privileges, they'll already have a workaround: a ~/bin and ~/lib, or whatever. Just allow people to specify the install prefix with a CLI switch, and stress that you don't need to do it as root if that's problematic.
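A sketch of what such a switch might look like in a python install script; the --prefix flag name and the lib/bin layout here are my assumptions, not anything in the existing build:

```python
import argparse
import os

parser = argparse.ArgumentParser(description="install helper (hypothetical)")
parser.add_argument("--prefix", default="/usr/local",
                    help="install prefix; non-root users can pass e.g. ~/.local")

# simulate a non-root user choosing a home-directory prefix
args = parser.parse_args(["--prefix", "~/.local"])

prefix = os.path.expanduser(args.prefix)
lib_dir = os.path.join(prefix, "lib")   # where the shared library would land
bin_dir = os.path.join(prefix, "bin")   # where any executable would land
```

Defaulting to /usr/local keeps the sudo behaviour everyone expects, while the switch covers the no-root case with zero extra machinery.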
Install locations:
Step back a minute: to site-packages?! First, we're talking about a dynamically linked C/C++ library; that should go in /usr/lib or /usr/local/lib. Second, we install lammps.py (just the python bit) to python land, which can be local or global site-packages without issue, since both are on the python path already.
This is the standard C lib / python bindings split: apt package 1, the C lib, goes into /usr/lib or somewhere else on the library search path, and apt package 2, the python bindings, goes into site-packages. Lo and behold, we can link against the C library and import in python without further configuration.
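To spell out why no further configuration is needed once the library is in a standard location: the python side can simply dlopen it by name. Using libm as a stand-in for liblammps below, so the sketch actually runs anywhere; the liblammps name itself is my assumption:

```python
from ctypes import CDLL
from ctypes.util import find_library

# find_library searches the standard system locations (/usr/lib and
# friends) -- exactly where "apt package 1" would have put liblammps.so.
# libm stands in for liblammps here so the sketch is self-contained.
libname = find_library("m")
lib = CDLL(libname)  # the python bindings would wrap this handle
```

This is the whole point of the split: once the C library is on the loader path, the bindings need no environment variables and no knowledge of the build tree.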
Local vs Global install:
It depends whether you expect users to rebuild regularly. As a developer, sure, you want some hacky local install (although in my python development I just install everything to the system, since it takes less than a second and all my random shells pick up the changes). But _users_ of the software just want a library/binary dependency installed once and for all, don't they? The standard (most compatible) use case is to install all modules, build the executable, and dump it in /usr/bin, no? This is what I did the first time, so I could try it out. This is surely how we would build a lammps .deb installer, if there were demand for it. Updating involves the (long) build step plus the (short) copy step again, which isn't significantly longer than just rebuilding.
In any case, making the build directory the install location is highly unorthodox, and requires people to hack around with system configuration (environment variables) anyway.
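For concreteness, this is the kind of per-user configuration an in-tree "install" forces on everyone; the paths are hypothetical, and in practice this ends up as export lines in a shell rc file, written out in python here just to spell out the two paths involved:

```python
import os
import sys

# every user of an in-tree build has to point both the dynamic loader
# and python at the build directory themselves:
build_dir = "/home/alice/lammps/src"  # hypothetical build tree

# normally `export LD_LIBRARY_PATH=...` in the shell, so the loader
# can find the shared library that never made it to /usr/lib:
os.environ["LD_LIBRARY_PATH"] = build_dir

# and python needs telling where lammps.py lives, since it never
# made it to site-packages:
sys.path.insert(0, os.path.join(build_dir, "python"))
```

Two environment tweaks per user, versus zero for a conventional install.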
Copying stub libraries:
In setup_serial.py, you built these into one monolithic library binary. If your new 'make library' task doesn't do this, surely it should? The real MPI version (or any of the other libraries) can still be dynamically linked, and should just work, since they're on the library path at build time. This will need separate make targets for serial and MPI, as usual.
Conclusion:
You are thinking of your 'customers' as developers; I think of them as users. When you treat users like developers they get confused; when you treat developers like users they are delighted, because it's less work for them. As far as I can see it's not that much extra work to make the install process easier, and it doesn't exclude a more complex setup for those who want it; they just roll their own "install" step.
Joe