Problem building LAMMPS with Python

Dear all,

I am trying to compile LAMMPS with the Python module inside a virtual environment, but LAMMPS (lmp_mpi) cannot find liblammps_mpi.so. I do not know how to solve this problem.

I use LAMMPS version 29Oct2020 with OpenMPI 4.0.2 on Ubuntu 16.04. I had to add “-std=c++11” to my makefile in the MINE folder. LAMMPS compiled with make and OpenMPI, and the packages KSPACE, MANYBODY, MOLECULE, and RIGID work.

As explained in the 29Oct2020 manual:

1/ I create the virtual environment with “python3 -m venv $HOME/myenv”

2/ The virtual environment is activated with “source $HOME/myenv/bin/activate”:
“which python” gives:
/home/pierre/myenv/bin/python
and “ls -l” in ~/myenv/bin gives:
-rw-rw-r-- 1 pierre pierre 1894 Sep 4 14:21 activate
-rw-rw-r-- 1 pierre pierre 843 Sep 4 14:21 activate.csh
-rw-rw-r-- 1 pierre pierre 1983 Sep 4 14:21 activate.fish
-rw-rw-r-- 1 pierre pierre 8834 Sep 4 14:21 Activate.ps1
-rwxrwxr-x 1 pierre pierre 235 Sep 4 14:21 pip
-rwxrwxr-x 1 pierre pierre 235 Sep 4 14:21 pip3
-rwxrwxr-x 1 pierre pierre 235 Sep 4 14:21 pip3.9
lrwxrwxrwx 1 pierre pierre 7 Sep 4 14:21 python -> python3
lrwxrwxrwx 1 pierre pierre 35 Sep 4 14:21 python3 -> /home/pierre/miniconda3/bin/python3
lrwxrwxrwx 1 pierre pierre 7 Sep 4 14:21 python3.9 -> python3

3/ In the folder $HOME/lammps/src, the four packages are installed with “make yes-KSPACE yes-…”.

4/ I compile LAMMPS in shared mode with “make -j 4 mode=shared mpi_2” (mpi_2 refers to the modified makefile in the MINE directory).
As usual, I get two warnings:
mpicxx -g -O3 -std=c++11 -fPIC -DLAMMPS_GZIP -DLAMMPS_MEMALIGN=64 -DMPICH_SKIP_MPICXX -DOMPI_SKIP_MPICXX=1 -c ../variable.cpp
../variable.cpp: In member function ‘int LAMMPS_NS::Variable::next(int, char**)’:
../variable.cpp:712:27: warning: ignoring return value of ‘size_t fread(void*, size_t, size_t, FILE*)’, declared with attribute warn_unused_result [-Wunused-result]
fread(buf,1,64,fp);
^
mpicxx -g -O3 -std=c++11 -fPIC -DLAMMPS_GZIP -DLAMMPS_MEMALIGN=64 -DMPICH_SKIP_MPICXX -DOMPI_SKIP_MPICXX=1 -c ../read_restart.cpp
../read_restart.cpp: In member function ‘void LAMMPS_NS::ReadRestart::check_eof_magic()’:
../read_restart.cpp:1198:33: warning: ignoring return value of ‘size_t fread(void*, size_t, size_t, FILE*)’, declared with attribute warn_unused_result [-Wunused-result]
fread(str,sizeof(char),n,fp);
^
At the end of the compilation:
size ../lmp_mpi_2
text data bss dec hex filename
2734 712 8 3454 d7e ../lmp_mpi_2
make[1]: Leaving directory '/home/pierre/lammps-29Oct20/src/Obj_shared_mpi_2'

=> (myenv) (base) pierre@mrsm2p2itc-s30:~/lammps-29Oct20/src$ ls -l li*

  -rw-rw-r-- 1 pierre pierre 177235534 Sep  1 16:43 liblammps_mpi_2.a
  -rwxrwxr-x 1 pierre pierre  63762208 Sep  4 14:28 liblammps_mpi_2.so
  -rw-rw-r-- 1 pierre pierre 131869982 Oct  8  2021 liblammps_serial_2.a
  lrwxrwxrwx 1 pierre pierre        18 Sep  4 14:28 liblammps.so -> liblammps_mpi_2.so

5/ I install the Python module with “make install-python”:

Installing LAMMPS Python module version 29Oct2020 into site-packages folder
running install
running build
running build_py
creating build
creating build/lib
copying lammps.py -> build/lib
running install_lib
copying build/lib/lammps.py -> /home/pierre/myenv/lib/python3.9/site-packages
byte-compiling /home/pierre/myenv/lib/python3.9/site-packages/lammps.py to lammps.cpython-39.pyc
running install_data
copying /home/pierre/lammps-29Oct20/src/liblammps.so -> /home/pierre/myenv/lib/python3.9/site-packages
running install_egg_info
Writing /home/pierre/myenv/lib/python3.9/site-packages/lammps-29Oct2020-py3.9.egg-info

=> At this stage, everything seems all right. liblammps.so is copied into $HOME/myenv/lib/python3.9/site-packages and $HOME/myenv/lib64/python3.9/site-packages.

=> However, liblammps_mpi_2.so is copied into $HOME/myenv/lib/python3.9/site-packages but not into $HOME/myenv/lib64/python3.9/site-packages. <=

=> When I run LAMMPS with “mpirun -np 2 lmp_mpi_2 -in TEST_LAMMPS_MPI_PYTHON.in”, I get:

lmp_mpi_2: error while loading shared libraries: liblammps_mpi_2.so: cannot open shared object file: No such file or directory

Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.

lmp_mpi_2: error while loading shared libraries: liblammps_mpi_2.so: cannot open shared object file: No such file or directory

=> If I create a LAMMPS instance:

>>> import lammps
>>> lmp = lammps.lammps()
LAMMPS (29 Oct 2020)
>>> lmp = lammps.lammps(name='mpi_2')
Traceback (most recent call last):
File "/home/pierre/myenv/lib/python3.9/site-packages/lammps.py", line 248, in __init__
self.lib = CDLL(join(modpath,"liblammps_%s" % name + lib_ext),
File "/home/pierre/miniconda3/lib/python3.9/ctypes/__init__.py", line 382, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /home/pierre/myenv/lib/python3.9/site-packages/liblammps_mpi_2.so: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/pierre/myenv/lib/python3.9/site-packages/lammps.py", line 254, in __init__
self.lib = CDLL("liblammps_%s" % name + lib_ext,RTLD_GLOBAL)
File "/home/pierre/miniconda3/lib/python3.9/ctypes/__init__.py", line 382, in __init__
self._handle = _dlopen(self._name, mode)
OSError: liblammps_mpi_2.so: cannot open shared object file: No such file or directory

=> If I copy liblammps_mpi_2.so to $HOME/myenv/lib64/python3.9/site-packages:

>>> lmp = lammps.lammps(name='mpi_2')
LAMMPS (29 Oct 2020)

=> But the problem remains:

“mpirun -np 2 lmp_mpi_2 -in TEST_LAMMPS_MPI_PYTHON.in” gives:

lmp_mpi_2: error while loading shared libraries: liblammps_mpi_2.so: cannot open shared object file: No such file or directory

Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.

lmp_mpi_2: error while loading shared libraries: liblammps_mpi_2.so: cannot open shared object file: No such file or directory
(myenv) (base) pierre@mrsm2p2itc-s30:~/TEST_LAMMPS_PROG_PYTHON/TEST_MPI_2$

Sorry, but I suspect that none of the LAMMPS developers will want to spend the time to debug your use case for that old a LAMMPS version. For the current version of LAMMPS you should just use “make install-python” and it should detect your virtual environment and install the library as liblammps.so so it can be loaded without the name argument. We also recommend using CMake for compilation.

Thank you for your answer.

I do not think it is a problem of version (2020 is not so old). I used “make install-python” after compiling LAMMPS. liblammps.so is copied into ~/myenv/lib/python3.9/site-packages and ~/myenv/lib64/python3.9/site-packages. But when I try to run LAMMPS with OpenMPI, LAMMPS tries to find liblammps_mpi_2.so. In lammps/src, there is a symlink from liblammps.so to liblammps_mpi_2.so (liblammps.so -> liblammps_mpi_2.so).

I have the same problem if I choose to leave the shared library and Python module in the source/compilation folders. For this, I export PYTHONPATH and LD_LIBRARY_PATH pointing at the right places (without using “make install-python”).
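
For illustration, the settings I use in that case look like this (paths given as an example, matching my source tree):

export PYTHONPATH=$HOME/lammps-29Oct20/python:$PYTHONPATH
export LD_LIBRARY_PATH=$HOME/lammps-29Oct20/src:$LD_LIBRARY_PATH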

Considering how rapidly the LAMMPS Python interface has been refactored in recent years, 2020 is very old. The “make install-python” procedure was refactored and rewritten in Jan/Feb 2022 to use setuptools instead of the obsolete distutils.

The fact that you are running on a very much outdated Ubuntu 16.04 (even Ubuntu 18.04 LTS is no longer supported) makes matters (much) worse.

Update: I just checked in the git repository. The LAMMPS Python support was massively refactored from a single file into a proper package starting on 15 December 2020. So with your 29 October 2020 version you are missing a major part of how LAMMPS with Python currently works.

As far as I can tell from the limited information provided, the problem is that you are using “name=‘mpi_2’” when creating the LAMMPS instance, but you should not be using that option in a virtual environment.
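
In a virtual environment, the instance should therefore be created without the name argument, e.g. (a minimal sketch):

import lammps
lmp = lammps.lammps()   # loads the generic liblammps.so that "make install-python" copied into the venv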

However, my previous statement stands. We (as in “we, the LAMMPS developers”) will not want to spend any time on resolving such issues unless you can reproduce them with the current version of LAMMPS.

Taking your observations into account, I used the latest stable version (i.e. 2Aug2023). I compiled LAMMPS with OpenMPI 3.1.4 (GCC 8.1) and Python 3.8.6 in a virtual environment. It seems that I succeeded.

However, when I run the example “fix_python_invoke.in” (mpirun -np 2 lmp_mpi -in fix_python_invoke.in), I get:
lmp_mpi: error while loading shared libraries: liblammps_mpi.so: cannot open shared object file: No such file or directory
lmp_mpi: error while loading shared libraries: liblammps_mpi.so: cannot open shared object file: No such file or directory

I made the compilation on a cluster.
1/ I loaded two modules, for OpenMPI and Python.
2/ I used “python3 -m venv $HOME/myenv” and “source $HOME/myenv/bin/activate”.
3/ In lammps/src, I compiled LAMMPS with “make mode=shared mpi” and “make install-python”.

At the end of the installation I get:
Successfully built lammps-2023.8.2-cp38-cp38-linux_x86_64.whl
Installing wheel into virtual environment
Processing ./lammps-2023.8.2-cp38-cp38-linux_x86_64.whl
Installing collected packages: lammps
Successfully installed lammps-2023.8.2

To run LAMMPS in another terminal,
1/ I copied the executable lmp_mpi and the example script into a folder
2/ I used a slurm command in order to run in interactive mode
3/ I loaded the two modules for OpenMPI and Python.
4/ I activated the virtual environment.
5/ I ran “mpirun -np 2 lmp_mpi -in fix_python_invoke.in”

That has nothing to do with Python; it is due to your lmp_mpi executable being linked to a shared library (liblammps_mpi.so) that cannot be found by the dynamic linker, since it is not in a system folder or in a folder listed in LD_LIBRARY_PATH. This does not affect the LAMMPS Python module, because a copy of liblammps.so is placed right next to the Python code in the site-packages lammps folder.

The specific example you are referring to is a bit more complex, since it runs Python from LAMMPS and then imports the LAMMPS module in Python; at that step it will not import the LAMMPS shared library that is bundled with the module, but rather directly use a pointer to the LAMMPS class instance created by lmp_mpi. This is some of the complexity resulting from having the PYTHON package (C++ source code that embeds a Python interpreter into LAMMPS itself) versus the LAMMPS Python module (written in Python using ctypes), which can be imported either into the embedded Python interpreter or into a standalone one.
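
To illustrate the difference, a minimal sketch (using the same callback pattern as the bundled example input):

from lammps import lammps

# standalone interpreter: the module dlopens the liblammps.so bundled in site-packages
lmp = lammps()

# embedded interpreter (function called by fix python/invoke): no library is loaded,
# the module just wraps the already existing instance created by lmp_mpi
def end_of_step_callback(ptr):
    L = lammps(ptr=ptr)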

A simple workaround to avoid the ‘cannot open shared object file’ issue would be to compile LAMMPS twice: once in mode=shared, after which you can do “make install-python”, and then in mode=static (i.e. the default), which gives you a LAMMPS executable linked to a static version of the LAMMPS library, so no LD_LIBRARY_PATH changes are required. The sequence is spelled out below.
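
Spelled out, that would be something like:

make mode=shared mpi    # builds liblammps_mpi.so and a shared-linked lmp_mpi
make install-python     # installs the python module plus liblammps.so into the active venv
make mode=static mpi    # i.e. the default; rebuilds lmp_mpi statically linked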

It seems that the static compilation overwrites the shared-mode lmp_mpi, and I get an error message. I have to point out that CMake is not installed on the cluster.

“mpirun -np 2 lmp_mpi -in fix_python_invoke.in” gives:

LAMMPS (2 Aug 2023)
Lattice spacing in x,y,z = 1.6795962 1.6795962 1.6795962
Created orthogonal box = (0 0 0) to (16.795962 16.795962 16.795962)
1 by 1 by 2 MPI processor grid
Created 4000 atoms
using lattice units in orthogonal box = (0 0 0) to (16.795962 16.795962 16.795962)
create_atoms CPU = 0.002 seconds

==> ERROR: LAMMPS is not built with Python embedded (../variable.cpp:879) <==

Last command: python end_of_step_callback here """
from __future__ import print_function
from lammps import lammps
def end_of_step_callback(lmp):
    L = lammps(ptr=lmp)
    t = L.extract_global("ntimestep")
    print("### END OF STEP ###", t)
def post_force_callback(lmp, v):
    L = lammps(ptr=lmp)
    t = L.extract_global("ntimestep")
    print("### POST_FORCE ###", t)
"""

Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.


mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

Process name: [[1987,1],0]
Exit code: 1

This is yet another, different error, caused by not including the PYTHON package when compiling LAMMPS.

You advised me to compile LAMMPS twice. This is what I did:

1/ LAMMPS compilation in shared mode with OpenMPI
2/ installation of the Python module with “make install-python”
3/ LAMMPS compilation in static mode with OpenMPI

So, the question is how to compile LAMMPS with Python in a virtual environment using make, considering that CMake is not installed on the cluster. Should we use an “export …” command?

I forgot to specify that the compilation is carried out in the /scratch folder and not in the /home one.

There are multiple unrelated errors that need to be addressed.

  • installation of the LAMMPS python module so that loading it can “find” the liblammps.so shared library file (this is corrected with using the current LAMMPS version)
  • incorrect use of “name=” setting in “cmdargs” when loading the LAMMPS python module. This is avoided by not using that setting. It is not required because the “make install-python” step will copy the shared library under the generic name without a machine suffix.
  • running the shared mode LAMMPS executable without setting LD_LIBRARY_PATH as needed (this is avoided by building LAMMPS as second time in static mode)
  • running an input example that not only uses the LAMMPS Python module but also the LAMMPS PYTHON package. This is corrected by doing “make yes-python” before compiling either of the binaries. However, you have not (yet) done this step. In fact, depending on what your use case is, you may need to include other packages, too.

The problem here is that you are conflating the different issues into a single one and confusing which solution belongs to which problem.

This would not be a problem and does not make a difference. If you have the tools available to download and build LAMMPS from source, you could just as well download and install CMake. However, that does not as such address the issue of installing packages that are required by your (test) inputs.

This makes no difference.

I had forgotten the PYTHON package.
What I want to do with Python is to write, every n femtoseconds, to binary files the position (c_dsp[1], [2], [3], computed with “compute dsp all displace/atom”) and the velocity of He atoms striking an SiO2 surface. Maybe the PYTHON package is necessary because of the extraction of computed values.

I will compile LAMMPS with the PYTHON package and try to run fix_python_invoke. I will tell you if it works.

The compilation with the PYTHON package failed. The procedure is the following:

A/
module load userspace/all
module load openmpi/gcc81/psm2/3.1.4
module load python3/3.8.6

B/ in lammps/src :
make no-all purge
python3 -m venv /scratch/pmagnico/myenv
source /scratch/pmagnico/myenv/bin/activate
make yes-PYTHON
make -j 5 mode=shared mpi

I get:
(myenv) [pmagnico@login01 src] make yes-PYTHON
Installing package PYTHON
(myenv) [pmagnico@login01 src] make -j 5 mode=shared mpi
Gathering installed package information (may take a little while)
make[1]: Entering directory '/scratch/pmagnico/lammps-2Aug2023/src'
Gathering git version information
make[1]: Leaving directory '/scratch/pmagnico/lammps-2Aug2023/src'
Compiling LAMMPS for machine mpi
File “/trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/python3.8/site.py”, line 178
file=sys.stderr)
^
SyntaxError: invalid syntax
make[1]: Entering directory '/scratch/pmagnico/lammps-2Aug2023/src/Obj_shared_mpi'
make[1]: Leaving directory '/scratch/pmagnico/lammps-2Aug2023/src/Obj_shared_mpi'
File “/trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/python3.8/site.py”, line 178
file=sys.stderr)
^
SyntaxError: invalid syntax
make[1]: Entering directory '/scratch/pmagnico/lammps-2Aug2023/src/Obj_shared_mpi'
mpicxx -g -O3 -std=c++11 -fPIC -DLAMMPS_GZIP -DLAMMPS_MEMALIGN=64 -DLMP_PYTHON -DMPICH_SKIP_MPICXX -DOMPI_SKIP_MPICXX=1 -c ../main.cpp
File “/trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/python3.8/site.py”, line 178
file=sys.stderr)
^
SyntaxError: invalid syntax
mpicxx -g -O3 -std=c++11 -fPIC -DLAMMPS_GZIP -DLAMMPS_MEMALIGN=64 -DLMP_PYTHON -DMPICH_SKIP_MPICXX -DOMPI_SKIP_MPICXX=1 -c ../variable.cpp
File “/trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/python3.8/site.py”, line 178
file=sys.stderr)
^
SyntaxError: invalid syntax
mpicxx -g -O3 -std=c++11 -fPIC -DLAMMPS_GZIP -DLAMMPS_MEMALIGN=64 -DLMP_PYTHON -DMPICH_SKIP_MPICXX -DOMPI_SKIP_MPICXX=1 -c ../atom.cpp
File “/trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/python3.8/site.py”, line 178
file=sys.stderr)
^
SyntaxError: invalid syntax
mpicxx -g -O3 -std=c++11 -fPIC -DLAMMPS_GZIP -DLAMMPS_MEMALIGN=64 -DLMP_PYTHON -DMPICH_SKIP_MPICXX -DOMPI_SKIP_MPICXX=1 -c ../input.cpp
File “/trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/python3.8/site.py”, line 178
file=sys.stderr)
^
SyntaxError: invalid syntax
mpicxx -g -O3 -std=c++11 -fPIC -DLAMMPS_GZIP -DLAMMPS_MEMALIGN=64 -DLMP_PYTHON -DMPICH_SKIP_MPICXX -DOMPI_SKIP_MPICXX=1 -c ../read_restart.cpp
File “/trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/python3.8/site.py”, line 178
file=sys.stderr)
^
SyntaxError: invalid syntax
mpicxx -g -O3 -std=c++11 -fPIC -DLAMMPS_GZIP -DLAMMPS_MEMALIGN=64 -DLMP_PYTHON -DMPICH_SKIP_MPICXX -DOMPI_SKIP_MPICXX=1 -c ../pair_python.cpp
File “/trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/python3.8/site.py”, line 178
file=sys.stderr)
^
SyntaxError: invalid syntax
In file included from ../pair_python.cpp:27:
../python_compat.h:17:10: fatal error: Python.h: No such file or directory
#include <Python.h>
^~~~~~~~~~
compilation terminated.
make[1]: *** [pair_python.o] Error 1
make[1]: *** Waiting for unfinished jobs...
make[1]: Leaving directory '/scratch/pmagnico/lammps-2Aug2023/src/Obj_shared_mpi'
make: *** [mpi] Error 2

Copying the file lib/python/Makefile.lammps.python3 to Makefile.lammps leads to the same errors.

That means the machine you are compiling on has a broken Python installation. I cannot fix that from remote. You can confirm it by running

python-config --includes

and

python-config --ldflags --embed

On my machine I get for these:

$ python-config --includes
-I/usr/include/python3.11 -I/usr/include/python3.11
$ python-config --ldflags --embed
 -L/usr/lib64 -lpython3.11 -ldl  -lm 

If you can manually figure out what the equivalent settings on your machine would be, you can manually build the lib/python/Makefile.lammps file. In my case this “manual” file would be (based on the python-config output above):

python_SYSINC = -I/usr/include/python3.11 -I/usr/include/python3.11
python_SYSLIB =  -L/usr/lib64 -lpython3.11 -ldl  -lm 
python_SYSPATH =
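
If python-config itself is unusable, the same information can sometimes be obtained from the interpreter directly through the sysconfig module (no guarantee on a broken installation such as yours):

import sysconfig
print(sysconfig.get_paths()["include"])      # directory containing Python.h
print(sysconfig.get_config_var("LIBDIR"))    # directory containing the libpython library
print(sysconfig.get_config_var("VERSION"))   # e.g. "3.8", i.e. link with -lpython3.8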

With these two commands, I get error messages:

python-config --includes:
File “/trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/python3.8/site.py”, line 178
file=sys.stderr)
^
SyntaxError: invalid syntax

python-config --ldflags --embed:
File “/trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/python3.8/site.py”, line 178
file=sys.stderr)
^
SyntaxError: invalid syntax

I will contact the “mesocentre” in order to solve the problem.

I manually changed the Makefile.lammps and compiled LAMMPS with the PYTHON package.

Running LAMMPS with fix_python_invoke.in, I get error messages:

mpirun -np 2 lmp_mpi -in fix_python_invoke.in

LAMMPS (2 Aug 2023)
Lattice spacing in x,y,z = 1.6795962 1.6795962 1.6795962
Created orthogonal box = (0 0 0) to (16.795962 16.795962 16.795962)
1 by 1 by 2 MPI processor grid
Created 4000 atoms
using lattice units in orthogonal box = (0 0 0) to (16.795962 16.795962 16.795962)
create_atoms CPU = 0.001 seconds
Generated 0 of 0 mixed pair_coeff terms from geometric mixing rule
Neighbor list info …
update: every = 20 steps, delay = 0 steps, check = no
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d
bin: standard
Setting up Verlet run …
Unit style : lj
Current step : 0
Time step : 0.005
Per MPI rank memory allocation (min/avg/max) = 2.805 | 2.805 | 2.805 Mbytes
Step Temp E_pair E_mol TotEng Press
0 3 -6.7733681 0 -2.2744931 -3.7033504
Traceback (most recent call last):
File "<string>", line 9, in post_force_callback
File "/scratch/pmagnico/myenv/lib/python3.8/site-packages/lammps/core.py", line 155, in __init__
self.lib.lammps_extract_setting.argtypes = [c_void_p, c_char_p]
File "/trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/python3.8/ctypes/__init__.py", line 386, in __getattr__
func = self.__getitem__(name)
File "/trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/python3.8/ctypes/__init__.py", line 391, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: lmp_mpi: undefined symbol: lammps_extract_setting
Traceback (most recent call last):
File "<string>", line 9, in post_force_callback
File "/scratch/pmagnico/myenv/lib/python3.8/site-packages/lammps/core.py", line 155, in __init__
self.lib.lammps_extract_setting.argtypes = [c_void_p, c_char_p]
File "/trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/python3.8/ctypes/__init__.py", line 386, in __getattr__
func = self.__getitem__(name)
File "/trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/python3.8/ctypes/__init__.py", line 391, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: lmp_mpi: undefined symbol: lammps_extract_setting
ERROR: Fix python/invoke post_force() method failed (../fix_python_invoke.cpp:119)
Last command: run 250

Ok. I have been able to reproduce this. It is a consequence of using the statically linked LAMMPS executable. Unfortunately, my suggestion to use the statically linked binary to avoid the difficulty of finding the lammps shared library cannot be used in your use case.

To make this work, you need to delete the lmp_mpi binary and recompile/link it in shared mode. Then you need to copy liblammps_mpi.so to a more permanent location, e.g. into the lib folder inside your virtual environment and then extend the LD_LIBRARY_PATH environment variable to include that folder.

Let’s say your virtual environment is in $HOME/lammpsenv, then you would copy liblammps_mpi.so from lammps/src/ to $HOME/lammpsenv/lib and set:

export "LD_LIBRARY_PATH=$HOME/lammpsenv/lib:${LD_LIBRARY_PATH}"

From then on you should be able to do:

ldd ./lmp_mpi

and it should list $HOME/lammpsenv/lib/liblammps_mpi.so as one of the dependencies.
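
You can also verify from Python that the dynamic loader now resolves the library (a quick check that is independent of LAMMPS):

import ctypes
ctypes.CDLL("liblammps_mpi.so")   # raises OSError if the loader cannot find the library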

I copied the library into the environment folder and extended LD_LIBRARY_PATH. The command “ldd ./lmp_mpi” gives:

(myenv) [pmagnico@skylake047 TEST_LAMMPS_PYTHON_MPI]$ ldd ./lmp_mpi
linux-vdso.so.1 => (0x00002aaaaaacd000)

liblammps_mpi.so => /scratch/pmagnico/myenv/lib/liblammps_mpi.so (0x00002aaaaaccf000)

libpython3.8.so.1.0 => /trinity/shared/apps/tr17.10/x86_64/python3-3.8.6/lib/libpython3.8.so.1.0 (0x00002aaaab5ef000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00002aaaabb97000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00002aaaabdce000)
libdl.so.2 => /lib64/libdl.so.2 (0x00002aaaabfea000)
libutil.so.1 => /lib64/libutil.so.1 (0x00002aaaac1ee000)
libm.so.6 => /lib64/libm.so.6 (0x00002aaaac3f1000)
libmpi_cxx.so.40 => /trinity/shared/apps/tr17.10/x86_64/openmpi-gcc81-psm2-3.1.4/lib/libmpi_cxx.so.40 (0x00002aaaac6f3000)
libmpi.so.40 => /trinity/shared/apps/tr17.10/x86_64/openmpi-gcc81-psm2-3.1.4/lib/libmpi.so.40 (0x00002aaaac90e000)
libstdc++.so.6 => /trinity/shared/apps/custom/x86_64/gcc-8.1.0/lib64/libstdc++.so.6 (0x00002aaaacc15000)
libgcc_s.so.1 => /trinity/shared/apps/custom/x86_64/gcc-8.1.0/lib64/libgcc_s.so.1 (0x00002aaaacf99000)
libc.so.6 => /lib64/libc.so.6 (0x00002aaaad1b1000)
libfreebl3.so => /lib64/libfreebl3.so (0x00002aaaad57f000)
/lib64/ld-linux-x86-64.so.2 (0x00002aaaaaaab000)
libopen-rte.so.40 => /trinity/shared/apps/tr17.10/x86_64/openmpi-gcc81-psm2-3.1.4/lib/libopen-rte.so.40 (0x00002aaaad782000)
libopen-pal.so.40 => /trinity/shared/apps/tr17.10/x86_64/openmpi-gcc81-psm2-3.1.4/lib/libopen-pal.so.40 (0x00002aaaada38000)
librt.so.1 => /lib64/librt.so.1 (0x00002aaaadce2000)
libz.so.1 => /lib64/libz.so.1 (0x00002aaaadeea000)
libhwloc.so.5 => /trinity/shared/apps/tr17.10/x86_64/hwloc-1.11.8/lib/libhwloc.so.5 (0x00002aaaae100000)
libnuma.so.1 => /lib64/libnuma.so.1 (0x00002aaaae33e000)
libpciaccess.so.0 => /lib64/libpciaccess.so.0 (0x00002aaaae54a000)
libxml2.so.2 => /lib64/libxml2.so.2 (0x00002aaaae754000)
libevent-2.1.so.6 => /trinity/shared/apps/tr17.10/x86_64/libevent/2.1.8/lib/libevent-2.1.so.6 (0x00002aaaaeabe000)
libevent_pthreads-2.1.so.6 => /trinity/shared/apps/tr17.10/x86_64/libevent/2.1.8/lib/libevent_pthreads-2.1.so.6 (0x00002aaaaed12000)
libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00002aaaaef15000)
liblzma.so.5 => /lib64/liblzma.so.5 (0x00002aaaaf378000)

and the python example works. So it seems that the problem is solved.

I have a question about the python functions:

In the script in.fix_python_invoke,
we have the python command ‘python end_of_step_callback here """’ and two defined functions named ‘end_of_step_callback’ and ‘post_force_callback’, and we also have two fixes, ‘fix 2 … end_of_step …’ and ‘fix 3 … post_force …’. It is confusing.
I feel that the names of the python command and of the defined functions have nothing to do with the callbacks and can be changed to whatever we want.
However, in this case, why is there 1 argument in one case and 2 arguments in the other?
If several functions are defined in one python block, they cannot all be named ‘end_of_step_callback’ or ‘post_force_callback’.
In the python command chapter of the manual, the python command and the functions have the same name (pForce, factorial, loop, …).
In the fix python/invoke command chapter, the fix IDs are character strings, while in the script in.fix_python_invoke they are integers.

These are subject to the constraints and requirements of the python command.

Because that is how the corresponding methods of a fix are called. This interface is meant to allow prototyping fix commands. The Fix::post_force() method has “int vflag” as an argument, while Fix::end_of_step() has no arguments. This fix emulates that: vflag is passed when the Python function is to be called in the post_force() phase of a timestep, while for end_of_step no argument is given.
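
In other words, the two Python functions must mirror those C++ signatures, as in the bundled example input:

def end_of_step_callback(lmp):        # mirrors Fix::end_of_step(), no extra argument
    pass

def post_force_callback(lmp, vflag):  # mirrors Fix::post_force(int vflag)
    pass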

First of all, thank you for your help with the installation of LAMMPS last week.

Now, I try to write results to a binary file using a python function (see below: file = open(fichier,"ab")). However, the file is written in ASCII format. Same problem if I use open(fichier,"b") or open(fichier,"wb").

python function:

python end_of_step_callback here """

from __future__ import print_function
from lammps import lammps

def end_of_step_callback(lmp):

    try:

        import os
        import struct

        pid = os.getpid()
        fich_1 = 'resultat_'
        fich_2 = str(pid)
        fichier = fich_1 + fich_2 + '.dat'

        L = lammps(ptr=lmp)
        t = L.extract_global("ntimestep", 0)
        print(pid, "### END_OF_STEP ###", t)

        nlocal = L.extract_global("nlocal")

        id    = L.extract_atom("id")
        types = L.extract_atom("type")
        x     = L.extract_atom("x")
        v     = L.extract_atom("v")
        c_dsp = L.extract_compute("dsp", 1, 2)

        print("opening binary file", fichier)
        file = open(fichier, "ab")

        # count the He atoms (type 5) owned by this MPI rank
        n_he = 0
        for i in range(nlocal):
            if types[i] == 5:
                n_he = n_he + 1

        # header: timestep and He count, written as a plain text string
        valeur_t = " %d %d \n " % (t, n_he)
        file.write(valeur_t)

        # per-atom data, packed as binary
        for i in range(nlocal):
            if types[i] == 5:
                file.write(struct.pack("i", id[i]))
                file.write(struct.pack("i", types[i]))
                file.write(struct.pack("d", c_dsp[i][0]))
                file.write(struct.pack("d", c_dsp[i][1]))
                file.write(struct.pack("d", c_dsp[i][2]))
                file.write(struct.pack("d", v[i][0]))
                file.write(struct.pack("d", v[i][1]))
                file.write(struct.pack("d", v[i][2]))

        file.close()

    except Exception as e:
        print(e)

"""
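
One remark on the snippet above: valeur_t is a str, and calling file.write() with a str on a file opened in binary mode ("ab") raises a TypeError in Python 3, which the except clause here silently prints. For a fully binary file, the header would have to be packed as bytes as well, e.g. (a sketch using the same t and n_he):

file.write(struct.pack("ii", t, n_he))   # timestep and He-atom count as two binary ints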