Currently, in my C++ code I am calling LAMMPS as a shared library to update the neighbor list and perform energy calculations each time an atom is moved. The issue is that LAMMPS is using all of the threads, which disrupts my communication across multiple processes. Is there a way to run LAMMPS on a single thread per MPI process? Or is the best approach to split the threads among the MPI processes? I have looked at 4.1. Basics of running LAMMPS — LAMMPS documentation, which suggests setting OMP_NUM_THREADS=2 in the Bash terminal; would this be a proper way forward?
Your description of what you are doing is rather confusing. You seem to be mixing up MPI processes and OpenMP threads. It is rather straightforward to compile LAMMPS without OpenMP support so that it uses only one thread. Moreover, even if OpenMP support is compiled in, LAMMPS does not follow the typical OpenMP behavior of occupying all available CPU cores with threads; rather, it restricts itself to 1 thread unless the OMP_NUM_THREADS environment variable is set or the number of threads is changed via an input-script command or a command-line flag.
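For completeness, here is a minimal shell sketch of pinning LAMMPS to a single OpenMP thread per MPI process; the binary name and process count are placeholders for your own launch line:

```shell
# Force one OpenMP thread per MPI process before launching the application.
export OMP_NUM_THREADS=1
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"

# Then launch as usual; each MPI rank's LAMMPS instance will use 1 thread.
# (hypothetical binary name "my_app"):
# mpirun -np 4 ./my_app
```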
However, if you are talking about MPI processes, then LAMMPS' behavior depends on a) whether you have compiled in MPI support or created a serial library (using the provided STUBS MPI library) and b) how you create the LAMMPS instance. By default it will use the MPI world communicator, but it is also possible to first split that communicator and then run LAMMPS only on a subset of the MPI processes. See the "simple" examples under examples/COUPLE for a minimal example of how to do that.
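The core of that pattern can be sketched as follows. This is a hedged outline modeled on the examples/COUPLE/simple example, assuming MPI and the LAMMPS library are installed; the color choice (first 2 ranks run LAMMPS) and variable names are illustrative:

```cpp
#include <mpi.h>
#include "lammps.h"        // LAMMPS_NS::LAMMPS

using namespace LAMMPS_NS;

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  int me;
  MPI_Comm_rank(MPI_COMM_WORLD, &me);

  // Put the first 2 ranks in the LAMMPS group (color 1), the rest in color 0.
  int color = (me < 2) ? 1 : 0;
  MPI_Comm comm_lammps;
  MPI_Comm_split(MPI_COMM_WORLD, color, 0, &comm_lammps);

  if (color == 1) {
    // Only these ranks instantiate LAMMPS, on the sub-communicator.
    LAMMPS *lmp = new LAMMPS(0, nullptr, comm_lammps);
    // ... drive LAMMPS here, e.g. read an input file or issue commands ...
    delete lmp;
  }

  MPI_Comm_free(&comm_lammps);
  MPI_Finalize();
  return 0;
}
```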
So, can you please provide some code sample(s) to resolve the ambiguity in your description?
Thank you for your detailed response! My apologies, I am fairly new to some of this, but you were able to understand what I was getting at. The simple.cpp code is exactly what I am looking at and want to accomplish: running LAMMPS only on a subset of the MPI processes.
I looked at the documentation for MPI_Comm_split (the 4-argument version) and it seems to be along the lines of what I need. I have one last question about it, though. In the simple code the key argument has a value of 0. If the key were instead a unique value like "me" from MPI_Comm_rank, and LAMMPS were created with new LAMMPS(0, NULL, comm_lammps), would each MPI process have a wholly unique LAMMPS object associated with it?
This is more an MPI question than a LAMMPS question. MPI follows a "share nothing" parallelization model. That means each MPI process is an independent process (it may even run on a different computer when running on a cluster), and thus those are indeed completely independent objects.
For more details please refer to MPI tutorials and similar documents.
p.s.: even when using C++ as your programming language, you will likely find the C-style library interface easier to use (it is certainly better documented).
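As a sketch of what that looks like, the C-style interface takes the same sub-communicator; this assumes the LAMMPS library is installed and that comm_lammps was created with MPI_Comm_split as above (the function name run_on_subset and the example command are illustrative):

```c
#include <mpi.h>
#include "library.h"   /* LAMMPS C library interface */

/* Illustrative helper: create and drive LAMMPS on a sub-communicator. */
void run_on_subset(MPI_Comm comm_lammps)
{
  void *lmp = NULL;
  lammps_open(0, NULL, comm_lammps, &lmp);  /* create LAMMPS on the sub-comm */
  lammps_command(lmp, "units lj");          /* issue input-script commands   */
  lammps_close(lmp);                        /* destroy the instance          */
}
```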