Adding custom path to Singularity recipe for VTK and LAMMPS, RHEL 7 cluster

Correct me if I’m reading the docs incorrectly, but unlike LIGGGHTS, there is no download-and-install option for VTK with LAMMPS. We do have LIGGGHTS installed with VTK, so I would like to build a Singularity container and take advantage of:

vtk_SYSINC = -I/usr/include/vtk-6.0
vtk_SYSLIB = -lvtkCommonCore-6.0 -lvtkIOCore-6.0 -lvtkIOXML-6.0 -lvtkIOLegacy-6.0 -lvtkCommonDataModel-6.0

Anyone know how to use these with a Singularity recipe?

I’m testing Princeton’s example LAMMPS recipe here.

That is correct. VTK is such a huge package with many dependencies. We let others worry about that.

Why not consult the original LAMMPS documentation and the corresponding source code?
LAMMPS ships with a collection of singularity/apptainer definition files (which we use for building containers for integration testing on diverse Linux platforms).

All the Ubuntu definition files contain the packages that install VTK so it can be used with LAMMPS.
We would also recommend compiling/installing LAMMPS using CMake instead of the traditional make.
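As a sketch of the recommended CMake route (the VTK_DIR path below is a placeholder; VTK_DIR is a standard CMake hint for locating an existing VTK install, and PKG_VTK is the LAMMPS switch for the VTK package):

```shell
# Hypothetical sketch: configure and build LAMMPS with CMake,
# enabling the VTK package against an existing VTK install.
cd lammps                 # the LAMMPS source tree
mkdir build && cd build
cmake -D PKG_VTK=yes \
      -D VTK_DIR=/usr/lib64/cmake/vtk-6.0 \
      ../cmake
cmake --build . -j 4
```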

Nice, I’m new to LAMMPS and was looking for something like this. And pardon the perhaps obvious-to-some-but-not-me question, but where are the LAMMPS source files in the Ubuntu recipes, e.g., this one?

Yes, the Princeton recipe uses CMake, but I keep getting errors, e.g.,

/.post.script: line 46: -D: command not found
/.post.script: line 58: -L: command not found

Those appeared when I tried to use the -D option, which is clearly not the correct way to set environment variables like vtk_SYSINC.
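For what it’s worth, %post in a Singularity recipe is just a shell script, so errors like “-D: command not found” usually mean a multi-line cmake invocation lost its backslash line continuations, and each “-D …” line was executed as a separate command. A hedged sketch of the fix (option values are placeholders):

```shell
%post
    # Each continued line must end with a backslash, or the shell
    # runs the following "-D ..." line as its own command.
    cmake -D PKG_VTK=yes \
          -D CMAKE_INSTALL_PREFIX=/usr/local \
          ../cmake
```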

They are not included; that would not make much sense. You create the container to provide an environment with all the requirements to build LAMMPS. Then you run the container both to build LAMMPS from source as usual and to run the resulting executable. That way, you don’t “lock” your container to one specific version of LAMMPS.
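As a sketch of that workflow (the image and file names below are placeholders):

```shell
# Build the container once; it holds only the build environment,
# not LAMMPS itself:
sudo singularity build lammps-env.sif ubuntu.def

# Use it to compile LAMMPS from a checkout on the host...
singularity exec lammps-env.sif cmake -S lammps/cmake -B build
singularity exec lammps-env.sif cmake --build build -j 4

# ...and to run the resulting executable:
singularity exec lammps-env.sif ./build/lmp -in in.melt
```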

Sorry, but this all does not make much sense to me. It looks as if you are mixing up instructions and settings for building a container with instructions and settings for compiling LAMMPS. Those are two different things.

This is outdated (there are more recent LAMMPS versions) and will not include VTK because it a) doesn’t install VTK development libraries and headers and b) doesn’t enable the VTK package when compiling LAMMPS (and not much else either).
As mentioned in a different post, embedding a LAMMPS executable in a container is not very useful for an individual user. It would be most useful if the container itself were provided for download in a suitable repository for easy distribution. This, however, is problematic for MPI parallel applications, since the specific type and version of the MPI library inside the container must match the MPI library used outside the container.
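To illustrate the MPI caveat: the usual “hybrid” pattern launches MPI on the host and runs one containerized process per rank, which only works when the host MPI and the MPI inside the container are compatible (the module name below is hypothetical):

```shell
# Host-launched MPI, containerized ranks (hybrid model).
# Requires the host OpenMPI and the container's OpenMPI
# to be ABI-compatible versions.
module load openmpi/4.1.x
mpirun -np 8 singularity exec lammps.sif lmp -in in.lj
```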

Right, I’m still a little green with customizing Singularity containers, and it was just me trying things.

I was really hoping for a way to include and link against the VTK we have in a non-standard path.
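One hedged possibility for that: bind-mount the host VTK tree into the container and point the vtk_SYSINC/vtk_SYSLIB settings at the mounted path. The /opt/vtk prefix below is a placeholder, and this only works if the host libraries are binary-compatible with the container’s runtime:

```shell
# Make the host VTK visible inside the container at the same path:
singularity shell -B /opt/vtk:/opt/vtk lammps-env.sif

# Inside the container, edit lib/vtk/Makefile.lammps in the LAMMPS
# source tree to point at the mounted install, e.g.:
#   vtk_SYSINC = -I/opt/vtk/include/vtk-6.0
#   vtk_SYSLIB = -L/opt/vtk/lib -lvtkCommonCore-6.0 -lvtkIOCore-6.0 \
#                -lvtkIOXML-6.0 -lvtkIOLegacy-6.0 -lvtkCommonDataModel-6.0

# Then enable the package and build (the package name may differ
# between LAMMPS versions):
make yes-vtk
make mpi
```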

Yep, found that out the hard way. Since we have modules that can be loaded ad hoc, I can match the OpenMPI or MPICH version in the container.

I’ll test the containers you linked, add the LAMMPS stanzas from the other recipe, and update the versions.

And thanks for replying on a Sunday night!

Why do you want to build a container at all? From your description it sounds like you can compile LAMMPS directly after loading the required environment modules.

As mentioned before, including LAMMPS in a container does not make sense unless you want to distribute it. Compiling and running LAMMPS without one is much simpler. The LAMMPS manual has detailed instructions.

That is the reason for wanting a container. But perhaps I am going down the least efficient path. We have a user who would simply like LAMMPS already compiled. That would mean having to install it on all compute nodes, and the -dev packages install and upgrade other packages that might break other users’ workflows.

I may just try using the VTK that comes available when you load the LIGGGHTS module.

It sounds like what you want might be achievable using Spack. You could create a Spack environment for LAMMPS, import as dependencies other packages that have already been installed (VTK, whichever MPI, whichever FFT, etc), and install (the few) remaining dependencies inside the Spack environment.

LAMMPS built inside the Spack environment will have access to the added dependencies after they are loaded with a Spack or LMod command – existing programs won’t see the new dependencies (unless the user for some reason loads the Spack environment).
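A minimal sketch of that Spack workflow (package and variant names are assumptions; check `spack info lammps` for the actual variant list):

```shell
# Register existing system installs instead of rebuilding them:
spack external find openmpi

# Create and activate an environment for this build:
spack env create lammps-env
spack env activate lammps-env

# Add and install LAMMPS; "+vtk" assumes the package exposes
# a vtk variant.
spack add lammps +vtk
spack install

# Later, a user makes it visible with:
spack load lammps
```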

Thanks we’d have to get approval to install that.

In testing the NVIDIA GPU containers I’m running into a library issue.

 singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd docker:// ./

Running Lennard Jones 8x4x8 example on 1 GPUS...
lmp: error while loading shared libraries: cannot open shared object file: No such file or directory

I’ve tried exporting some environment variables in the file, via LD_PRELOAD and LD_LIBRARY_PATH, and /usr/lib64/ does exist. Is there a way around this?
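A couple of things that might be worth trying. The SINGULARITYENV_ prefix is Singularity’s documented mechanism for setting environment variables inside the container, and host paths must also be bind-mounted to be visible inside (the image name and the /usr/bin/lmp path below are placeholders):

```shell
# Inspect which shared library is missing, from inside the container:
singularity exec --nv lammps_gpu.sif ldd /usr/bin/lmp

# Set LD_LIBRARY_PATH inside the container and bind the host
# directory so the libraries are actually reachable:
export SINGULARITYENV_LD_LIBRARY_PATH=/usr/lib64
singularity run --nv -B /usr/lib64:/usr/lib64 lammps_gpu.sif
```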

Spack doesn’t use admin privileges. You just need enough space (and free inodes).

I feel like you’re at the point of needing more help from your local sysadmin than from us. :slight_smile: