[lammps-users] Hybrid potential problem

Hello LAMMPS users and developers! In the course of my thesis I need to model a metal substrate with a layer of liquid above it. I would like to describe the particles of the metal substrate (made of gold) with an EAM potential. The particles in the liquid layer are described by the Lennard-Jones potential, and the metal-liquid interaction is also described by the Lennard-Jones potential. Obviously, this task requires the hybrid pair style, and with it I ran into a problem: the Lennard-Jones potential starts to behave differently inside the hybrid potential. Let me demonstrate the problem with a minimal example. I simulate a cube of argon particles whose interactions are described by the Lennard-Jones potential. Two supposedly equivalent ways of specifying the potential are compared: (1) the plain Lennard-Jones potential and (2) a hybrid potential in which both EAM and Lennard-Jones are defined. If you look at the terminal output, in the second case we get very large values of pressure and potential energy. How can I get the same result in both cases? The EAM potential must be kept for the gold.

Code for LAMMPS:

units metal
dimension 3
boundary p p p
atom_style atomic
variable Ar_mass equal 39.95 # [amu]
variable Au_mass equal 196.97 # [amu]
variable Ar_eps equal 0.0103 # epsilon Ar-Ar, [eV]
variable Ar_sigma equal 3.4 # sigma Ar-Ar, [A]
variable Ar_crit equal 3.5*${Ar_sigma} # cut-off distance, [A]
variable rho equal 1.41 # density [g/cm^3]

# simulation box

region simbox block 0 50 0 50 0 50 side in units box
create_box 2 simbox
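
The posted input breaks off before the pair-style commands, so for context here is a hypothetical sketch of how the two variants the poster describes are typically written (the file name Au_u3.eam appears later in the log; the cross-interaction LJ parameters below are placeholders, since the actual Ar-Au mixing values were not included in the post):

```
# Variant 1: plain Lennard-Jones for all atoms
pair_style lj/cut ${Ar_crit}
pair_coeff 1 1 ${Ar_eps} ${Ar_sigma}

# Variant 2: hybrid potential, EAM for gold plus Lennard-Jones
pair_style hybrid eam lj/cut ${Ar_crit}
pair_coeff 2 2 eam Au_u3.eam                   # Au-Au via EAM
pair_coeff 1 1 lj/cut ${Ar_eps} ${Ar_sigma}    # Ar-Ar via LJ
pair_coeff 1 2 lj/cut ${Ar_eps} ${Ar_sigma}    # Ar-Au via LJ (placeholder mixing values)
```

With pair_style hybrid there is no automatic mixing across sub-styles, so the 1-2 cross interaction must be assigned to a sub-style explicitly.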

What LAMMPS version is this with?
I do get the expected output with the latest patch release on a Linux machine.
Please see the first parts of the two outputs for comparison below.

Axel.

Version 1:
LAMMPS (10 Feb 2021)
using 1 OpenMP thread(s) per MPI task
Created orthogonal box = (0.0000000 0.0000000 0.0000000) to (50.000000 50.000000 50.000000)
1 by 2 by 2 MPI processor grid
Created 2657 atoms
create_atoms CPU = 0.001 seconds
Neighbor list info …
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 13.9
ghost atom cutoff = 13.9
binsize = 6.95, bins = 8 8 8
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair lj/cut, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Setting up Verlet run …
Unit style : metal
Current step : 0
Time step : 0.001
Per MPI rank memory allocation (min/avg/max) = 3.253 | 3.253 | 3.254 Mbytes
Step Temp Press PotEng KinEng Density
0 0 3.7149257e+15 7.2458575e+13 0 1.4100918
1000 53.509142 3151.6289 -124.19659 18.370488 1.4100918
2000 34.563119 -420.19496 -168.04907 11.866035 1.4100918
3000 30.300482 -869.89949 -172.84445 10.402608 1.4100918
4000 28.82212 -1039.7064 -174.67656 9.8950646 1.4100918
5000 28.97829 -1101.0879 -175.62518 9.9486801 1.4100918
Loop time of 15.8173 on 4 procs for 5000 steps with 2657 atoms

Version 2:
LAMMPS (10 Feb 2021)
using 1 OpenMP thread(s) per MPI task
Created orthogonal box = (0.0000000 0.0000000 0.0000000) to (50.000000 50.000000 50.000000)
1 by 2 by 2 MPI processor grid
Reading eam potential file Au_u3.eam with DATE: 2007-06-11
Created 2657 atoms
create_atoms CPU = 0.001 seconds
Neighbor list info …
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 13.9
ghost atom cutoff = 13.9
binsize = 6.95, bins = 8 8 8
3 neighbor lists, perpetual/occasional/extra = 3 0 0
(1) pair lj/cut, perpetual, skip from (3)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(2) pair eam, perpetual, skip from (3)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(3) neighbor class addition, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Setting up Verlet run …
Unit style : metal
Current step : 0
Time step : 0.001
Per MPI rank memory allocation (min/avg/max) = 4.765 | 4.766 | 4.767 Mbytes
Step Temp Press PotEng KinEng Density
0 0 3.7149257e+15 7.2458575e+13 0 1.4100918
1000 53.509142 3151.6289 -124.19659 18.370488 1.4100918
2000 34.563119 -420.19496 -168.04907 11.866035 1.4100918
3000 30.300482 -869.89949 -172.84445 10.402608 1.4100918
4000 28.82212 -1039.7064 -174.67656 9.8950646 1.4100918
5000 28.97829 -1101.0879 -175.62518 9.9486801 1.4100918
Loop time of 17.0233 on 4 procs for 5000 steps with 2657 atoms

Axel Kohlmeyer, thank you for the fast answer! My LAMMPS version is from 24 January 2020. I will try to update and report the results. Thanks for the help!

Hello, Axel Kohlmeyer! I updated LAMMPS and tracked down the problem. It was not the LAMMPS version but the way I launched the run with the GPU accelerator.
I ran like this (and used lj/cut/gpu for the pair style):

After updating I still had the same problem with differing results. I was able to fix it by changing the launch command (mpirun -np 4 ~/lammps/src/lmp_mpi -in Hybrid_problem.in) and removing the "gpu" suffix after "lj/cut". Now everything works correctly. Can I still use GPU acceleration for the Lennard-Jones interaction within the hybrid potential?
Thanks again for your help!

please try:

mpirun -np 4 ~/lammps/src/lmp_mpi -sf gpu -pk gpu 1 neigh no -in Hybrid_problem.in

Problem solved, thanks a lot!

Well, this is only a workaround: it requests that the neighbor lists be built on the CPU rather than on the GPU.
It is supposed to work without that setting, and I am still narrowing down the cause. With the latest development version (which has much improved and modernized GPU support), I have been able to use a different workaround and construct the neighbor lists on the GPU (and thus run even faster).

I will try to work with the GPU package developers to have this issue fully resolved in the next patch release (due mid-to-late March).

Axel.