Two GPU jobs on the same node

Dear LAMMPS users,
I am trying to run two simultaneous jobs using LAMMPS compiled with the GPU package. My system has 2 M2090 cards and 12 CPU cores. To run the jobs separately, I have added the following lines to the input files:
package gpu force/neigh 0 0 1 (job1)
package gpu force/neigh 1 1 -1 (job2)
As long as only one job is running, everything seems to work fine; nvidia-smi shows about 90% GPU utilization. But as soon as I submit the second job, both jobs stop using the GPUs (utilization drops to 0% on both cards) and performance becomes very low.
Is there a way to avoid this, or can I run only one job at a time on a node?
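(For reference, a minimal sketch of how two such jobs would typically be launched as two separate runs on one 12-core node; the executable name, input file names, and the 6+6 core split below are placeholders, not taken from the actual setup.)

mpirun -np 6 lmp_gpu -in in.job1 > log.job1 &   # input contains: package gpu force/neigh 0 0 1
mpirun -np 6 lmp_gpu -in in.job2 > log.job2 &   # input contains: package gpu force/neigh 1 1 -1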
Thanks.

Mike might be able to answer whether this is even possible.
I assume you are using the -partition switch in LAMMPS
to set up 2 independent jobs?
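(For reference, the -partition switch splits a single MPI run into independent partitions; the executable and script names below are placeholders, only the switch itself is from the thread.)

mpirun -np 12 lmp_gpu -partition 2x6 -in in.script   # 2 partitions of 6 MPI tasks each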

The GPU can only be used by one core at a time, so at best
the 2 jobs will share the GPU back and forth.

Steve

I think what you are trying to do should work; another user on the list was able to do this successfully after we fixed an indexing issue. Are you running each job with 6 MPI processes?
Try running the second job with

package gpu force/neigh 1 1 1

instead. If this doesn't work, can you send the output from nvc_get_devices? Thanks. - Mike
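(With that change, the two input files would differ only in the package line, roughly as follows; both lines are the settings already quoted in this thread.)

package gpu force/neigh 0 0 1   # job 1: GPU 0, fixed split
package gpu force/neigh 1 1 1   # job 2: GPU 1, fixed split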