Hi all,
I recently got LAMMPS to work on GPUs and wanted to test one of my systems to see how much of a performance upgrade I would get. However, when I try to run the simulation on the GPU, I get the following error.
I have little to no knowledge about GPU errors. Could someone please help me understand why this error occurs and how to solve the problem?
I have attached all the input files in the post. cpu.out
is the output file I get when I run the simulation on CPUs. I want to test how fast the GPU is compared to this CPU run.
We have an RTX 3090 GPU:
Device 0: NVIDIA GeForce RTX 3090, 82 CUs, 23/24 GB, 1.7 GHZ (Mixed Precision)
Error:
ERROR on proc 0: Insufficient memory on accelerator (src/GPU/pair_lj_cut_gpu.cpp:110)
Last command: minimize 0.0 0.0 1000 10000
Abort(1) on node 0 (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
Cuda driver error 4 in call at file '/home/vmahajan/softwares/lammps-gpu/lib/gpu/geryon/nvd_timer.h' in line 98.
Abort(-1) on node 0 (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
Cuda driver error 4 in call at file '/home/vmahajan/softwares/lammps-gpu/lib/gpu/geryon/nvd_timer.h' in line 98.
Abort(-1) on node 0 (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
"slurm-54484.out" 114L, 4856B
Link to the input files: test_gpu - Google Drive
Thank you.