Hi everyone,
I ran into an interesting issue with GPU-accelerated simulation.
I have access to an NVIDIA GTX 1660 SUPER on a Windows 10 machine. Following the guide, CUDA was linked to LAMMPS. Running ocl_get_devices.exe reports the following GPU information:
Found 1 platform(s).
Using platform: NVIDIA Corporation NVIDIA CUDA OpenCL 1.2 CUDA 11.0.197
Platform 0:
Device 0: "GeForce GTX 1660 SUPER"
Type of device: GPU
Double precision support: Yes
Total amount of global memory: 6 GB
Number of compute units/multiprocessors: 22
Total amount of constant memory: 65536 bytes
Total amount of local/shared memory per block: 49152 bytes
Maximum group size (# of threads per block) 1024
Maximum item sizes (# threads for each dim) 1024 x 1024 x 64
Clock rate: 1.785 GHz
ECC support: No
Device fission into equal partitions: No
Device fission by counts: No
Device fission by affinity: No
Maximum subdevices from fission: 1
It seems the device is recognized by LAMMPS; my version is LAMMPS 64-bit 15Apr2020-MPI.
However, when I test the device by running the in.eam input located in the Benchmarks folder, some errors occur:
WARNING on proc 0: Cannot open log.lammps for writing (…/lammps.cpp:407)
LAMMPS (15 Apr 2020)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (…/comm.cpp:94)
using 1 OpenMP thread(s) per MPI task
Lattice spacing in x,y,z = 3.615 3.615 3.615
Created orthogonal box = (0 0 0) to (72.3 72.3 72.3)
1 by 2 by 2 MPI processor grid
Created 32000 atoms
create_atoms CPU = 0.0006229 secs
Reading potential file Cu_u3.eam with DATE: 2007-06-11
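For context, the run was launched roughly as below. The exact paths and executable name are placeholders for my setup, not verbatim; -sf gpu and -pk gpu are the standard LAMMPS command-line switches for the GPU package:

```shell
# Sketch of the launch command (paths/executable name are placeholders).
# 4 MPI ranks matches the "1 by 2 by 2" processor grid in the output above;
# -sf gpu appends the gpu suffix to supported styles, -pk gpu 1 selects one GPU.
mpiexec -np 4 lmp -sf gpu -pk gpu 1 -in in.eam
```

The WARNING about log.lammps suggests the working directory may not be writable, but the run appears to continue past it.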