Cuda package error

Hi LAMMPS users,

I encountered an error while using the CUDA package. Do you have any suggestions? Thanks in advance.

CUDA: VerletCuda::setup: Allocate memory on device for maximum of 64644 atoms…

CUDA: Using precision: Global: 8 X: 8 V: 8 F: 8 PPPM: 8

Setting up run …

CUDA: VerletCuda::setup: Upload data…

CUDA: Total Device Memory usage post setup: 418.695312 MB

Memory usage per processor = 53.9738 Mbytes

Step Temp TotEng Dt Elapsed f_4

0 299.58522 -268679.44 0.001 0 0

WARNING: # CUDA: You asked for a Verlet integration using Cuda, but several fixes have not yet been ported to Cuda.

This can cause a severe speed penalty due to frequent data synchronization between host and GPU. (verlet_cuda.cpp:553)

[mez508:27117] *** Process received signal ***

[mez508:27117] Signal: Segmentation fault (11)

[mez508:27117] Signal code: Invalid permissions (2)

[mez508:27117] Failing at address: 0x3ae1f0200

[mez508:27117] [ 0] /lib64/libpthread.so.0() [0x3c5760f4a0]

[mez508:27117] [ 1] lmp_linux(_ZN9LAMMPS_NS13ComputePEAtom19unpack_reverse_commEiPiPd+0x23) [0x6219e1]

[mez508:27117] [ 2] lmp_linux(_ZN9LAMMPS_NS4Comm20reverse_comm_computeEPNS_7ComputeE+0x132) [0x607f2a]

[mez508:27117] [ 3] lmp_linux(_ZN9LAMMPS_NS13ComputePEAtom15compute_peratomEv+0x2ed) [0x621d31]

[mez508:27117] [ 4] lmp_linux(_ZN9LAMMPS_NS10FixAveAtom11end_of_stepEv+0x27d) [0x6c00af]

[mez508:27117] [ 5] lmp_linux(_ZN9LAMMPS_NS10ModifyCuda11end_of_stepEv+0x153) [0x7e26dd]

[mez508:27117] [ 6] lmp_linux(_ZN9LAMMPS_NS10VerletCuda3runEi+0x1534) [0xb4c0e4]

[mez508:27117] [ 7] lmp_linux(_ZN9LAMMPS_NS3Run7commandEiPPc+0x752) [0xb25224]

[mez508:27117] [ 8] lmp_linux(_ZN9LAMMPS_NS5Input15execute_commandEv+0xe38) [0x7be24e]

[mez508:27117] [ 9] lmp_linux(_ZN9LAMMPS_NS5Input4fileEv+0x28b) [0x7be75f]

[mez508:27117] [10] lmp_linux(main+0x5b) [0x7c8fde]

[mez508:27117] [11] /lib64/libc.so.6(__libc_start_main+0xfd) [0x33a121ecdd]

[mez508:27117] [12] lmp_linux() [0x574089]

[mez508:27117] *** End of error message ***

Segmentation fault (core dumped)
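
Reading the backtrace, the failure is inside ComputePEAtom::unpack_reverse_comm, reached from FixAveAtom::end_of_step via ModifyCuda, so it looks tied to the per-atom potential-energy reverse communication that an ave/atom fix triggers. If it helps, a stripped-down combination along these lines should exercise the same code path (my untested guess, not a verified reproducer):

compute pe all pe/atom
fix pe1 all ave/atom 10 100 1000 c_pe
run 1000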

Best Regards

Richard

If you post a (simple) input script that illustrates
the problem, Christian can take a look at it.

Steve


Hi Steve,

Thanks for your suggestion. It would be very nice if Christian could help.
The problem is that the simulation can run although there is the WARNING (#
CUDA: You asked for a Verlet integration using Cuda, but several fixes have
not yet been ported to Cuda.) but it will stop after running sometime, maybe
1000000 timesteps or longer.
We use a -sf cuda command to start gpu. The input is as follow:
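For reference, the full launch command is along these lines (the binary name and input-file name here are guesses):

mpirun -np 1 ./lmp_linux -c on -sf cuda -in in.si

The input script is: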

#initialization
units metal
boundary p p p
dimension 3
echo screen
newton on
atom_style atomic
log log.si_temp

#Atom definitions ####################################
read_data data.s4_238

#force field setting
pair_style sw
pair_coeff * * Si.sw Si
neighbor 1.0 bin
neigh_modify delay 0 every 20 check no
mass 1 28.0
thermo 700

#npt equilibration
velocity all create 300 429349 dist gaussian
fix 1 all npt temp 300 300 10 iso 0 0 100

timestep 0.001
run 300000

#nve equilibration
unfix 1
fix 2 all nve
restart 400000 tmp.restart
run 100000
restart 0

#muller-plathe thermal conductivity

#Temp gradient
compute ke all ke/atom
compute pe all pe/atom
compute cord all property/atom xu yu zu
variable temp atom c_ke/(1.5*8.617*10^-5)
reset_timestep 0

#Flux
fix 4 all thermal/conductivity 700 z 100 swap 1
#dump 1 all xyz 100 dump.xyz

thermo_style custom step temp etotal dt elapsed f_4

log log.si_flux

fix 5 all ave/spatial 10 100000 1000000 z lower 0.01 v_temp file tmp2.profile units reduced

#Output every 1000000 average ke pe temp
fix pu1 all ave/atom 10 100000 1000000 c_cord[1] c_cord[2] c_cord[3]
fix ke1 all ave/atom 10 100000 1000000 c_ke
fix pe1 all ave/atom 10 100000 1000000 c_pe
fix temp1 all ave/atom 10 100000 1000000 v_temp
dump ke1 all custom 1000000 dump.ke1 id type f_pu1[1] f_pu1[2] f_pu1[3] f_ke1 f_pe1 f_temp1
dump_modify ke1 sort id

restart 1000000 tmp.restart
run 4000000
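
For completeness, the analysis we plan on top of this script is the standard Muller-Plathe bookkeeping (as on the fix thermal/conductivity doc page). The per-atom temperature variable above just inverts KE_i = (3/2)*kB*T_i, with kB = 8.617e-5 eV/K in metal units:

T_i = c_ke / (1.5 * 8.617e-5)

The conductivity then follows from the cumulative swapped kinetic energy f_4 and the slope of the averaged temperature profile in tmp2.profile:

kappa = ( f_4 / (2 * t_elapsed * A) ) / (dT/dz)

where A is the box cross-section perpendicular to z and the factor 2 accounts for the heat flux flowing in both directions across the periodic box.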

I'm CCing Christian on this message, to see if he has ideas.

Steve