Are simulation results via GPU acceleration accurate and acceptable?

Dear all:
I have configured LAMMPS on my server with the GPU package.
The performance is unexpectedly good: it cut the run time in half.
But I am a little confused that the results show some small deviations.

So I want to ask: in a long simulation (units real, timestep 0.5, run 5000000+), is the final result reliable?

My hardware and configuration:
i7-6700K, GTX 1070, Ubuntu 18.04 LTS
-D GPU_API=cuda -D GPU_PREC=mixed, everything else at the defaults
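
For reference, the full build was along these lines (a sketch; I assume
a build directory next to the cmake directory of the LAMMPS sources):

    # configure and build LAMMPS with the GPU package, CUDA, mixed precision
    cmake -D PKG_GPU=on -D GPU_API=cuda -D GPU_PREC=mixed ../cmake
    cmake --build .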

Thanks,
Roy

> Dear all:
> I have configured LAMMPS on my server with the GPU package.
> The performance is unexpectedly good: it cut the run time in half.
> But I am a little confused that the results show some small deviations.
> So I want to ask: in a long simulation (units real, timestep 0.5, run 5000000+), is the final result reliable?

This is not a yes/no question. The error varies, and it depends on
what you are computing and how large the deviation is. It is not so
much a GPU-specific issue as a floating-point precision issue: some
properties are more sensitive to doing calculations at lower
precision, while for others the errors from using a lower precision
can mostly cancel.
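
One way to gauge how sensitive your particular system is would be to
rebuild the GPU package in full double precision and compare, say, the
thermo output of a short run against your mixed-precision binary.
Roughly like this (a sketch; in.test is a placeholder for your input):

    # rebuild the GPU package in full double precision
    cmake -D PKG_GPU=on -D GPU_API=cuda -D GPU_PREC=double ../cmake
    cmake --build .
    # run the same input with each binary and compare the thermo columns
    ./lmp -sf gpu -pk gpu 1 -in in.test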

axel.

Well, I just looked through some papers and it is not as simple as I used to think. I will try some other samples to verify my results. One last question: is there any principle for judging whether a built-in method (like pair_style lj/cut/gpu etc.) can be used with the GPU, or do I have to know the exact implementation of each command?
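
For example, if I understand the suffix command correctly, something
like this would pick the /gpu variants automatically when they exist
(a fragment; the pair settings are placeholders):

    package gpu 1            # use 1 GPU; must appear early in the script
    suffix gpu               # append /gpu to styles that support it
    units real
    timestep 0.5
    pair_style lj/cut 10.0   # should become lj/cut/gpu automatically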