I have used fix viscosity with a Ti-Al eam/alloy potential (pure Ti only) and with the Ni_u3.eam file that ships with LAMMPS, both at around 2000 K.
With Ti using NVE at equilibrium (no fix viscosity), I see an average temperature drop of only 7e-5 K/ps over a 5 ns simulation. With fix viscosity, no thermostat, and one pair of momenta swapped every 50 steps, the average drop is around 5e-3 K/ps. Reducing the swap frequency to once every 500 steps gives an average drop of 5e-4 K/ps, so the energy loss is clearly tied to the swapping.
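For reference, a minimal sketch of the kind of input used for the Ti runs above (file names, seed, and slab count are illustrative, not the exact input):

```
# Sketch: NVE + fix viscosity, no thermostat (illustrative values)
units        metal
atom_style   atomic
read_data    ti.data                # hypothetical data file

pair_style   eam/alloy
pair_coeff   * * TiAl.eam.alloy Ti  # hypothetical potential file name

velocity     all create 2000.0 12345
fix          1 all nve
# Mueller-Plathe momentum swap: 1 pair every 50 steps,
# x-momentum exchanged along a z gradient, 20 slabs
fix          2 all viscosity 50 x z 20

thermo       1000
timestep     0.001                  # 1 fs
run          5000000                # 5 ns
```

The swap count defaults to 1 pair per exchange, so only the interval (50 vs. 500) changes between the two Ti runs.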
With Ni, fix viscosity, no thermostat, and one pair of momenta swapped every 50 steps, I see an average temperature *increase* of around 5e-3 K/ps. That is about the same rate as Ti, but in the opposite direction; the sign might just depend on the initial conditions.
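In case it matters how the drift rates were obtained: they come from a simple linear fit of temperature vs. time over the whole run, along these lines (the data here is synthetic, standing in for columns parsed from a LAMMPS log):

```python
import numpy as np

# Synthetic stand-in for thermo output: time in ps, temperature in K.
# In practice these columns would be parsed from the LAMMPS log file.
rng = np.random.default_rng(0)
time_ps = np.linspace(0.0, 5000.0, 1001)  # 5 ns trajectory
temp_K = 2000.0 - 5e-3 * time_ps + rng.normal(0.0, 2.0, time_ps.size)

# Least-squares linear fit; the slope is the average drift in K/ps.
slope, intercept = np.polyfit(time_ps, temp_K, 1)
print(f"average temperature drift: {slope:.2e} K/ps")
```

A fit over the full trajectory averages out the instantaneous temperature fluctuations, which at 2000 K are much larger than the per-picosecond drift itself.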
The original paper on the method claims that it conserves energy. Should it conserve energy to the same degree as an equilibrium simulation, or is the drift I am seeing normal?