I am working on creating nearly cubic BCC or FCC blocks of single metallic elements (e.g. Al, W, Cu, Pt) containing several thousand atoms, with triply periodic boundary conditions and with the x axis pointed along various crystallographic directions (e.g. [1 4 2]). Each block is first equilibrated under an NPT ensemble to a predetermined target temperature (say a 0.01 ps timestep, in metal units, for 5000 steps), with damping coefficients of 1.0 for both thermostat and barostat. All elements use eam/alloy potentials. The block is then deformed along the x axis at strain rates ranging from 0.001 to 0.1 per ps (again metal units), with timesteps ranging from 0.005 to 0.02 ps, for 5000 to 100000 timesteps. Throughout the straining, the thermostat keeps targeting the same temperature, and the NPT ensemble holds zero target stress on the two directions normal to the loading axis (so the box can relax laterally), both with unit damping coefficients. The goal is to observe the stress-strain-temperature behaviour. A minimal sketch of this setup follows.
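For concreteness, here is a minimal LAMMPS input sketch of what I described above. The lattice constant, the potential file Al99.eam.alloy, the 300 K target, and the y/z orient vectors are placeholders I picked for illustration; the y and z vectors just need to be integer vectors orthogonal to [1 4 2] and to each other.

```
# --- build a roughly cubic FCC Al block with x along [1 4 2] ---
units           metal
boundary        p p p
atom_style      atomic

lattice         fcc 4.05 orient x 1 4 2 orient y 2 -1 1 orient z 2 1 -3
region          box block 0 10 0 10 0 10
create_box      1 box
create_atoms    1 box

pair_style      eam/alloy
pair_coeff      * * Al99.eam.alloy Al

# --- NPT equilibration to the target temperature ---
velocity        all create 300.0 482748 mom yes rot yes dist gaussian
timestep        0.01                                  # ps (metal units)
fix             eq all npt temp 300.0 300.0 1.0 iso 0.0 0.0 1.0
run             5000
unfix           eq

# --- uniaxial straining along x, transverse directions held at zero stress ---
timestep        0.005                                 # ps
fix             pull  all deform 1 x erate 0.01 remap x
fix             relax all npt temp 300.0 300.0 1.0 y 0.0 0.0 1.0 z 0.0 0.0 1.0
run             50000
```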
I am using automatic neighbor-list rebuilding with a 2.0 Å skin distance (which also sets the ghost-atom communication cutoff to the force cutoff plus 2.0 Å). I plan to print thermodynamic data to the terminal every 100 to 200 steps. A sketch of these settings is below.
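In LAMMPS terms those settings would look roughly like the following; the strain variable is my own convenience addition for the output, using the standard trick of freezing the initial box length in a variable:

```
neighbor        2.0 bin                    # 2.0 Angstrom skin distance
neigh_modify    delay 0 every 1 check yes  # rebuild whenever an atom may have moved too far

# freeze the pre-deformation box length so strain can be reported
variable        tmp equal "lx"
variable        L0 equal ${tmp}
variable        strain equal "(lx - v_L0)/v_L0"

thermo          100
thermo_style    custom step temp press pxx pyy pzz v_strain
```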
Now my questions are:
- Are these damping coefficients and timesteps realistic for the interior of a large-enough single crystal, given that the temperature is 0.1 to 0.9 times the melting point (in kelvin) of each element? Do I need to change the damping coefficients or timesteps? What is a suitable tradeoff, as cited and benchmarked in reputable literature and benchmark files? (See the first sketch after this list.)
- Often I equilibrate once, store the state in a restart file, and run the deformations at various strain rates at the same temperature from that restart. Does this workflow erase or truncate any important system configuration in a way that reduces the realism of the data? (See the second sketch after this list.)
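Regarding the first question: the only generic guidance I have found so far is the rule of thumb in the LAMMPS fix nvt/npt documentation (Tdamp around 100 timesteps, Pdamp around 1000 timesteps), which for my timestep range would look like the sketch below; I would still like literature-benchmarked values rather than this heuristic.

```
# rule of thumb from the LAMMPS fix npt docs: Tdamp ~ 100*dt, Pdamp ~ 1000*dt
variable        dt    equal 0.005            # ps; one of my candidate timesteps
variable        tdamp equal 100*${dt}        # 0.5 ps thermostat damping
variable        pdamp equal 1000*${dt}       # 5.0 ps barostat damping

timestep        ${dt}
fix             relax all npt temp 300.0 300.0 ${tdamp} y 0.0 0.0 ${pdamp} z 0.0 0.0 ${pdamp}
```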
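Regarding the second question, my restart-based workflow is sketched below. One detail I already handle: eam/alloy does not write its tabulated potential data to binary restart files, so pair_style and pair_coeff have to be re-issued after read_restart, and all fixes have to be redefined (the file name equil_300K.restart is just a placeholder).

```
# --- equilibration script (run once per temperature) ---
fix             eq all npt temp 300.0 300.0 1.0 iso 0.0 0.0 1.0
run             5000
write_restart   equil_300K.restart

# --- deformation script (one per strain rate) ---
read_restart    equil_300K.restart
pair_style      eam/alloy                 # eam tables are not stored in the restart file
pair_coeff      * * Al99.eam.alloy Al
timestep        0.005
fix             pull  all deform 1 x erate 0.001 remap x
fix             relax all npt temp 300.0 300.0 1.0 y 0.0 0.0 1.0 z 0.0 0.0 1.0
run             100000
```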