Hi everyone! I’ve been trying to understand something that is probably really simple, but I’ve managed to hopelessly confuse myself by overthinking it. Please bear with me and my stream of consciousness:
When using the lattice command I set up a set of points in space. I do this by specifying a unit cell, in my case FCC with a scale factor (the lattice parameter) of 3.615 Angstrom, and I use the orient keyword to obtain a (111) surface. Throughout the rest of my script I work in lattice units.

After minimization I can determine the true lattice parameter, since the value given in the lattice command is only an approximation. So the effective lattice parameter changes, but as far as I’m aware the scale factor in the lattice command stays the same. At the end of the script I convert some values from lattice units to distance units using the xlat, ylat and zlat parameters. These, however, are derived from the lattice command and the “approximate” lattice constant I provided there. Since that lattice constant is only a scale factor, I assume my conversions should be sound, but what confuses me is that the scale factor does have distance units associated with it. I’m almost sure that using xlat, ylat and zlat for these conversions is correct, I just can’t wrap my mind around the logic of it.
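To make this concrete, here is a minimal sketch of the kind of setup I mean (the orientation vectors, box size and variable names are just illustrative, not my exact script):

    # minimal sketch: FCC Cu oriented so the z axis is along [111]
    units           metal
    lattice         fcc 3.615 orient x 1 -1 0 orient y 1 1 -2 orient z 1 1 1
    region          box block 0 10 0 10 0 10
    create_box      1 box
    create_atoms    1 box

    # as far as I understand, xlat/ylat/zlat are set by the lattice command
    # above, so they keep the 3.615-based spacings even after minimization
    # relaxes the box
    variable        xconv equal xlat    # lattice-to-distance conversion in x
    variable        zconv equal zlat    # lattice-to-distance conversion in z
    print           "one lattice unit: x = ${xconv} A, z = ${zconv} A"

(Pair style, minimization, etc. are left out here, since as far as I can tell they don’t affect what xlat/ylat/zlat report.)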
I hope I was able to convey the problem. I’m struggling to visualize it, which makes it hard to put into words as well. I hope I don’t seem too ignorant.
Thank you for taking the time to consider my question!