I was planning to implement the GGA+U correction scheme to study some compounds containing transition metals. To choose the U-values, I read the following page:
However, I was confused about one thing. The page states that they used the method outlined in Wang et al.'s work, and the reactions they analyzed to find the U-values are identical to those used by Wang et al. However, when looking at iron, the U-value listed on the page is 5.3 eV, as opposed to the roughly 4.0 eV obtained in the work of Wang et al. This seems like a surprisingly large difference, given that the same methods were used. A few other elements also have slightly different values, but those are in much better agreement.
Is there a reason for this large discrepancy in the Fe U-values?
Any assistance or clarification would be greatly appreciated.
Hi, sorry for the delay in replying.
The transition to the modern VASP input set was a bit before my time. Upon inspection of the VASP input set documentation, I believe the discrepancy comes from the difference in PAW potentials used between the old and new input sets. Note that the MIT input set (which corresponds to the one derived from the Wang et al. paper) uses a U value of 4.0 eV together with the standard Fe potential, whereas the modern MPRelaxSet uses the Fe_pv potential.
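For reference, applying a U value in VASP is controlled by the LDAU-family INCAR tags. Below is a minimal, illustrative INCAR fragment for a hypothetical Fe oxide with two species (Fe, O), using the 5.3 eV value mentioned above; all other tag choices here are assumptions for the sake of the sketch, not a statement of what either input set writes out:

```
LDAU     = .TRUE.
LDAUTYPE = 2          ! Dudarev scheme: only U - J enters
LDAUL    = 2 -1       ! U on Fe d states; -1 = no correction on O
LDAUU    = 5.3 0.0    ! U for Fe (4.0 eV in the Wang et al. / MIT set)
LDAUJ    = 0.0 0.0
LMAXMIX  = 4          ! mix d-channel on-site occupancies
```

Note that the order of values in LDAUL/LDAUU/LDAUJ must follow the species order in the POSCAR/POTCAR.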
Thank you very much for your reply, that clears things up for me. However, do you happen to know the reason that the Fe_pv potential is being used in the new input set? This seems to be at odds with the VASP manual, which recommends the standard Fe potential.
You can see abbreviated results of our PSP benchmarking study here. In the case of Fe, we chose the higher-electron PSP primarily because it reduces the estimated error with respect to experiment by around 0.15 eV per formula unit.
Okay, that makes sense. Thank you for the assistance!