Dear LAMMPS users,
Given the increasing popularity of LAMMPS and the resulting increase in questions here (we just cracked the barrier of 34,000 new topic posts), we have decided to create a specially trained bot, tentatively named LAMMPSbot, to assist with responding to forum questions. We expect this will help free up time that is much needed for developing and maintaining LAMMPS: so far, no AI has been shown to produce code that is sufficiently applicable and maintainable, nor is there sufficient interest from outside developers to join the LAMMPS core developer team and help with the effort.
The bot will be specifically trained to address questions for which similar posts already exist in the archives, and thus to respond to questions from people who don't read (or respect) the forum guidelines.
A second primary target for LAMMPSbot will be responding to posts that lack context and necessary information (also a topic in the forum guidelines). Unlike humans, LLM-based bots have no problem presenting their responses with confidence, and thus put the main burden of having a meaningful and helpful discussion on the original poster rather than on the responder, who would otherwise have to ask for (many) more details, often multiple times.
At the moment, the main limitation is our lack of access to suitably powerful GPUs to perform the training and to tweak the model settings, which is needed to keep LAMMPSbot from providing answers meant for other MD software packages and to keep its examples up to date with the current LAMMPS command syntax. Such GPUs are extremely difficult to procure these days because a) they are insanely expensive to purchase and to operate (because of the power they consume), and it is very difficult to get funding for software maintenance rather than for new research, and b) the big players in the field (Facebook, Google, Microsoft, etc.) are buying these GPUs like crazy and drying up the supply (they have deep pockets filled with lots of $$$ that we don't have, and they don't mind filling Nvidia's pockets with them).
If you have some spare Nvidia H100 GPUs to give to the LAMMPS developers, or want to donate money toward buying equipment, you can send an e-mail to [email protected] announcing your intentions. We will get back to you with details such as shipping addresses and preferred accounts to send the money to. We have set up one account in Switzerland and one in the Cayman Islands for that purpose. At the moment, most of the LAMMPS servers for testing, web pages, and downloads are hosted at Temple University in Philadelphia. The good news is that your donations therefore won't be subject to the United States export ban on the most potent AI/GPU hardware.
If you are a researcher working with LLMs and want to contribute effort to improving LAMMPSbot, we would also like to hear from you.