AI-trained LAMMPSbot soon to help LAMMPS developers respond to forum questions

Dear LAMMPS users,

Given the increasing popularity of LAMMPS and the resulting increase in questions here (we just cracked the 34,000 new topics barrier), we have decided to create a specially trained bot, tentatively named LAMMPSbot, to assist with responding to forum questions. We expect that this will help free up time that is much needed for developing and maintaining LAMMPS, since AI has not yet been shown to produce code that is sufficiently applicable and maintainable, and there is not sufficient interest from outside developers to join the LAMMPS core developer team and help with the effort.

The bot will be specifically trained to address questions for which similar posts already exist in the archives, and thus to respond to people who don’t read (or respect) the forum guidelines.

A second primary target for LAMMPSbot will be responding to posts that lack context and necessary information (also a topic in the forum guidelines). Unlike humans, LLM-based bots won’t have any problem presenting their responses with confidence, and can thus put the main burden of having a meaningful and helpful discussion on the original poster instead of on the responder, who otherwise has to ask for (many) more details (often multiple times).

At the moment, the main limitation is lack of access to suitably powerful GPUs to perform the training, to tweak the model settings so that LAMMPSbot does not give answers meant for other MD software packages, and to update its examples to the current LAMMPS command syntax. Such GPUs are extremely difficult to procure these days because a) they are insanely expensive to purchase and operate (because of the power they consume), and it is very difficult to get funding for software maintenance instead of new research, and b) the big players in the field (Facebook, Google, Microsoft, etc.) are buying these GPUs like crazy and thus drying up the supply (they have deep pockets filled with lots of $$s that we don’t have, and they don’t mind filling Nvidia’s pockets with them).

If you have some spare Nvidia H100 GPUs to give to the LAMMPS developers, or want to donate some money for buying equipment, you can send an e-mail to [email protected] announcing your intentions. We will get back to you on details such as shipping addresses and preferred accounts to send the money to. We have created one account in Switzerland and one in the Cayman Islands for that purpose. At the moment, most of the LAMMPS servers for testing, web pages, and downloads are hosted at Temple University in Philadelphia. The good news about this is that you won’t be subject to the United States export ban on the most potent AI/GPU hardware.

If you are a researcher working with LLMs and want to contribute effort to improving LAMMPSbot, we would also like to hear from you.


:joy: :joy: :joy: Nice April Fools!!! Can we at least start with making a template for posts, like we have for GitHub issues? :grinning:

Feel free to contact the MatSci.org admins about setting this up. I don’t know how to do it for a setup like this one, where multiple forums are combined into one by using categories.

This sounds really useful, I support it.

Do you specifically need H100s? Would A100s be enough, or is there a specific feature of compute capability 9.0 you need?

@alphataubio I don’t think the A100 has enough compute power for the LLM to eventually become sentient. A self-aware LAMMPSbot could give better answers to users, so we need at least H100s…

Did I just take the bait on an April Fools’ joke?


Seems like it. :wink:

That said, we still take donations (cash or hardware to provide developers with proper gear for development and testing/debugging) and we still need helping hands.

Only we respect our users too much to delegate interacting with them to an AI. Though I had to learn today that some users depend on ChatGPT to formulate their questions, which may explain why they don’t make much sense to me, or why they change their story about what they really want to know once I start asking questions.

If a question is written by ChatGPT, you should answer it with LAMMPSbot.

It would be like Spy vs. Spy

Too dangerous. Could create the equivalent of a “finger storm™”.