I want to take this opportunity to announce the ECO-LAMMPS project, which aims to turn LAMMPS into an environmentally responsible software project.
In particular we plan to introduce the following changes for ECO-LAMMPS:
add an estimate of the energy consumed (based on information gathered from the /sys/ filesystem and lm_sensors) to the existing summary of CPU time and memory usage; see the sketch after this list.
print a warning when using particularly energy-intensive functionality in LAMMPS, such as slow pair styles
print a warning when running LAMMPS inefficiently, e.g., with load imbalance or with features known to perform poorly
include a URL in the output, derived from the estimated energy consumption of the simulation, that offers to buy carbon offset credits for that amount of energy.
add a command that lets you input the percentage of “green” or “zero-offset” energy in the energy mix used by your computing facility
remove the current automated testing (which is very reliable, but also very redundant) and replace it with AI-based tools like GitHub Copilot, running them only once before merging.
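As a concrete illustration of the first item, here is a minimal sketch of what reading such counters could look like. It assumes a Linux machine that exposes Intel RAPL counters under /sys/class/powercap; the path, the single-package assumption, and the lack of wrap-around handling are simplifications for illustration, and none of this exists in LAMMPS today:

#include <fstream>
#include <iostream>
#include <string>

// Read one cumulative energy counter (in microjoules) from sysfs.
static unsigned long long read_energy_uj(const std::string &path) {
    std::ifstream f(path);
    unsigned long long value = 0;
    f >> value;
    return value;
}

int main() {
    // Package 0 RAPL counter; other domains (cores, DRAM) expose similar files.
    const std::string counter = "/sys/class/powercap/intel-rapl:0/energy_uj";
    unsigned long long before = read_energy_uj(counter);
    // ... run the simulation (or a timestep loop) here ...
    unsigned long long after = read_energy_uj(counter);
    // Convert microjoules to joules; a real implementation would also handle
    // counter wrap-around via max_energy_range_uj.
    std::cout << "Estimated package energy: " << (after - before) / 1.0e6 << " J\n";
    return 0;
}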
I am eager to hear comments and additional suggestions for making LAMMPS more friendly to the environment.
It is difficult to do when you have only one day a year for such things.
But seriously, after significant experimentation with various LLM bots, my current impression is that interacting with such a bot is not unlike talking to a beginning graduate student. We need to wait until the technology has advanced at least to the postdoc level. If it can. Who would want answers from an overconfident but inexperienced student?
I propose that we implement a new integrator, fix chatgpt, to take advantage of AI’s incredible computing capabilities. Vibe coding produced the prototype attached below.
In the spirit of claiming tangible benefits for obscure theoretical refinements, it is worth noting that I tried using fix chatgpt on a cluster and found that a trajectory was produced using 99.999% fewer CPU cycles in the same walltime, representing huge energy and water savings!
#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>
#include <curl/curl.h>
#include <json/json.h> // Requires the JSONCPP library
const std::string API_KEY = "your_openai_api_key";
const std::string MODEL = "gpt-4o"; // Latest model
struct Atom {
    int id;
    double x, y, z;
    double vx, vy, vz;
};
// Read LAMMPS dump file and extract atomic positions and velocities
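// Assumes the dump was written with column order id x y z vx vy vz,
// e.g. via: dump 1 all custom 100 dump.lammpstrj id x y z vx vy vz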
std::vector<Atom> read_lammps_dump(const std::string &filename) {
    std::ifstream file(filename);
    std::vector<Atom> atoms;
    std::string line;
    bool read_atoms = false;
    while (std::getline(file, line)) {
        if (line.find("ITEM: ATOMS") != std::string::npos) {
            read_atoms = true;
            continue;
        }
        if (read_atoms) {
            std::istringstream iss(line);
            Atom atom;
            if (iss >> atom.id >> atom.x >> atom.y >> atom.z >> atom.vx >> atom.vy >> atom.vz) {
                atoms.push_back(atom);
            }
        }
    }
    return atoms;
}
// Create a well-structured prompt for ChatGPT
std::string create_prompt(const std::vector<Atom> &atoms) {
    std::ostringstream prompt;
    prompt << "You are a molecular dynamics AI specialized in LAMMPS simulations. "
           << "Your task is to predict the atomic configuration at the next timestep based on Newtonian mechanics.\n"
           << "Conservation of momentum and realistic interactions should be considered.\n\n"
           << "The current atomic configuration is:\n"
           << "ID X Y Z VX VY VZ\n";
    for (const auto &atom : atoms) {
        prompt << atom.id << " " << atom.x << " " << atom.y << " " << atom.z << " "
               << atom.vx << " " << atom.vy << " " << atom.vz << "\n";
    }
    prompt << "\nPredict the new positions and velocities assuming small timestep Δt = 1 fs.\n"
           << "Format the response as a table with columns: ID X Y Z VX VY VZ.\n";
    return prompt.str();
}
// Callback function for handling API response
size_t write_callback(void *ptr, size_t size, size_t nmemb, std::string *data) {
    data->append((char *)ptr, size * nmemb);
    return size * nmemb;
}
// Send request to OpenAI API
std::string query_openai(const std::string &prompt) {
    CURL *curl;
    CURLcode res;
    std::string response;
    curl_global_init(CURL_GLOBAL_ALL);
    curl = curl_easy_init();
    if (curl) {
        // Build the request body with JSONCPP so the prompt (which contains
        // newlines and quotes) is properly escaped.
        Json::Value request;
        request["model"] = MODEL;
        request["temperature"] = 0.7;
        Json::Value message;
        message["role"] = "user";
        message["content"] = prompt;
        request["messages"].append(message);
        Json::StreamWriterBuilder writer;
        const std::string post_fields = Json::writeString(writer, request);
        struct curl_slist *headers = nullptr;
        headers = curl_slist_append(headers, "Content-Type: application/json");
        headers = curl_slist_append(headers, ("Authorization: Bearer " + API_KEY).c_str());
        curl_easy_setopt(curl, CURLOPT_URL, "https://api.openai.com/v1/chat/completions");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, post_fields.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_callback);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
        res = curl_easy_perform(curl);
        if (res != CURLE_OK) {
            std::cerr << "cURL request failed: " << curl_easy_strerror(res) << std::endl;
        }
        curl_easy_cleanup(curl);
        curl_slist_free_all(headers);
    }
    curl_global_cleanup();
    return response;
}
// Main function
int main() {
    std::string filename = "dump.lammpstrj"; // Change to your LAMMPS dump file
    std::vector<Atom> atoms = read_lammps_dump(filename);
    if (atoms.empty()) {
        std::cerr << "Failed to read LAMMPS dump file!" << std::endl;
        return 1;
    }
    std::string prompt = create_prompt(atoms);
    std::string response = query_openai(prompt);
    // Parse the API response and print only the assistant's reply
    Json::Value reply;
    Json::CharReaderBuilder reader;
    std::string errors;
    std::istringstream stream(response);
    if (!Json::parseFromStream(reader, stream, &reply, &errors)) {
        std::cerr << "Failed to parse API response: " << errors << std::endl;
        return 1;
    }
    std::cout << "Predicted next timestep configuration:\n"
              << reply["choices"][0]["message"]["content"].asString() << std::endl;
    return 0;
}
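For reference, the prototype depends only on libcurl and JSONCPP. Assuming both are installed (on some Linux distributions the JSONCPP headers live under /usr/include/jsoncpp, hence the extra include path) and the source file is named fix_chatgpt.cpp (a hypothetical name), it could be built with something like:

g++ -std=c++17 -I/usr/include/jsoncpp fix_chatgpt.cpp -o fix_chatgpt -lcurl -ljsoncpp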