I have been working on an artificial intelligence that aims to communicate with pictures. The idea is to simulate presence at the level of what you feel looking into a dog's eyes, a modified Turing Test, if you will.
Like in The Wizard of Oz or Frankenstein, I am looking for parts to make my character more real, and I'm thinking a molecular dynamics simulation might serve as an analog to a heart or some other gurgling organ. For this, I would need to pull data from a live simulation and, ideally, introduce forces derived from users' clicks on the web page.
How hard would this be to do with LAMMPS?
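From a first read of the docs, LAMMPS's `fix external pf/callback` looks designed for exactly this kind of coupling: a driver program registers a callback that adds per-atom forces as the run proceeds. A minimal input-script sketch of what I have in mind (the data file name, group, and fix ID are placeholders I made up):

```
# minimal LAMMPS input meant to be driven from an external program
units       real
atom_style  full
read_data   dna.data          # placeholder system

# a callback registered by the driver supplies extra per-atom
# forces; Ncall = 1 and Napply = 1 means every timestep
fix drive all external pf/callback 1 1

# the driver then issues short runs repeatedly to keep things "live":
# run 1 pre no post no
```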
For example, I used to maintain AMBER (which has open-sourced its more vanilla MD program, sander), so I can imagine I'd have the code read from a socket or a file at each step, and I'd perhaps want to build in geometric analysis to quickly get, say, the instantaneous angle between two DNA bases. Then I'd need to figure out how to map the clicks and drags that users make on the (totally unrelated) photos on my web page to reasonable forces in this backroom simulation. (Naturally I'd want to watch the dynamics as well, perhaps charging high prices for a look behind the scenes :-).)
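The angle part, at least, is plain vector algebra once coordinates can be pulled out each step; a self-contained sketch, with the atom positions made up for illustration:

```python
import math

def base_vector(c1, c2):
    """Vector from atom position c1 to c2 (each a 3-tuple pulled from the current frame)."""
    return tuple(b - a for a, b in zip(c1, c2))

def angle_between(v1, v2):
    """Instantaneous angle in degrees between two base vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    # clamp to [-1, 1] so floating-point roundoff can't push acos out of domain
    cosang = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cosang))

# e.g. two hypothetical glycosidic-bond vectors from the current frame:
v1 = base_vector((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
v2 = base_vector((0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(angle_between(v1, v2))  # ≈ 90.0
```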
Since this is, in effect, a general case of haptic feedback, I'm hoping I won't have to do it with AMBER, which is built around batch execution. I don't expect to run big simulations, so ease of implementation outweighs speed.
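To make the mapping concrete, here is roughly what I have in mind for the click-to-force function and the driver loop, using LAMMPS's Python module. The force scale, cap, group and fix names, and input script are all invented placeholders, and the LAMMPS-specific part is guarded so the mapping logic stands on its own:

```python
import math

CLICK_FORCE_SCALE = 5.0   # force units per pixel of drag; invented constant
MAX_FORCE = 50.0          # cap so a wild drag can't blow up the integrator; invented

def click_to_force(dx_px, dy_px):
    """Map a 2-D drag on the web page to a 3-D force vector.

    The screen plane is arbitrarily identified with the simulation's x-y
    plane; any projection would do, since the photos users see are
    unrelated to the molecular system anyway.
    """
    fx = CLICK_FORCE_SCALE * dx_px
    fy = CLICK_FORCE_SCALE * dy_px
    mag = math.hypot(fx, fy)
    if mag > MAX_FORCE:
        fx, fy = fx * MAX_FORCE / mag, fy * MAX_FORCE / mag
    return (fx, fy, 0.0)

# --- driver loop, only meaningful where the LAMMPS Python module exists ---
try:
    from lammps import lammps
except ImportError:
    lammps = None

def drive(clicks, nsteps=100):
    """Step a live simulation, applying one queued (dx, dy) click per step."""
    lmp = lammps()
    lmp.file("in.presence")            # hypothetical input script
    pushed = False
    for _ in range(nsteps):
        if clicks:
            fx, fy, fz = click_to_force(*clicks.pop(0))
            if pushed:
                lmp.command("unfix push")
            # "bases" is a hypothetical atom group defined in the input script
            lmp.command(f"fix push bases addforce {fx} {fy} {fz}")
            pushed = True
        lmp.command("run 1 pre no post no")  # one step at a time keeps it "live"
```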