This is very weird. Why would it be normal in LAMMPS but abnormal in Python?
Problem 2
I used the following script for the assignment:
variable cur_step loop ${n_step}
label loop_start
run 1
variable FP delete
variable FP atomfile ${case_path}/csv/force_${cur_step}.csv
next cur_step
jump SELF loop_start
It ran as expected via the command lmp -in, but it was far too slow. I suspect the loop limits the performance, since the per-atom values are deleted and then re-read at every timestep. Is there a faster, more efficient way to implement this?
It is not weird but the consequence of documented behavior.
In general, looping in LAMMPS is based on reading the input from a file.
Please read the documentation of the “jump” command. The “SELF” keyword is a special case and only works in some situations. The normal version of this command requires providing a file name; the “jump” command then closes the current file, opens the provided file, and reads it line by line until it encounters the “label loop_start” line.
Expecting this to work for a block of strings is not logical.
That said, if you upgrade to the 29 August 2024 version, this block of code should work, because extra code has been added to handle the special case of a self-contained block of strings that uses the “SELF” pseudo file name in the “jump” command.
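For reference, the file-based form of the loop described above could look like the sketch below, assuming the loop lives in its own input file (the file name in.loop is illustrative):

```
# in.loop -- hypothetical separate file holding the loop body
variable cur_step loop ${n_step}
label loop_start
run 1
variable FP delete
variable FP atomfile ${case_path}/csv/force_${cur_step}.csv
next cur_step
# jump to a named file instead of SELF: LAMMPS reopens this file
# and scans it for "label loop_start"
jump in.loop loop_start
```

With a named file, “jump” works in all LAMMPS versions, at the cost of reopening and rescanning the file on every iteration.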
What you are doing here makes no sense. Why reopen and read the file in every step? Why not read it outside the loop and be done?
@akohlmey
Thanks for your kind reply. I am now clear about Problem 1.
For Problem 2,
Why reopen and read the file in every step? Why not read it outside the loop and be done?
They are different files. At each timestep, I need to open the corresponding file.
What I want to assign is a time-dependent force for each atom i, i.e., F(i, t).
To assign them via variable atomfile, I stored these values in N_\mathrm{step} CSV files. At each timestep t_i, I read the corresponding forces from force_{step}.csv and assigned them to each atom.
You can speed up the process a little by combining the files into a single file and using the next command to loop through the data sets, but the general overhead of reading and parsing a text file and broadcasting its information across all processors remains, and may dominate the cost of computing the forces.
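The merging step could be sketched as below. This is a minimal, assumed layout: per-step files named force_1.csv … force_N.csv, each containing one "atomID value" pair per line (comma- or whitespace-separated), concatenated into the multi-set format that an atomfile-style variable can step through with the next command (one header line giving the number of atoms, then the ID/value lines, per set). Adjust the parsing and the number of force components to match your actual files.

```python
import os

def merge_force_files(csv_dir, n_step, out_path):
    """Concatenate per-step force files into one multi-set atomfile.

    Assumes each input file force_<step>.csv holds lines of the form
    "atomID value" (commas tolerated). Each output set starts with a
    comment line and a count line, followed by the ID/value pairs.
    """
    with open(out_path, "w") as out:
        for step in range(1, n_step + 1):
            path = os.path.join(csv_dir, f"force_{step}.csv")
            with open(path) as f:
                # normalize commas to whitespace, skip blank lines
                rows = [line.replace(",", " ").split()
                        for line in f if line.strip()]
            out.write(f"# forces for step {step}\n")
            out.write(f"{len(rows)}\n")          # number of atoms in this set
            for atom_id, value in rows:
                out.write(f"{atom_id} {value}\n")
```

In the input script you would then define the atomfile variable once before the loop and call next on it each iteration, instead of deleting and recreating it.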
The only way I can think to speed this up would be to not read forces from files, but compute them on-the-fly.
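If F(i, t) can be written as a closed-form expression, one documented way to do this is an atom-style variable fed to fix addforce, so no files are read at all. The expression and fix ID below are purely illustrative:

```
# Sketch: on-the-fly time-dependent per-atom force, assuming F(i,t)
# has an analytic form. "pull" and the formula are illustrative only.
variable fx atom "0.1*sin(2*PI*step*dt/100.0)"
fix pull all addforce v_fx 0.0 0.0
```

This avoids the parse-and-broadcast overhead entirely, but only works when the force is expressible with the functions and thermo keywords available to atom-style variables.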