I was running a simple LJ fluid simulation in the NVT ensemble. Everything was going fine, but suddenly I am unable to generate output of any kind. It is not just that the output file comes out empty: the log.lammps file that is normally generated during a simulation is not created at all. I am not sure where to look.
It is very hard to help you without more information on the commands you use, but as a rule of thumb in this kind of situation, it is good to first make sure that the disk space allotted to your Linux partition is not full. MD simulations can be very greedy with disk space, and this is a common pitfall.
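From inside WSL this can be checked quickly with standard tools (a small sketch; the exact mount points that df reports will depend on your own setup):
df -h          # a Use% close to 100% on the partition holding your run directory is a red flag
du -sh .       # how much the current working directory itself occupies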
If this is not the case, then you should:
Provide the version of LAMMPS you are using and the configuration of your WSL (the sketch just after this list shows one way to gather that information).
Provide the commands you used to launch LAMMPS, together with a minimum working example.
If possible, also provide the screen output and any error messages that appear when LAMMPS is launched from the command line.
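In case it helps, here is a rough sketch of how that information can be gathered; the executable name lmp is an assumption, so substitute whatever name your build produced (e.g. lmp_mpi):
lmp -h | head            # inside WSL: the help header printed by the LAMMPS binary includes its version
wsl --version            # from Windows PowerShell: reports the WSL version (or use: wsl -l -v on older installs)
lsb_release -a           # inside WSL: reports the Ubuntu release
cat /etc/os-release      # alternative if lsb_release is not installed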
No log file is generated, and the output.lmp file contains no output or error messages.
I am not sure what the configuration of WSL is or how to get that information. Can you please tell me?
The WSL version I am using is 0.2.1.
The following is the input file, in.lmp, that was working before but is not working now.
## created by chaitanya
## Simulation details
units lj
boundary p p p
atom_style full
## Defining and creating the system
variable side equal 8 ## Based on N=256
region simbox block 0.0 ${side} 0.0 ${side} 0.0 ${side}
create_box 1 simbox
lattice fcc 0.9 ## Reduced density
create_atoms 1 box
## Specifying mass (can be changed to change identity of molecule)
mass 1 1.0
variable TK equal 0.75 ## reduced temperature
velocity all create ${TK} 12345
## using lj interactions with tail addition
pair_style lj/cut 2.5 ## cutoff is in reduced units, i.e. 2.5*sigma/sigma
pair_modify tail yes
pair_coeff 1 1 1.0 1.0 2.5 ## cut off specification is optional
#minimize 1.0e-4 1.0e-6 100 1000 ## Yes required since starting from a random configuration
#reset_timestep 0
neighbor 0.3 bin ## proportional to box size
timestep 0.0001 ## The default value set for lj units is 0.005
fix TVSTAT all nvt temp ${TK} ${TK} 0.1
#fix 1 all nve
compute T all temp
fix avgT all ave/time 250 2000 500000 c_T file Tavg mode scalar
thermo 1000
thermo_style custom step cpu etotal ke pe evdwl elong temp press vol density atoms
run 500000
thermo 1000
thermo_style custom step cpu etotal ke pe evdwl temp press vol density atoms
unfix avgT
reset_timestep 0
variable P equal press
fix avgP all ave/time 500 2000 1000000 v_P file Pavg mode scalar
run 1000000
Thanks for the details. I cannot reproduce the issue with your command and input file, so I suspect the problem comes from your Linux installation.
I am not sure what the configuration of WSL is or how to get that information. Can you please tell me? The WSL version I am using is 0.2.1.
I was more interested in the Ubuntu version installed on the WSL partition and in the disk usage.
A couple more questions:
Does the output.lmp file contain anything? Is this the log file you are talking about? With your command there should be both an output.lmp and a log.lammps output at the end of a run.
What is the output of the command df -h?
By the way, if your LAMMPS executable is compiled with MPI and used for parallel simulations, I would recommend getting into the habit of using the format @hothello suggested, that is, the command-line flags -i and -l (or -in and -log), since they ensure correct execution with mpirun or mpiexec.
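For example (a sketch only; the executable name lmp_mpi and the process count are assumptions, so adapt them to your build and machine):
mpirun -np 4 lmp_mpi -in in.lmp -log log.lammps
mpirun -np 4 lmp_mpi -i in.lmp -l log.lammps      # the same command with the short forms of the flags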
Yes, with my command I should get both the output.lmp file and the log.lammps file. But the output.lmp file comes out empty, and no log.lammps file is generated.
The WSL has Ubuntu version 20.04 LTS.
I did try the command suggested by @hothello, but the result is the same.
I think this rules out disk-space problems, since all your partitions have some free space.
The last advice I can give is to not redirect the output, and to make sure that your executable is correctly referenced in your command (both how it was compiled and where the shell looks for it). This would let you see any error message from the execution that you might otherwise be missing.
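Something along these lines (the executable name is again an assumption):
lmp_mpi -in in.lmp -log log.lammps    # no redirection, so any error appears directly in the terminal
which lmp_mpi                         # shows which binary the shell actually resolves
type lmp_mpi                          # also catches aliases or shell functions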
Also, building and testing a serial executable would make sense, to check whether the problem comes from the MPI compilation or execution. At the moment I have no further insights beyond this; I have no idea where else the issue could come from.
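A minimal sketch of building and testing a serial binary, assuming the LAMMPS sources are in ~/lammps (adjust the paths to your own checkout and run directory):
cd ~/lammps/src
make serial                          # traditional make; produces the executable lmp_serial in this directory
cd /path/to/your/run/directory       # illustrative path: wherever in.lmp lives
~/lammps/src/lmp_serial -in in.lmp -log log.serial.lammps
# (with CMake instead: configure a build directory with -D BUILD_MPI=no, then build as usual)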