Unusual printing error

Dear LAMMPS users,

I am using LAMMPS version 22 Aug 2018. I am trying to print the pairwise forces for each pair of atoms for the lj/cut (shifted Lennard-Jones) potential. I have added this fprintf statement in pair_lj_cut.cpp:
      if (rsq < cutsq[itype][jtype]) {
        r2inv = 1.0/rsq;
        r6inv = r2inv*r2inv*r2inv;
        forcelj = r6inv * (lj1[itype][jtype]*r6inv - lj2[itype][jtype]);
        fpair = factor_lj*forcelj*r2inv;

        fprintf(stdout, "%4d %4d %12.6le %12.6le %12.6le\n",
                i, j, delx*fpair, dely*fpair, delz*fpair); // edited by me

        f[i][0] += delx*fpair;
        f[i][1] += dely*fpair;
        f[i][2] += delz*fpair;

But I am getting unusual output:

49 54 9.866806e-02 -3.548957e-02 -7.378500e-04
49 73 52 209 5.992228e-03 1.924005e-02 3.477053e-02
52 210 -1.962316e-03 5.538734e-03 5.712959e-02

63 253 -2.763190e-01 2.119301e-01 1.751214e-01
63 254 -7.322 45 207 -2.950140e-02 4.773966e-02 2.449487e-02
45 208 -3.005987e-01 4.716005e-02 1.840282e-01

There are multiple unusual output lines like these in the file. Can anyone please tell me where I am going wrong and what mistake I might have made?

Did you build LAMMPS again after editing the code?
Once you make any changes to the code, you need to recompile LAMMPS to get a new executable.
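For the 22 Aug 2018 version, that would typically be the traditional make-based build (a sketch; substitute whichever machine makefile you originally built with, e.g. mpi or serial):

```shell
# From the top of the LAMMPS source tree, after editing pair_lj_cut.cpp:
cd src
make mpi    # recompiles the changed files and relinks the lmp_mpi executable
```

Make sure your job script actually runs the freshly built executable, not an older installed copy.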

Are you running in parallel?
When output is generated from multiple parallel processes to the same file (or the screen), there is no guarantee that it will not get corrupted or garbled. Output is usually block buffered, and with multiple MPI ranks the screen output from other processes has to be forwarded to MPI rank zero. There are two ways to get "clean" output: 1) write to a different file for each MPI rank, or 2) insert a loop over all MPI ranks with MPI_Barrier() calls, do the output only on the selected rank, and follow it with fflush(). Option 2) will cause a massive slowdown.