Lammps on Slurm different results for different tasks per node

I have two LAMMPS builds, one from 2021 and one from 2024, because my group runs calculations on both the Vienna Scientific Cluster (VSC) 4 and VSC5. I perform AEMD runs for MOF-74 to calculate the thermal conductivity in the x-direction. When I run the same molecular dynamics simulation, I get different results depending on the number of tasks per node: VSC5 supports up to 128 tasks per node, whereas VSC4 only supports 48. However, when I request the same number of tasks per node on both clusters, the results are identical.
Does anyone have experience with that?
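My current guess (an assumption on my part, not something I have verified in the LAMMPS source) is that this comes from floating-point non-associativity: a different number of MPI ranks changes the domain decomposition and therefore the order in which per-atom contributions are summed, and in a chaotic MD trajectory those last-bit differences grow over time. A minimal Python sketch of the effect (hypothetical, not LAMMPS code):

```python
import random

# Hypothetical illustration (not LAMMPS code): floating-point addition
# is not associative, so summing the same per-atom contributions with a
# different "domain decomposition" gives slightly different totals.
random.seed(0)
values = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

def chunked_sum(vals, nchunks):
    """Sum vals as nchunks partial sums, mimicking an MPI reduction."""
    n = len(vals)
    chunks = [vals[i * n // nchunks:(i + 1) * n // nchunks]
              for i in range(nchunks)]
    return sum(sum(c) for c in chunks)

# 48 vs. 128 "ranks": the totals typically differ in the last bits,
# which is enough to make a chaotic MD trajectory diverge over time.
print(chunked_sum(values, 48) - chunked_sum(values, 128))
```

If that is the cause, the two runs would be statistically equivalent even though individual trajectories differ, but I would like to confirm this.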
Here is my submission script for the VSC4:
#!/bin/bash

#SBATCH -J 10x1x6_Mg_MOF74_AEMD
#SBATCH -N 1
#SBATCH --time=24:00:00
#SBATCH --partition=skylake_0096
#SBATCH --qos skylake_0096
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=48
#SBATCH --output=output.vasp
#SBATCH --error=errors.vasp

echo "*** Starting job " $SLURM_JOB_ID $SLURM_ARRAY_TASK_ID $SLURM_JOB_NAME " ***"
echo "Start date: " $(date)
echo "Start time: " $(date +%s)
echo "Nodes: " $SLURMD_NODENAME
echo "CPUs: " $SLURM_NTASKS

export OMP_NUM_THREADS=1

export MODULEPATH=/opt/sw/vsc4/VSC/Modules/TUWien:/opt/sw/vsc4/VSC/Modules/Intel/oneAPI:/opt/sw/vsc4/VSC/Modules/Parallel-Environment:/opt/sw/vsc4/VSC/Modules/Libraries:/opt/sw/vsc4/VSC/Modules/Compiler:/opt/sw/vsc4/VSC/Modules/Debugging-and-Profiling:/opt/sw/vsc4/VSC/Modules/Applications:/opt/sw/vsc4/VSC/Modules/p71545::/opt/sw/spack-0.17.1/var/spack/environments/zen3/modules/linux-almalinux8-zen:/opt/sw/spack-0.17.1/var/spack/environments/zen3/modules/linux-almalinux8-zen2:/opt/sw/spack-0.17.1/var/spack/environments/zen3/modules/linux-almalinux8-zen3

module purge
module load ncurses/6.2-gcc-11.2.0-p2bq5vo readline/8.1-gcc-11.2.0-3orznc6 libiconv/1.16-gcc-11.2.0-inwvwju tar/1.34-gcc-11.2.0-quetpvu amdscalapack/3.0-gcc-11.2.0-fgceavk gcc/11.2.0-gcc-11.2.0-5i4t2bo amdfftw/3.1-gcc-11.2.0-tbwhqad
module list

# Execute your calculation:

mpirun /home/fs71791/martinklotz/lammps_with_mlip_interface_vsc4/mlip2/lmp_mpi -in AEMD.in &> out_AEMD_Mg_MOF74.log

echo "End date: " $(date)
echo "End time: " $(date +%s)
echo "*** Finished job " $SLURM_JOB_ID $SLURM_ARRAY_TASK_ID $SLURM_JOB_NAME " ***"

and here for VSC5:

#!/bin/bash

#SBATCH -J 8x1x6_Mg_MOF74_AEMD
#SBATCH -N 1
#SBATCH --time=24:00:00
#SBATCH --partition=zen3_0512
#SBATCH --qos zen3_0512
#SBATCH --ntasks-per-core=1
#SBATCH --ntasks-per-node=48
#SBATCH --output=output.vasp
#SBATCH --error=errors.vasp

echo "*** Starting job " $SLURM_JOB_ID $SLURM_ARRAY_TASK_ID $SLURM_JOB_NAME " ***"
echo "Start date: " $(date)
echo "Start time: " $(date +%s)
echo "Nodes: " $SLURMD_NODENAME
echo "CPUs: " $SLURM_NTASKS

export OMP_NUM_THREADS=1

export MODULEPATH=/opt/sw/vsc4/VSC/Modules/TUWien:/opt/sw/vsc4/VSC/Modules/Intel/oneAPI:/opt/sw/vsc4/VSC/Modules/Parallel-Environment:/opt/sw/vsc4/VSC/Modules/Libraries:/opt/sw/vsc4/VSC/Modules/Compiler:/opt/sw/vsc4/VSC/Modules/Debugging-and-Profiling:/opt/sw/vsc4/VSC/Modules/Applications:/opt/sw/vsc4/VSC/Modules/p71545::/opt/sw/spack-0.17.1/var/spack/environments/zen3/modules/linux-almalinux8-zen:/opt/sw/spack-0.17.1/var/spack/environments/zen3/modules/linux-almalinux8-zen2:/opt/sw/spack-0.17.1/var/spack/environments/zen3/modules/linux-almalinux8-zen3

module purge
module load ncurses/6.2-gcc-11.2.0-p2bq5vo readline/8.1-gcc-11.2.0-3orznc6 libiconv/1.16-gcc-11.2.0-inwvwju tar/1.34-gcc-11.2.0-quetpvu amdscalapack/3.0-gcc-11.2.0-fgceavk gcc/11.2.0-gcc-11.2.0-5i4t2bo amdfftw/3.1-gcc-11.2.0-tbwhqad
module list

# Execute your calculation:

mpirun ~/lammp_vsc5_for_martin/interface-lammps-mlip-2_vsc5/lmp_vsc5 -in AEMD.in &> out_AEMD_Mg_MOF74_8x1x6.log

echo "End date: " $(date)
echo "End time: " $(date +%s)
echo "*** Finished job " $SLURM_JOB_ID $SLURM_ARRAY_TASK_ID $SLURM_JOB_NAME " ***"
These are the submission scripts for the runs that return the same results. For the run with different results, the 48 in `#SBATCH --ntasks-per-node=48` is changed to 128 in the VSC5 script.

Please have a look at this discussion.
It is also a good idea to read the forum rules, as your post is poorly formatted.