It is very unlikely that a segmentation fault would be caused by an out-of-memory scenario these days, and the fact that you can run with a single process confirms that. This rather hints at a design choice of the original ReaxFF code (in C): at the beginning of a run it made guesses about how large certain arrays could become at most during that run. However, this assumption is only valid for pre-equilibrated bulk systems. When running in parallel and starting from a non-equilibrated configuration, the changes can be larger than that, and they are relatively larger the more processes are used in parallel. This is where the KOKKOS variant is to be preferred, because it uses a more robust memory management approach.
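To make that failure mode concrete, here is a minimal, purely illustrative C sketch of the "guess the maximum size once, up front" allocation pattern; all names and numbers are hypothetical and this is not the actual ReaxFF source:

```c
/* Illustrative sketch only -- not the actual ReaxFF source. It shows the
 * "guess the maximum size once, up front" allocation pattern described
 * above. All names and numbers here are hypothetical. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int natoms = 1000;
    int est_bonds_per_atom = 8;   /* guessed from the initial geometry */
    double safezone = 1.2;        /* hypothetical safety factor */
    int capacity = (int)(natoms * est_bonds_per_atom * safezone);

    int *bond_list = malloc(sizeof(int) * (size_t)capacity);
    if (!bond_list) return 1;

    /* After some dynamics, a non-equilibrated system can form far more
     * bonds than the initial guess anticipated. */
    int nbonds = 13000;           /* exceeds capacity == 9600 */

    if (nbonds > capacity) {
        /* Code that skips this check and simply writes bond_list[i] for
         * i up to nbonds corrupts the heap and typically ends in a
         * segmentation fault -- the failure mode speculated about above. */
        fprintf(stderr, "bond list overflow: %d > %d\n", nbonds, capacity);
        free(bond_list);
        return 1;
    }

    free(bond_list);
    return 0;
}
```

For the non-KOKKOS pair style, the safezone and mincap keywords of pair_style reaxff exist precisely to enlarge such guesses; the KOKKOS variant instead grows its storage as needed, which is why it is more robust here.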
That statement is nonsense. If anything, LAMMPS has become better at spotting situations where memory is corrupted or invalid settings are used, and it will thus terminate with an error instead of continuing and producing bogus results.
Making such statements without reporting to the LAMMPS developers the information showing how more recent versions of LAMMPS are supposedly less stable is also very bad practice.
If the cause is the change of geometry, as I am speculating above, then reducing the timestep will only delay the onset of the segfault, as it will take 10x more timesteps to reach a situation where the geometry change requires data storage beyond the initially guessed thresholds.