Applying Ewald summation to one-dimensional nanowires

Dear all,

I know that we cannot use one-dimensional Ewald summation in LAMMPS. But can I simply set the periodic lengths in the x and y directions (perpendicular to the nanowire axis) to be very large compared with the real cross section of the nanowire? Here I only care about the properties in the direction along the nanowire.

If this is applicable, another problem arises: since the periodic lengths in the x and y directions are very large, the simulation takes much longer than the one for a bulk structure that is strictly periodic in all three dimensions, even with the same number of atoms in the simulated cell. Can anybody explain why this happens? Is there any way to decrease the simulation time? Thanks.
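
For concreteness, a minimal sketch of the kind of input I have in mind (box sizes are from my test case; the number of atom types and the Ewald accuracy are just placeholders):

units           metal
dimension       3
boundary        p p p          # fully periodic; x/y images separated by vacuum
region          box block 0 100 0 100 0 40 units box
create_box      2 box          # ~20x20 A wire cross section inside ~100x100 A of vacuum
kspace_style    ewald 1.0e-4   # 3d Ewald still sums over the padded x/y images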

Best,
Jihong

For the second issue, are you running jobs in parallel? If so, you might want to use the processors command to slice the processors along the nanowire direction, such as 1 1 n.
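
For example, assuming the wire axis is z:

processors 1 1 *    # one processor column in x and y; all ranks stacked along the wire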

Best Regards,
Vikas

You can’t use any of the KSPACE solvers in LAMMPS with a 2d (or 1d) system,
so you must be running with dimension = 3. How did you set up
your problem? And how are you comparing it to a 3d run you say is faster?

LAMMPS prints out the KSpace info, like the # of grid points or the # of KSpace vectors.
What are those values for the 2 runs you are comparing? What fraction
of the time is spent in the KSpace solve?

Steve

Hi Vikas,

Thanks. Yes, I am running in parallel. But the problem is that it is the calculation in the x and y directions that takes the extra time, not the z direction, which is the direction of the nanowire. And I don’t actually care that much about the x and y directions. So if I use 1 1 n in the processors command, it will not help that much. Am I right?

Best,
Jihong

Hi Steve,

Thanks. Yes, I am running in 3-D. For example, the actual size of the system I was using for simulating a nanowire is about 20*20*40 (in Angstroms). To simulate a [001] nanowire, I set the periodic lengths in the x and y directions to ~100 Angstroms, and the one in the z direction to ~40 Angstroms. To simulate a 3-D bulk, I used a system with a size of about 40*40*40 (in Angstroms). I set the periodic lengths in all three dimensions to ~40 Angstroms. The KSpace vectors for the NW simulation are
KSpace vectors: actual max1d max3d = 53567 39 246519
The loop time is
Loop time of 71347.9 on 24 procs for 100000 steps with 896 atoms.

Pair time (%) = 17.0589 (0.0239095) Kspce time (%) = 11275.9 (15.8041)
Neigh time (%) = 0.00013567 (1.90153e-07) Comm time (%) = 2301.05 (3.22512)
Outpt time (%) = 43258.3 (60.6302) Other time (%) = 14495.6 (20.3168)

And for the 3-D bulk, it is
KSpace vectors: actual max1d max3d = 5756 14 12194.
Loop time of 2485.77 on 24 procs for 100000 steps with 2744 atoms

Pair time (%) = 85.0485 (3.42142) Kspce time (%) = 1915.43 (77.0558)
Neigh time (%) = 0 (0) Comm time (%) = 454.387 (18.2795)
Outpt time (%) = 21.9307 (0.882252) Other time (%) = 8.97364 (0.361001)

So how can I avoid this time-consuming scenario?

Thanks,
Jihong

Try it… :-) There is no harm in that. You don’t have to run the whole 100000 steps; just 1000 steps will tell you the difference. In the thermo_style command, use the spcpu keyword for a quick estimate of how fast the simulations are running.
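
For example (the other thermo columns are just whatever you normally print):

thermo_style custom step temp press spcpu   # spcpu = timesteps per CPU second since last output
thermo 100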

HTH

Best Regards,

Vikas

Hi Vikas,

Thanks. Yes, I am running in parallel. But the problem is that it is the
calculation in the x and y directions that takes the extra time, not the z
direction, which is the direction of the nanowire.

Sorry, but this does not make sense at all. How do you know that the
calculation in the x and y directions is taking much longer?

In addition to physically assigning the processor grid as Vikas suggested,
you can also use the "balance" and "fix balance" commands to evenly
distribute the load.
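
For example, assuming the wire lies along z and most of the x/y box is vacuum (the thresholds and interval are placeholders you should tune):

balance 1.1 shift xy 10 1.05                   # one-time rebalance at setup
fix lb all balance 1000 1.1 shift xy 10 1.05   # rebalance every 1000 steps during the run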

Ray

Hi Steve,

Thanks. Yes, I am running in 3-D. For example, the actual size of the
system I was using for simulating a nanowire is about 20*20*40 (in
Angstroms). To simulate a [001] nanowire, I set the periodic lengths in the
x and y directions to ~100 Angstroms, and the one in the z direction to ~40
Angstroms. To simulate a 3-D bulk, I used a system with a size of about
40*40*40 (in Angstroms). I set the periodic lengths in all three dimensions
to ~40 Angstroms. The KSpace vectors for the NW simulation are
KSpace vectors: actual max1d max3d = 53567 39 246519
The loop time is
Loop time of 71347.9 on 24 procs for 100000 steps with 896 atoms.

Pair time (%) = 17.0589 (0.0239095) Kspce time (%) = 11275.9 (15.8041)
Neigh time (%) = 0.00013567 (1.90153e-07) Comm time (%) = 2301.05 (3.22512)
Outpt time (%) = 43258.3 (60.6302) Other time (%) = 14495.6 (20.3168)

And for the 3-D bulk, it is
KSpace vectors: actual max1d max3d = 5756 14 12194.
Loop time of 2485.77 on 24 procs for 100000 steps with 2744 atoms

Pair time (%) = 85.0485 (3.42142) Kspce time (%) = 1915.43 (77.0558)
Neigh time (%) = 0 (0) Comm time (%) = 454.387 (18.2795)
Outpt time (%) = 21.9307 (0.882252) Other time (%) = 8.97364 (0.361001)

Your kspace time only increased by a factor of 6 (11275.9 s vs. 1915.43 s),
which is not too bad considering the vacuum size. What is hurting you is the
time spent in Output and Other. What you are doing in your script other than
the kspace and box size is what really matters.

Ray

Dear all,

I know that we cannot use one-dimensional Ewald summation in LAMMPS. But can

you could, if you programmed it. there is at least one publication
describing it; a colleague of mine in grad school worked on implementing it
in a local code ... and that was almost 20 years ago.

I simply set the periodic lengths in the x and y directions (which are
perpendicular to the nanowire axis) to be very large compared with the real
cross section of the nanowire? Here I only care about the properties in the
direction along the nanowire. If this is applicable,

it is not as simple as that. those periodic images are still coupled on an
infinite lattice in kspace, and even a "large" distance will not help much,
since charge and dipole interactions don't decay very fast. what you would
need is a poisson solver that cancels the interaction between the periodic
images, in a similar fashion to what the slab correction does for the
z direction when using pppm.
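
for comparison, this is how the existing slab correction is invoked for a film that is periodic in x/y only; there is no analogous built-in correction for a wire that is periodic along z only:

boundary        p p f            # periodic in x and y, non-periodic in z
kspace_style    pppm 1.0e-4
kspace_modify   slab 3.0         # pad z with empty volume and cancel z-image coupling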

how much of an artifact you will get by treating a periodic system as if it
were nonperiodic is hard to say. but you have to remember that you do a lot
of computation to represent what is basically vacuum.

for a not too large 1-d system, i would recommend also trying a simple
cutoff coulomb style with a very long coulomb cutoff. the additional cost
may be quite acceptable, since the number of neighbors does not grow as
quickly as in a dense 3d case, and you only compute interactions that are
actually there. using a properly adapted domain decomposition and load
balancing is - of course - a must, as was mentioned before.
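
a minimal sketch of what i mean; the cutoffs are placeholders you have to converge for your own system:

pair_style      lj/cut/coul/cut 10.0 40.0   # short lj cutoff, long plain coulomb cutoff
kspace_style    none                        # no ewald/pppm at all
processors      1 1 *                       # decompose along the wire only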

axel.