Added lines becoming ineffective for multiple proc. run


> Dear Lammps users,
>
> I am trying to add another style to the displace_atoms command. So I edited

what "style"?

Similar to move/ramp/random/rotate, I wanted to impose a particular
wave-like displacement on the configuration. For that I am reading those
displacements from a file and adding them to x.

when you read in those displacements, how do you distribute them to
the individual MPI tasks when running in parallel?

> displace_atoms.cpp and added these extra lines:
>
>   double **dispvec = atom->dispvec;
>   double **velvec = atom->velvec;
>   for (i = 0; i < nlocal; i++) {
>     if (mask[i] & groupbit) {
>       x[i][0] += dispvec[i][0];
>       x[i][1] += dispvec[i][1];
>       x[i][2] += dispvec[i][2];
>
>       v[i][0] = velvec[i][0];
>       v[i][1] = velvec[i][1];
>       v[i][2] = velvec[i][2];
>     }
>   }
>
> This modification is effective only if I run LAMMPS on a single
> processor. However, when I use multiple processors, this part of the
> code has no effect.
>
> Any comments on this apparent discrepancy?

it must be the result of your programming, though likely not at this
location but elsewhere. there is not enough information about what you
are trying to do anyway.

With a single-processor run everything is alright and the expected result
is produced, but there is no effect at all with a multi-processor run.

which means that it is most likely that you don't have the proper
information available on the parallel processors.

I added the new variables in atom.cpp and atom_vec_full.cpp along with x,
v, and f. Should the new variables be included somewhere else as well to
become effective in a multi-processor (MPI) run?

those are two different questions/problems.

for the first part. storing information that is used in a command for
a one time use in a permanent location in the atom class is a _very_
bad idea. the whole point about using C++ in lammps is to *avoid*
global data and promote encapsulation.

for the second part. it is *your* task to properly distribute the
information that you want to send to individual atoms across the
parallel environment and that is best done while reading it. it is
also not clear from the small code fragment how you are matching
global atom ids with local indexes.

overall, i don't think that your addition is well chosen. for as long
as you can represent your modification with a simple formula, i'd
rather use atom style variables. the velocity command already supports
this, so you'd only have to teach the displace_atoms move command to
use variables (how to do that, you can infer, for example, from the
velocity command) and specifically in your case most likely atom style
variables. or alternatively, since you already read data from a file,
why don't you just write out a dump file, update positions and
velocities to your liking from a suitable self-written script/program
and then use read_dump to feed that information back to lammps?

no parallel programming needed.
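for concreteness, the atom-style-variable route could look something like
the sketch below in an input script. the wave profile and the parameter
names (A, k, a sine along y) are made up here, not from the thread; the
velocity line works as-is, while the displace_atoms line is exactly the
part that would need new code:

```
# hypothetical wave profile: amplitude A, wavevector k (made-up values)
variable A  equal 0.5
variable k  equal 0.2
variable dx atom  v_A*sin(v_k*y)

# the velocity command already accepts atom-style variables:
velocity all set v_dx 0.0 0.0

# displace_atoms move would have to be taught the same trick:
# displace_atoms all move v_dx 0.0 0.0
```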

axel.

[...]

> when you read in those displacements, how do you distribute them to
> the individual MPI tasks when running in parallel?

Until now I haven't taken care of the MPI tasks. I need your help in
this regard.

[...]

> which means that it is most likely that you don't have the proper
> information available on the parallel processors.

Yes

[...]

> for the first part. storing information that is used in a command for
> a one time use in a permanent location in the atom class is a _very_
> bad idea. the whole point about using C++ in lammps is to *avoid*
> global data and promote encapsulation.

I can declare those two new variables in displace_atoms.h to be more
efficient. I chose to declare them in atom.cpp because of a problem with
the array size when I tried to initialize them in displace_atoms.h.

> for the second part. it is *your* task to properly distribute the
> information that you want to send to individual atoms across the
> parallel environment and that is best done while reading it. it is
> also not clear from the small code fragment how you are matching
> global atom ids with local indexes.

I am reading the displacements of *all* atoms (non-zero for the displaced
atoms, zero for the rest). So I thought I had avoided the matching problem
by reading them for all atoms. Please confirm that I am doing that right.

> overall, i don't think that your addition is well chosen. for as long
> as you can represent your modification with a simple formula, i'd
> rather use atom style variables. the velocity command already supports
> this, so you'd only have to teach the displace_atoms move command to
> use variables (how to do that, you can infer, for example, from the
> velocity command) and specifically in your case most likely atom style
> variables. or alternatively, since you already read data from a file,
> why don't you just write out a dump file, update positions and
> velocities to your liking from a suitable self-written script/program
> and then use read_dump to feed that information back to lammps?
>
> no parallel programming needed.

Initially I used a scheme like the one you mention (write_restart -->
adding my wave displacements using MATLAB --> read_restart). However,
since my system is very highly minimized, the thermo at the start of the
new run was not equal to the end state of the previous run. I found a
significant loss of precision in the binary-to-text conversion, so I
then started to read the displacements in directly.

This new 'WAVE' style of the displace_atoms command could be a potential
addition to LAMMPS. This feature would be useful mostly for imposing
displacements with any arbitrary user-defined profile (the user has to be
ready with the generated displacements in a file) and rarely to avoid a
change of state before and after 'dump and read_dump' due to loss of
precision (for highly minimized configurations like in my case). If you
are interested in looking into it, I would like to mail you the modified
displace_atoms.cpp. The only remaining task is to make it work in a
parallel environment. In any case I need your help with distributing the
MPI tasks while reading in.

[...]

>> when you read in those displacements, how do you distribute them to
>> the individual MPI tasks when running in parallel?
>
> Until now I haven't taken care of the MPI tasks. I need your help in
> this regard.

how can you even expect something to work in parallel, when you don't
consider the flow of data?

i don't have the time to teach people MPI or implement their code for
them. sorry.

[...]

> I can declare those two new variables in displace_atoms.h to be more
> efficient. I chose to declare them in atom.cpp because of a problem with
> the array size when I tried to initialize them in displace_atoms.h.

i strongly recommend reading more of lammps's source code before making
random changes until they (partly) work. if you want to make non-trivial
changes, there is no alternative to knowing what you are doing. there are
likely pieces of code that do something close to what you need, so it is
mostly a matter of identifying them and adapting them to your needs.

[...]

> I am reading the displacements of *all* atoms (non-zero for the displaced
> atoms, zero for the rest). So I thought I had avoided the matching problem
> by reading them for all atoms. Please confirm that I am doing that right.

you are missing the point. the order of atoms in the atom->x and
atom->v arrays can be very different from the order of the atom ids.
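this point can be made concrete with a small standalone sketch (plain C++,
no LAMMPS headers; the names only mimic atom->tag and atom->x, and the
function name apply_by_tag is made up for illustration). data read from a
file with one row per global atom ID has to be looked up through the atom's
global ID, which is the job atom->tag and atom->map() do inside LAMMPS:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// fromFile holds one value per atom: row g-1 belongs to global atom ID g,
// as when a file lists displacements for *all* atoms in ID order.  tag[i]
// is the global ID of local atom i.  The local storage order is arbitrary
// (and changes with the domain decomposition), so indexing fromFile by the
// local index i pairs atoms with the wrong rows; indexing by tag[i] pairs
// them correctly -- the role atom->tag / atom->map() play in LAMMPS.
void apply_by_tag(const std::vector<long> &tag,
                  const std::vector<double> &fromFile,
                  std::vector<double> &x) {
  for (std::size_t i = 0; i < tag.size(); ++i)
    x[i] += fromFile[tag[i] - 1];
}
```

with tag = {3, 1, 4} and per-ID values {0.25, 0.5, 0.75, 1.0}, local atom 0
(global ID 3) receives 0.75 -- not 0.25, as the posted loop would give.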

[...]

> Initially I used a scheme like the one you mention (write_restart -->
> adding my wave displacements using MATLAB --> read_restart). However,
> since my system is very highly minimized, the thermo at the start of the
> new run was not equal to the end state of the previous run. I found a
> significant loss of precision in the binary-to-text conversion, so I
> then started to read the displacements in directly.

you are overwriting the velocities in your code. how can you expect
the thermodynamic data to be identical?
also, minimization beyond the ruggedness of the potential hypersurface
is a pointless exercise considering all the approximations and
truncation happening in classical models with cutoffs. please note
that i suggested using dump and read_dump, where you can define the
format string and thus the precision of the written coordinates. the
first option (using atom style variables) incurs no loss of precision
at all.
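a sketch of the dump/read_dump round trip, with made-up file names; the
dump_modify format keyword (for floats, in current LAMMPS versions)
controls how many digits the coordinates are written with, so the round
trip can be made as tight as double precision allows:

```
dump 1 all custom 100 snap.dump id type x y z vx vy vz
dump_modify 1 format float %20.15g

# ... edit snap.dump externally, then feed it back:
read_dump snap.dump 100 x y z vx vy vz
```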

> This new 'WAVE' style of the displace_atoms command could be a potential
> addition to LAMMPS. This feature would be useful mostly for imposing
> displacements with any arbitrary user-defined profile (the user has to be
> ready with the generated displacements in a file) and rarely to avoid a

as i mentioned, teaching displace_atoms to use variables can handle
this in the cleanest and most elegant way. there are also "file" style
variables that would read in any kind of information, in case you
can't easily compute it inside of LAMMPS.
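for per-atom tables like the ones in this thread, the relevant variant is
the atomfile style, which reads one value per atom ID from a file. a
sketch (file names made up); the velocity part works today, while the
displacement part would still need the displace_atoms change:

```
# each file holds blocks of "atom-ID value" lines, one value per atom
variable vx atomfile velx.file
variable vy atomfile vely.file
variable vz atomfile velz.file

# atomfile-style variables can be used wherever atom-style ones can:
velocity all set v_vx v_vy v_vz
```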

> change of state before and after 'dump and read_dump' due to loss of
> precision (for highly minimized configurations like in my case). If you
> are interested in looking into it, I would like to mail you the modified
> displace_atoms.cpp. The only remaining task is to make it work in a
> parallel environment. In any case I need your help with distributing the
> MPI tasks while reading in.

i only do work for other people when it interests me personally (and
this is not such a case) or when i get suitably compensated.

axel.