[lammps-users] Aspherical particles simulated as granular particles and conversion b/w units?

Hi,

I need to know how to simulate Hertzian-like contact mechanics for aspherical particles (ellipsoids). Will simply altering the shape of the particles and using the usual commands for granular particles work (e.g. pair_style gran/hertz/history and fix nve/sphere)?

I am also slightly confused about how one deals with units in LJ (reduced) units. Using mock values in the example scripts I have become comfortable with LJ units and would like to stick with them, but I am not sure how to scale them to the actual required units. My understanding is that at the granular scale the mass is normalized by the particle mass, lengths are normalized by the particle diameter d, and (I assume) time by sqrt(d/g). So, to represent a velocity of, say, 0.1 m/s, would vel* = 0.1 * sqrt(d/g) / d (with g = 9.81 m/s^2, the acceleration due to gravity) be the velocity in LJ units?

Ripudaman Manchanda

> I need to know how to simulate Hertzian-like contact mechanics for aspherical particles (ellipsoids). Will simply altering the shape of the particles and using the usual commands for granular particles work (e.g. pair_style gran/hertz/history and fix nve/sphere)?

No - the granular potentials are for spheroids. Someone would have to write a granular potential for ellipsoids, a non-trivial task, especially given that the distance-of-closest-approach for ellipsoids is a tricky, inaccurate calculation, and granular potentials are contact potentials.

Re: LJ units - there is nothing special about granular systems in LJ, so any book that discusses LJ units can be applied.
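
For example, with the scales chosen in the question (length = particle diameter d, time = sqrt(d/g), so the characteristic velocity is sqrt(g*d)), the conversion works out as in the minimal sketch below. This is plain Python with a placeholder diameter, not LAMMPS input, and it is only a dimensional-analysis check, not a statement of what LAMMPS itself requires.

import math

# Placeholder physical values -- pick your own particle size.
d = 1.0e-3          # particle diameter [m] (assumed value)
g = 9.81            # acceleration due to gravity [m/s^2]

v_phys = 0.1                  # physical velocity to represent [m/s]
v_char = math.sqrt(g * d)     # characteristic velocity = d / sqrt(d/g)
v_reduced = v_phys / v_char   # dimensionless (reduced) velocity

# Same quantity as the question's 0.1 * sqrt(d/g) / d
print(v_reduced)

Whether d and sqrt(d/g) are the right scales for a given run depends on how the remaining reduced quantities (density, stiffness, timestep) were chosen; the point is only that once the base scales are fixed, every other conversion follows from them.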

Steve

Hi fellow users,

I have been using LAMMPS for over a month now. Earlier I was running it on my Linux box with MPICH. Now I have moved to a cluster where the only MPI available is OpenMPI; I am using the latest version, OpenMPI 1.3.3.

I was able to successfully build the LAMMPS executable on this system, and it worked well on the example problems obstacle and pour. But when I try to use it for my own simulation script, it fails with some odd errors. I have included the screen output inline below, with two error segments: one caused by the velocity command and the other by the run command. All the other commands run just fine. I have also included my script file along with the errors. Can anybody help me with these errors? Note that these files worked just fine on my own Linux box. Also, I am currently working on just one node of the cluster.

Script file:

units lj
dimension 3
boundary f f f
#processors 2 2 1

atom_style hybrid granular dpd

neighbor 0.3 bin
neigh_modify delay 0
newton off

lattice fcc 1.6
region simbox block -0.5 24.5 -0.5 24.5 -0.5 9.5 units box
create_box 1 simbox
create_atoms 1 box
mass 1 1.0
group grains type 1
set group grains diameter 0.5 density 1.91

compute T grains temp/sphere
compute P all pressure T virial

thermo 10000
thermo_style custom step atoms temp vol pxx pyy pzz
thermo_modify lost ignore press P

pair_style gran/hertz/history 40000 NULL 50.0 NULL 0.5 1
pair_coeff * *

fix 1 all nve/sphere
fix 10x all wall/gran 40000 NULL 50.0 NULL 0.5 1 xplane -0.5 24.5
fix 10y all wall/gran 40000 NULL 50.0 NULL 0.5 1 yplane -0.5 24.5
fix 10z all wall/gran 40000 NULL 50.0 NULL 0.5 1 zplane -0.5 9.5

velocity grains create 0.30 38991345 temp T

dump 1 all xyz 10000 dump1.xyz
restart 1000000 random.restart
timestep 1e-6
#run 1000000

#set group grains diameter 1.0
#run 1000000

Errors with the velocity command

LAMMPS (9 Jan 2009)
Lattice spacing in x,y,z = 1.35721 1.35721 1.35721
Created orthogonal box = (-0.5 -0.5 -0.5) to (24.5 24.5 9.5)
  1 by 1 by 1 processor grid
Created 9583 atoms
9583 atoms in group grains
Setting atom values ...
  9583 settings made for diameter
  9583 settings made for density
[dacspr:13278] *** Process received signal ***
[dacspr:13278] Signal: Segmentation fault (11)
[dacspr:13278] Signal code: Address not mapped (1)
[dacspr:13278] Failing at address: 0x4
[dacspr:13278] [ 0] /lib/tls/libpthread.so.0 [0x559a90]
[dacspr:13278] [ 1] lmp_sparenode(_ZN9LAMMPS_NS8Velocity6createEiPPc+0x132)
[0x8209a0c]
[dacspr:13278] [ 2] lmp_sparenode(_ZN9LAMMPS_NS8Velocity7commandEiPPc+0x34a)
[0x8209844]
[dacspr:13278] [ 3]
lmp_sparenode(_ZN9LAMMPS_NS5Input15execute_commandEv+0x13e5) [0x8160577]
[dacspr:13278] [ 4] lmp_sparenode(_ZN9LAMMPS_NS5Input4fileEv+0x282)
[0x815eb68]
[dacspr:13278] [ 5] lmp_sparenode(main+0x5a) [0x8168308]
[dacspr:13278] [ 6] /lib/tls/libc.so.6(__libc_start_main+0xd3) [0x316de3]
[dacspr:13278] [ 7] lmp_sparenode(__gxx_personality_v0+0xa1) [0x8090829]
[dacspr:13278] *** End of error message ***

in.1 (1004 Bytes)

If you're running on a single proc, I would try building with the STUBS lib version of MPI, via Makefile.serial or the like. If that runs, then the problem is the MPI you are linking to. If it has the same problem, then you need to debug what is different about this box/compiler compared to your old one.
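
For reference, on LAMMPS versions of that era the serial build with the bundled MPI stubs went roughly as below; directory and makefile names may differ in your tree, so treat this as a sketch rather than exact commands.

cd src/STUBS
make                 # build the dummy MPI library the serial makefile links against
cd ..
make serial          # compiles via MAKE/Makefile.serial and produces lmp_serial

./lmp_serial < in.1  # rerun the same input script with the stub MPI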

Steve