[lammps-users] about ERROR on proc 0: Cannot open fix ave/time file or Cannot open dump file

Hi, all,

I used fix npt to run an MD simulation with the following settings:

Dangerous builds = 0
ERROR on proc 0: Cannot open fix ave/time file total920.rdf
***************************************************************

What confuses me is why errors such as "Cannot open
fix ave/time file" or "Cannot open dump file" occur in the middle of the
simulation. Please give me your suggestions or comments. Thanks!

it is really hard to make suggestions when you disclose
only a minimal part of your input, and what you show looks
largely irrelevant to your issue, too.

there are a number of possible reasons (quick checks for each are sketched after this list):
- you are running out of disk space
- you are running out of inodes (directory entries)
- you are running out of disk quota
- you are running out of open file handles.
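
a quick way to check each of these from the shell (sketches only; run them
in the directory where lammps writes its output, and note that the 'quota'
tool may not be installed or enabled on every machine):

$ df -h .      # free disk space on the filesystem you are writing to
$ df -i .      # free inodes on that filesystem
$ quota -s     # your disk quota, if quotas are enabled
$ ulimit -n    # max number of open file descriptors per process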

cheers,
     axel.

Dear all,

sorry if I'm necro-bumping this (and for the naive question), but I'm
experiencing the same "Cannot open dump file" problem that Xiong-Jun
reported in his message of 5/16/10 on this mailing list: I am running
out of open file handles.

Is there a way to dump a lot of data files without encountering this problem?
I have put an undump command for each dump instruction in my
script (should that help? It doesn't seem to).
Should I just enlarge the number of open file descriptors allowed on my machine?

The script that generates this problem is fairly convoluted (includes, loops, and
so forth), so I could try to strip it down, but it would be nice if someone
were already aware of the issue, or found the questions above trivial enough,
that I don't need to :-)

Cheers,

Davide

As I recall, the default Linux limit on simultaneous open files for
a process is around 100. It's easy to test, just
write a simple program that opens files until
you get an error.

So if your script is doing this, you have no choice
but to modify the logic. Doing undumps should
help since it closes the file.
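
For example, the pattern could look something like this (just a sketch;
the loop count, dump filenames, and run length are placeholders):

variable i loop 10
label dumploop
dump d1 all atom 100 dump.$i.lammpstrj
run 10000
undump d1             # closes the file before the next dump is opened
next i
jump SELF dumploop

This way only one dump file is open at any time.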

I believe the only way to boost the Linux limit is
to rebuild the (OS) kernel, but someone can correct
me if that's wrong.

Steve

Dear Steve,

thanks!

> As I recall, the default Linux limit on simultaneous open files for
> a process is around 100. It's easy to test, just
> write a simple program that opens files until
> you get an error.

If I got it right, the number can be checked by running
$ ulimit -n

> So if your script is doing this, you have no choice
> but to modify the logic. Doing undumps should
> help since it closes the file.
> I believe the only way to boost the Linux limit is
> to rebuild the (OS) kernel, but someone can correct
> me if that's wrong.

As far as I understand, one can change this by editing
/etc/security/limits.conf

(source: http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/)
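
For the record, the lines I added look roughly like this (a sketch: the
username and the numbers are just examples; the new limits only take
effect from the next login session):

davide    soft    nofile    4096
davide    hard    nofile    8192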

I did it and it seems to work fine, even though it's just a quick and
dirty workaround.

Thanks again for your prompt response and have a nice day!


Davide

> As I recall, the default Linux limit on simultaneous open files for
> a process is around 100. It's easy to test, just
> write a simple program that opens files until
> you get an error.

the default for max number of open files
per user should be 1024, but sysadmins
on larger installations tend to reduce that.

check out: ulimit -n

> So if your script is doing this, you have no choice
> but to modify the logic. Doing undumps should
> help since it closes the file.

> I believe the only way to boost the Linux limit is
> to rebuild the (OS) kernel, but someone can correct
> me if that's wrong.

a lot of these things can be changed on linux
at run time through the sysctl command. i don't
know for sure about the number of files. one has
to keep in mind that there is a global max-open-files
limit and one per user.
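
for example (read-only checks; the exact keys may differ between
kernels/distributions):

$ cat /proc/sys/fs/file-max   # system-wide limit on open file handles
$ sysctl fs.file-max          # the same value via the sysctl interface
$ ulimit -Sn                  # per-process soft limit in the current shell
$ ulimit -Hn                  # per-process hard limit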

cheers,
     axel.