This is the mail archive of the cygwin mailing list for the Cygwin project.
"du -b --files0-from=-" running out of memory
- From: Barry Kelly <bkelly dot ie at gmail dot com>
- To: Cygwin Mailing List <cygwin at cygwin dot com>
- Date: Sun, 23 Nov 2008 13:24:03 +0000
- Subject: "du -b --files0-from=-" running out of memory
I have a problem with du running out of memory.
I'm feeding it a list of null-separated file names via standard input
to a command line that looks like:
du -b --files0-from=-
The problem is that when du is run in this way, it leaks memory like a
sieve. I feed it about 4.7 million paths but eventually it falls over as
it hits the 32-bit address space limit.
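For reference, a minimal sketch of the kind of pipeline described above: a NUL-separated file list streamed into du on stdin. The temp directory and file names here are only illustrative (the original run involved about 4.7 million real paths), and GNU coreutils du is assumed for -b and --files0-from.

```shell
# Build a tiny tree, then feed du a NUL-separated name list on stdin.
tmp=$(mktemp -d)
printf 'hello' > "$tmp/a.txt"     # 5 bytes
printf 'world!' > "$tmp/b.txt"    # 6 bytes
# du -b prints the apparent size in bytes of each listed file.
find "$tmp" -type f -print0 | du -b --files0-from=-
rm -rf "$tmp"
```

With the full 4.7-million-path input, this is the shape of command that exhausts the 32-bit address space.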
Now, I can understand why a du -c might want to exclude excess hard
links to files, but that at most requires a hash table for device &
inode pairs - it's hard to see why 4.7 million entries would cause OOM -
and in any case, I'm not asking for a grand total.
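To illustrate the point about hard links: deduplicating them only requires keying each file by its (device, inode) pair, which is cheap even for millions of entries. A rough sketch of that idea in the shell (GNU find assumed for -printf; the hard link here is purely illustrative):

```shell
# Key each file by device:inode so hard-linked entries count once.
tmp=$(mktemp -d)
printf 'hello' > "$tmp/a.txt"       # 5 bytes
ln "$tmp/a.txt" "$tmp/b_link.txt"   # second name, same dev:inode
# sort -u on the dev:inode key stands in for du's hash table;
# the duplicate directory entry is dropped before summing sizes.
find "$tmp" -type f -printf '%D:%i %s\n' | sort -u -k1,1 \
  | awk '{s += $2} END {print s}'
rm -rf "$tmp"
```

Each table entry is just two integers, so 4.7 million of them should fit in a few hundred megabytes at most, nowhere near the 32-bit limit.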
Is there any alternative other than running e.g. xargs -0 du -b, possibly
with a high -n <arg> to xargs so that each du invocation's leaked memory
stays bounded?
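The xargs workaround mentioned above would look roughly like this (file names and the -n value are illustrative): xargs restarts du after every -n names, so whatever du leaks is reclaimed when each batch's process exits.

```shell
tmp=$(mktemp -d)
printf 'hello' > "$tmp/a.txt"     # 5 bytes
printf 'world!' > "$tmp/b.txt"    # 6 bytes
# -n caps the arguments per du invocation; in real use a large value
# (e.g. 100000) keeps the number of du restarts small.
find "$tmp" -type f -print0 | xargs -0 -n 2 du -b
rm -rf "$tmp"
```

One caveat with this approach: options that aggregate across the whole input, such as a grand total via -c, would be computed per batch rather than once, though that doesn't matter here since no total is being requested.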
Unsubscribe info: http://cygwin.com/ml/#unsubscribe-simple
Problem reports: http://cygwin.com/problems.html