Re: [SLUG] 100k files in one directory

From: Jason Copenhaver (jcopenha@typedef.org)
Date: Tue Dec 11 2001 - 20:47:11 EST


You might not have to recompile.. I'm not sure.. but I think you can do
something like echo 100000 > /proc/sys/fs/file-max

maybe.. =)
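
Something like this is what I have in mind (just a rough sketch, assuming a
2.4-ish kernel with /proc mounted; the 100000 is only an example value):

  # see what the current limit is
  cat /proc/sys/fs/file-max

  # raise it on the running kernel (as root)
  echo 100000 > /proc/sys/fs/file-max

  # to make it stick across reboots, something like this should work
  echo "fs.file-max = 100000" >> /etc/sysctl.conf
  sysctl -p

No idea whether file-max is really the limit that's biting you here, so
treat it as a guess.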

On 11 Dec 2001, Logan wrote:

> Various folks in my office dumped just shy of 100K files onto a Linux
> file server I made for backups. But they dumped them all into one
> directory. There are so many files there that ls times out.
> I think the cure would be to recompile the kernel, editing
> /usr/src/linux/include/linux/fs.h to change the NR_FILE variable from
> 8192 to 8192*10 or even 8192*100. I think this would allow things like
> ls and rm to work. Or am I barking up the wrong tree?
> Has anyone else had any experience like this? I know I can also cure
> it by making more subdirectories and forcing them to put their files in
> the subdirectories. But chiding from the Windoze users stirs me to try
> to let them leave their crap all in one big heap.
>
> Logan
>
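
If you do want to see what you would be changing before rebuilding anything,
a quick grep of the header you mention should show the current define (this
is from memory of a 2.4 tree, so double check on yours):

  grep NR_FILE /usr/src/linux/include/linux/fs.h
  # on a stock tree I'd expect a line roughly like:
  #   #define NR_FILE  8192   /* this can well be larger on a larger system */

Bumping that and rebuilding should work too; the /proc route above just
saves you the recompile if it turns out to be the same limit.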
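And if you give in and go the subdirectory route you mention, something
like this might split the existing heap up by first character (completely
untested, and /backup/dump is just a stand-in for your real path):

  cd /backup/dump                     # stand-in path, adjust to taste
  for f in *; do
      d=`echo "$f" | cut -c1`         # first character becomes the bucket
      mkdir -p "$d" && mv -- "$f" "$d/"
  done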

-- 
"Without documentation you can never know what the code was meant to do,
only what it does"
	-- Rik van Riel (#kernelnewbies)
