Bug #1841


vfscache panic when creating many links

Added by vsrinivas over 14 years ago. Updated about 14 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
Start date:
-
Due date:
-
% Done:
0%
Estimated time:
-

Description

This test program:

#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int main(void) {
	int i;
	char id[320] = {0};

	for (i = 0; i < 10000000; i++) {
		sprintf(id, "%09d", i);
		link("sin.c", id);
	}
	return 0;
}

Leads to this (on a vkernel with 64MB of RAM):

panic: vfscache: malloc limit exceeded
mp_lock = 00000000; cpuid = 0
Trace beginning at frame 0x54ee5a10
panic(ffffffff,54ee5a38,55492c08,82d43e0,40400840) at 0x80e1d33
panic(8287563,829176f,4c8b7339,0,55492c08) at 0x80e1d33
kmalloc(a,82d43e0,2,0,54ee5bec) at 0x80df67c
cache_unlock(0,0,52b48d00,52ba4b00,40400000) at 0x812c274
cache_nlookup(54ee5bec,54ee5af4,54ee5bec,54ee5bec,40400000) at 0x81302ed
nlookup(54ee5bec,5503e4c8,54ee5c24,52a45540,5503e4c8) at 0x8138665
kern_link(54ee5c24,54ee5bec,552881d8,52ba4b00,526dc698) at 0x8141aa6
sys_link(54ee5c94,0,0,82c46cc,292) at 0x8147475
syscall2(54ee5d40,52a1dd40,0,0,54ee5d38) at 0x8265d6d
user_trap(54ee5d40,54e8bb88,82667bd,0,0) at 0x82660af
go_user(54ee5d38,0,0,7b,0) at 0x826663e
Debugger("panic")

CPU0 stopping CPUs: 0x00000000
stopped
Stopped at 0x826352d: movb $0,0x83f6194
db>

-- vs

Actions #1

Updated by dillon over 14 years ago

: link("sin.c", id);

There is no easy fix for this; the current vfs cache kinda depends
on the vnode limit to indirectly control its size, and hard links
defeat the vnode limit (since one can have many hard links to the
same vnode).
-Matt
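
To see why hard links sidestep the vnode limit: each link is only another name for the same inode/vnode, so the test above piles namecache entries onto a single vnode that never comes under vnode pressure. A small, purely illustrative check (the link name "sin-link" is made up here; "sin.c" is the file used by the test program):

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Illustrative only: a hard link is just another name for the same inode,
 * so the millions of links made by the test add namecache entries without
 * adding vnodes for the vnode limit to push back on. */
int main(void)
{
	struct stat a, b;

	if (link("sin.c", "sin-link") != 0) {
		perror("link");
		return 1;
	}
	if (stat("sin.c", &a) != 0 || stat("sin-link", &b) != 0) {
		perror("stat");
		return 1;
	}
	printf("same inode: %s, link count now %u\n",
	    a.st_ino == b.st_ino ? "yes" : "no",
	    (unsigned)a.st_nlink);
	unlink("sin-link");
	return 0;
}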
Actions #2

Updated by vsrinivas over 14 years ago

In the cache_nlookup() code, when we cache_alloc() a namecache structure:
	if (new_ncp == NULL) {
		spin_unlock(&nchpp->spin);
		new_ncp = cache_alloc(nlc->nlc_namelen);
		if (nlc->nlc_namelen) {
			bcopy(nlc->nlc_nameptr, new_ncp->nc_name,
			      nlc->nlc_namelen);
			new_ncp->nc_name[nlc->nlc_namelen] = 0;
		}
		goto restart;
	}

We restart the lookup after the allocation; the restarted lookup is safe
against a null namecache ptr, since the straight-line path starts with a null ncp:
	new_ncp = NULL;
	nchpp = NCHHASH;
restart:
	spin_lock(&nchpp->spin);

If we allow cache_alloc to return null, we could modify the lookup path to call
cache_cleanneg() to try to get back some space, at least from negative entries,
and then restart the lookup, at least a few times. AFAICS, there is no way to
clean out positive entries? This doesn't solve the problem, but it does put it
off somewhat.
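
To put that in concrete terms, here is a purely illustrative user-space sketch of the bounded-retry idea; none of the names below come from vfs_cache.c, it just models a fixed-size zone where a failed allocation reclaims expendable (negative-entry-like) slots and retries a few times before the lookup gives up:

#include <stdio.h>
#include <stdlib.h>

#define ZONE_LIMIT   8    /* pretend the zone holds 8 entries total */
#define MAX_RETRIES  3

static int zone_used = 5;         /* 5 slots already taken by negative entries */
static int negative_entries = 5;  /* how many of those we may reclaim */

static void *zone_alloc(void)
{
	if (zone_used >= ZONE_LIMIT)
		return NULL;          /* report failure instead of panicking */
	zone_used++;
	return malloc(1);             /* leaked on purpose; this is only a counter demo */
}

static int clean_negative(void)
{
	if (negative_entries == 0)
		return 0;             /* nothing reclaimable left */
	negative_entries--;
	zone_used--;                  /* reclaiming frees a slot in the zone */
	return 1;
}

int main(void)
{
	int i, attempt;

	for (i = 0; i < 12; i++) {
		void *ncp = NULL;

		for (attempt = 0; attempt <= MAX_RETRIES && ncp == NULL; attempt++) {
			ncp = zone_alloc();
			if (ncp == NULL && !clean_negative())
				break;        /* zone full and nothing to reclaim */
		}
		printf("lookup %2d: %s\n", i, ncp ? "allocated" : "failed");
	}
	return 0;
}

Once the reclaimable entries run out, the lookups start failing instead of panicking, which matches the point above that this only puts the problem off.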

More towards a solution, we could return an unresolved cache structure to the
caller if we cannot allocate an ncp; the callers of cache_nlookup() (in particular
nlookup() itself) seem to be safe against lookup failures; they just retry the
search...

Actions #3

Updated by dillon over 14 years ago

:If we allow cache_alloc to return null, we could modify the lookup path to call
:cache_cleanneg() to try to get back some space, at least from negative entries,

Negative hits aren't the problem. Having all the ncp's associated with
just a few vnodes means the normal vnode pressure will not clear them
out, because the vnodes are not under pressure.

We don't want the allocation in the loop to fail; we want it to
clean out some ncps prior to entering the loop, I think.
-Matt
Actions #4

Updated by vsrinivas over 14 years ago

Okay - presumably we only want to clean out ncps when there is pressure on the
zone without corresponding vnode pressure. Would it be worth tracking the
link : vnode ratio and cleaning namecache entries before entering the loop based
on that ratio? Are there currently any interfaces to ask the namecache to clean
itself? (vnlrureclaim() seems brutal)...

Currently ncp->name is also allocated from the VFSCACHE zone, along with the
namecache hashes; moving them to private zones would reduce load on the VFSCACHE
zone, and those zones would be implicitly limited by the vnode limit and
whatever limits already exist on ncps...

Actions #5

Updated by vsrinivas over 14 years ago

I've started working on a fix for this in the vfscache branch on my repository on
leaf...

Actions #6

Updated by vsrinivas about 14 years ago

Fixed by commit 9e10d70bff3c00c19a8b841dccbcd6aa29100793 to master; cleans out
positive and negative namecache entries via cache_hysteresis even on namecache
lookups.
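
For readers unfamiliar with the hysteresis approach, a purely illustrative user-space sketch (none of the names below come from the actual commit): once the entry count crosses a high-water mark, entries are reclaimed down to a low-water mark before the next lookup proceeds, so lookups alone can no longer drive the zone to its malloc limit.

#include <stdio.h>

/* Illustrative hysteresis sketch, not the DragonFly implementation:
 * reclaim entries down to a low-water mark whenever the count crosses
 * the high-water mark, before servicing the next lookup. */

#define NC_HIGH_WATER  1000
#define NC_LOW_WATER    900

static int nc_count = 0;   /* stand-in for the number of cached entries */

static void reclaim_entries(int target)
{
	while (nc_count > target)
		nc_count--;        /* a real kernel would free ncps here */
}

static void cache_hysteresis_sketch(void)
{
	if (nc_count > NC_HIGH_WATER)
		reclaim_entries(NC_LOW_WATER);
}

static void lookup_sketch(void)
{
	cache_hysteresis_sketch();  /* run the check on every lookup */
	nc_count++;                 /* each lookup may add one new entry */
}

int main(void)
{
	int i;

	for (i = 0; i < 10000; i++)
		lookup_sketch();
	printf("entries after 10000 lookups: %d\n", nc_count);
	return 0;
}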

What could still be looked at:
- measure the impact on nc hit rate
- see if it's still possible to exhaust the zone, given huge filenames and lots
of links.
