Bug #1806

DFBSD 2.7.3 - mbufs exhausted while rsyncing to NFS

Added by tuxillo almost 7 years ago. Updated almost 7 years ago.

I have two virtual machines running DFBSD. One runs under KVM (512 MB RAM) and
the other under VMware (1024 MB).

The KVM machine is the NFS server, which is exporting /usr like this:
/usr -alldirs -maproot=root: -network ....

From the VMware machine I mount it and start copying the repo using rsync:

# rsync -av --progress /usr/src /mnt/target/usr/

After a while the following warning appears on the KVM machine (the NFS server):
Warning, objcache(mbuf): Exhausted!

# netstat -m
9056/9056 mbufs in use (current/max):
134/4528 mbuf clusters in use (current/max)
9190 mbufs and mbuf clusters allocated to data
2532 Kbytes allocated to network (22% of mb_map in use)
163 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

On the client side the copy stops:

24084480 10% 4.10MB/s 0:00:48
rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken
pipe (32)
rsync: write failed on
RPC struct is bad (72)
rsync error: error in file IO (code 11) at receiver.c(302) [receiver=3.0.7]
[sender] io timeout after 30 seconds -- exiting
rsync error: timeout in data send/receive (code 30) at io.c(140) [sender=3.0.7]
[vmware] /usr/src>

And I can't even ssh into the KVM machine from outside:
% ssh
's password:
Timeout, server not responding.


#1 Updated by dillon almost 7 years ago

Ok, this should be fixed now. nfs_realign() was calling m_copyback(),
which was allocating the mbuf chain using normal mbufs instead of
cluster mbufs, causing the normal mbuf pool to get blown out on machines
with low amounts of memory.


#2 Updated by tuxillo almost 7 years ago

Hi Matt,

As we agreed, I've uploaded the dump files of the panic that occurred on the
NFS client side. They are in my home dir: ~/crash/1806*.1

Antonio Huete
