Bug #235

ffs_blkfree panic + lockmgr panic

Added by corecode almost 18 years ago. Updated over 17 years ago.

Status: Closed
Priority: Normal
Assignee: -
Category: -
Target version: -
Start date: -
Due date: -
% Done: 0%
Estimated time: -

Description

hey,

this is the Nth time now i'm running into free frag or free block panics. either vinum is doing something very wrong or it is just interacting with ffs in a bad way. anyways, here it comes:

Unread portion of the kernel message buffer:
dev = #ad/0x20003, block = 12332, fs = /var
panic: ffs_blkfree: freeing free frag

syncing disks... Warning: vfsync_bp skipping dirty buffer 0xc244b200
13 Warning: vfsync_bp skipping dirty buffer 0xc244b200
1 Warning: vfsync_bp skipping dirty buffer 0xc244b200
1 Warning: vfsync_bp skipping dirty buffer 0xc244b200
1 Warning: vfsync_bp skipping dirty buffer 0xc244b200
1 Warning: vfsync_bp skipping dirty buffer 0xc244b200
1 Warning: vfsync_bp skipping dirty buffer 0xc244b200
1 Warning: vfsync_bp skipping dirty buffer 0xc244b200
1 panic: lockmgr: locking against myself

#2 0xc0178de5 in panic (fmt=0xc02cb0e8 "lockmgr: locking against myself")
at /usr/build/src/sys/kern/kern_shutdown.c:684
#3 0xc016ef15 in lockmgr (lkp=0xc244b318, flags=33554466)
at /usr/build/src/sys/kern/kern_lock.c:353
#4 0xc01b1b29 in getblk (vp=0xce304600, loffset=Unhandled dwarf expression opcode 0x93
) at buf2.h:92
#5 0xc01af720 in bread (vp=0xce304600, loffset=Unhandled dwarf expression opcode 0x93
) at /usr/build/src/sys/kern/vfs_bio.c:617
#6 0xc022540c in ffs_freefile (pvp=0x0, ino=21339, mode=33188)
at /usr/build/src/sys/vfs/ufs/ffs_alloc.c:1686
#7 0xc022ba7f in handle_workitem_freefile (freefile=0xc21e29c0)
at /usr/build/src/sys/vfs/ufs/ffs_softdep.c:3039
#8 0xc0228454 in process_worklist_item (matchmnt=0x0, flags=0)
at /usr/build/src/sys/vfs/ufs/ffs_softdep.c:741
#9 0xc022821b in softdep_process_worklist (matchmnt=0x0)
at /usr/build/src/sys/vfs/ufs/ffs_softdep.c:626
#10 0xc017869e in boot (howto=256) at /usr/build/src/sys/kern/kern_shutdown.c:295
#11 0xc0178de5 in panic (fmt=0xc02d5fe8 "ffs_blkfree: freeing free frag")
at /usr/build/src/sys/kern/kern_shutdown.c:684
#12 0xc0224ea6 in ffs_blkfree (ip=0xd2557c00, bno=12328, size=4)
at /usr/build/src/sys/vfs/ufs/ffs_alloc.c:1572
#13 0xc022a87f in handle_workitem_freeblocks (freeblks=0xc2327308)
at /usr/build/src/sys/vfs/ufs/ffs_softdep.c:2268
#14 0xc0228422 in process_worklist_item (matchmnt=0x0, flags=0)
at /usr/build/src/sys/vfs/ufs/ffs_softdep.c:727
#15 0xc022821b in softdep_process_worklist (matchmnt=0x0)
at /usr/build/src/sys/vfs/ufs/ffs_softdep.c:626
#16 0xc01c441c in sched_sync () at /usr/build/src/sys/kern/vfs_sync.c:244
#17 0xc016dd55 in kthread_create_stk (func=0, arg=0x0, tdp=0xce2ec500, stksize=0, fmt=0x0)
at /usr/build/src/sys/kern/kern_kthread.c:102

for some reason it is always /dev

cheers
simon
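
The second panic in the trace is a follow-on effect: the syncer thread panics in ffs_blkfree() while it presumably still holds the cylinder-group buffer locked, boot() then drains the softdep worklist once more, and ffs_freefile() tries to bread() a buffer that the very same thread already owns. A simplified, hypothetical illustration of the recursion check that fires (this is not the actual DragonFly kern_lock.c code; the struct, field names, and flag value are made up for the example):

/*
 * Hypothetical illustration only.  It models the situation in the trace:
 * the thread that paniced in ffs_blkfree() still owns the cg buffer's
 * exclusive lock, and the shutdown-time softdep run requests the same
 * lock again without LK_CANRECURSE set.
 */
#include <stdio.h>
#include <stdlib.h>

#define LK_CANRECURSE   0x0001          /* value is illustrative */

struct lock_sketch {
        void    *lk_holder;             /* owning thread, or NULL */
};

static void
lockmgr_sketch(struct lock_sketch *lkp, int flags, void *curthread)
{
        if (lkp->lk_holder == curthread && (flags & LK_CANRECURSE) == 0) {
                fprintf(stderr, "panic: lockmgr: locking against myself\n");
                abort();
        }
        lkp->lk_holder = curthread;     /* normal acquisition path */
}

int
main(void)
{
        struct lock_sketch buf_lock = { NULL };
        void *syncer = &buf_lock;       /* stand-in for the syncer thread */

        lockmgr_sketch(&buf_lock, 0, syncer);   /* first bread(): fine */
        lockmgr_sketch(&buf_lock, 0, syncer);   /* second bread(): "panics" */
        return (0);
}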

Actions #1

Updated by dillon almost 18 years ago

:hey,
:
:this is the Nth time now i'm running into free frag or free block panics.
:either vinum is doing something very wrong or it is just interacting with
:ffs in a bad way. anyways, here it comes:

It has got to be vinum.  This particular bug only occurs very
occasionally 'in the wild'.
Is it possible to narrow it down to a particular vinum
configuration? e.g. mirroring vs RAID-5 vs concat ?
-Matt
Matthew Dillon
Actions #2

Updated by corecode almost 18 years ago

Matthew Dillon wrote:

:hey,
:
:this is the Nth time now i'm running into free frag or free block panics.
:either vinum is doing something very wrong or it is just interacting with
:ffs in a bad way. anyways, here it comes:

It has got to be vinum. This particular bug only occurs very
occasionally 'in the wild'.

Is it possible to narrow it down to a particular vinum
configuration? e.g. mirroring vs RAID-5 vs concat ?

there is no fs on vinum yet. i was just running a raid-5 init and a mirroring rebuild (but on a different hdd!)

cheers
simon

Actions #3

Updated by dillon almost 18 years ago

:there is no fs on vinum yet. i was just running a raid-5 init and a
:mirroring rebuild (but on a different hdd!)
:
:cheers
: simon

Also, is the disk vinum is messing around with on the same controller
or cable as the one that blew up?
-Matt
Actions #4

Updated by dillon almost 18 years ago

:..
:> occasionally 'in the wild'.
:>
:> Is it possible to narrow it down to a particular vinum
:> configuration? e.g. mirroring vs RAID-5 vs concat ?
:
:there is no fs on vinum yet. i was just running a raid-5 init and a
:mirroring rebuild (but on a different hdd!)
:
:cheers
: simon

Ok, try this... see if it occurs without vinum, but with other nominal
disk loading.
It could be that vinum is screwing up the geteblk or getpbuf buffers it
obtains, and that is causing other filesystems to blow up.
-Matt
Actions #5

Updated by corecode almost 18 years ago

Matthew Dillon wrote:

:there is no fs on vinum yet. i was just running a raid-5 init and a
:mirroring rebuild (but on a different hdd!)
Also, is the disk vinum is messing around with on the same controller
or cable as the one that blew up?

no, one is ad0 on the ata part of the chipset, the other ad4 on the sata part.

cheers
simon

Actions #6

Updated by corecode almost 18 years ago

Matthew Dillon wrote:

:> occasionally 'in the wild'.
:>
:> Is it possible to narrow it down to a particular vinum
:> configuration? e.g. mirroring vs RAID-5 vs concat ?
:
:there is no fs on vinum yet. i was just running a raid-5 init and a
:mirroring rebuild (but on a different hdd!)
Ok, try this... see if it occurs without vinum, but with other nominal
disk loading.

It could be that vinum is screwing up the geteblk or getpbuf buffers it
obtains, and that is causing other filesystems to blow up.

I suspect that vinum drains lots of blks and makes other parts of the system flush out buffers sooner, or something like that. is that possible?

cheers
simon

Actions #7

Updated by dillon almost 18 years ago

:I suspect that vinum drains lots of blks and makes other parts of the
:system flush out buffers sooner, or something like that. is that possible?
:
:cheers
: simon

All things are possible.  vinum is probably doing the I/O's for
each disk in parallel. What I like about this bug is that you seem
to be able to reproduce it in fairly short order.
Could you post your vinum config and the vinum commands you are
using to init it ?
-Matt
Actions #8

Updated by corecode almost 18 years ago

Matthew Dillon wrote:

:I suspect that vinum drains lots of blks and makes other parts of the
:system flush out buffers sooner, or something like that. is that possible?
:
:cheers
: simon

All things are possible. vinum is probably doing the I/O's for
each disk in parallel. What I like about this bug is that you seem
to be able to reproduce it in fairly short order.

too bad it is my desktop system with 1GB of RAM... :)

Could you post your vinum config and the vinum commands you are
using to init it ?

sure, here you go:

drive v0 device /dev/ad4s1l
drive v1 device /dev/ad4s1m
drive v2 device /dev/ad4s1n
drive v3 device /dev/ad4s1o

volume test1
  plex org striped 256s
    sd drive v0 len 3g
    sd drive v1 len 3g
  plex org striped 256s
    sd drive v2 len 3g
    sd drive v3 len 3g

volume test2
  plex org concat
    sd drive v0 len 3g
    sd drive v1 len 3g

volume test3
  plex org raid5 256s
    sd drive v0 len 3g
    sd drive v1 len 3g
    sd drive v2 len 3g
    sd drive v3 len 3g

vinum commands:
create vinum.conf
init test3.p0
start test1.p1

(lots of hdd traffic now)

cheers
simon

Actions #9

Updated by dillon almost 18 years ago

So far I haven't had any luck reproducing the blkfree panic. I'm doing a
parallel buildworld and vinum init in a loop.

I did find at least one issue with the patch set... you have to use
getpbuf() rather than trypbuf(). You can't have temporary resource
failures in the places where trypbuf() is being used. This is unrelated
to the bug, though.
If you could get a crash dump of the blkfree panic onto leaf I'd like 
to take a look at it.
-Matt
Actions #10

Updated by corecode almost 18 years ago

Matthew Dillon wrote:

So far I haven't had any luck reproducing the blkfree panic. I'm doing a
parallel buildworld and vinum init in a loop.

I could not reproduce it either. I remembered something else:

Usually I did a
sync
mount -a -u -o ro

before fiddling with vinum, but /dev never re-mounted ro because of open files. The other file systems did remount, but nevertheless showed inconsistencies in a subsequent fsck -f. I don't know if this might be connected.

I did find at least one issue with the patch set... you have to use
getpbuf() rather than trypbuf(). You can't have temporary resource
failures in the places where trypbuf() is being used. This is unrelated
to the bug, though.

I am not sure that I understand what you mean: The original (and now used) code checks the result of geteblk(), but this can't fail anyways. So I decided to go with trypbuf() instead of getpbuf() so that those functions (all called from ioctl) could fail and not block.

If you could get a crash dump of the blkfree panic onto leaf I'd like
to take a look at it.

Will do, but it will take some time, as it is >200MB compressed.

cheers
simon

Actions #11

Updated by dillon almost 18 years ago

:..
:> getpbuf() rather than trypbuf(). You can't have temporary resource
:> failures in the places where trypbuf() is being used. This is unrelated
:> to the bug, though.
:
:I am not sure that I understand what you mean: The original (and now used)
:code checks the result of geteblk(), but this can't fail anyways. So
:I decided to go with trypbuf() instead of getpbuf() so that those functions
:(all called from ioctl) could fail and not block.

geteblk() isn't supposed to fail under normal operating conditions.
Failures are considered, well, a fatal error.
trypbuf() CAN fail under normal operating conditions, just like
malloc(M_NOWAIT) can fail under normal operating conditions. Such
failures are not a fatal error, but vinum isn't designed to treat them
as non-fatal, so what you will end up with will be a bunch of odd vinum
failures that it doesn't recover from under certain load conditions.
-Matt

:
:> If you could get a crash dump of the blkfree panic onto leaf I'd like
:> to take a look at it.
:
:Will do, but it will take some time, as it is >200MB compressed.
:
:cheers
: simon
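
To make the distinction Matt draws above concrete, here is a small userland simulation (a sketch only: the real getpbuf()/trypbuf() live in the kernel and their exact signatures differ, and the stub functions and pbuf_freecnt counter below are invented for illustration):

/*
 * Userland illustration of getpbuf() vs. trypbuf().  Not the kernel API:
 * the stubs and pbuf_freecnt are made up for this example.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct buf { int placeholder; };

static int pbuf_freecnt = 0;            /* pretend the pool is exhausted */

/* getpbuf(): sleeps until a buffer is available; never returns NULL. */
static struct buf *
getpbuf_sketch(void)
{
        /* the real code would tsleep() here until pbuf_freecnt > 0 */
        return calloc(1, sizeof(struct buf));
}

/* trypbuf(): returns NULL right away on a temporary shortage. */
static struct buf *
trypbuf_sketch(void)
{
        if (pbuf_freecnt <= 0)
                return NULL;
        pbuf_freecnt--;
        return calloc(1, sizeof(struct buf));
}

int
main(void)
{
        struct buf *bp;

        bp = getpbuf_sketch();          /* caller needs no error path */
        free(bp);

        bp = trypbuf_sketch();          /* caller MUST handle failure, */
        if (bp == NULL) {               /* like malloc(M_NOWAIT) callers do */
                fprintf(stderr, "trypbuf failed: retry or defer the request\n");
                return (EAGAIN);
        }
        free(bp);
        return (0);
}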

Actions #12

Updated by dillon almost 18 years ago

:Usually I did a
: sync
: mount -a -u -o ro
:
:before fiddling with vinum, but /dev never re-mounted ro because of open
:files. The other file systems did remount, but nevertheless showed
:inconsistencies in a subsequent fsck -f. I don't know if this might be
:connected.

It could be.  Going from RW -> RO is not a direction that is tested
often.
-Matt