Bug #1015

closed

File system panic on recent HEAD

Added by matthias over 16 years ago. Updated almost 16 years ago.

Status: Closed
Priority: Normal
Assignee: -
Category: -
Target version: -
Start date:
Due date:
% Done: 0%
Estimated time:

Description

Hi,

one of my boxes building pkgsrc HEAD packages on DF HEAD crashed last
night. Seems like a panic related to the file system. IIRC I saw the
panic before (about 2-3 times), but I was too busy to report it :/

Last system messages:

[...]
linux: syscall madvise is obsoleted or not implemented (pid=66339)
linux: syscall madvise is obsoleted or not implemented (pid=66339)
linux: syscall madvise is obsoleted or not implemented (pid=66339)
linux: syscall madvise is obsoleted or not implemented (pid=66339)

dev = #ad/0x1e130057, ino = 14540, fs = /build
panic: ffs_vfree: freeing free inode
mp_lock = 00000000; cpuid = 0
boot() called on cpu#0

syncing disks... Warning: vfsync_bp skipping dirty buffer
0xc455phd0\240\^A\^F\M^Xr\M^^{\M-:4\M--\^E\^Bo\^TB\M^U.M\2409\M^N+\M-9\M-P\M-
3\M^]{L\M-r9w\M-W\M^S\M-T\^Q\M^Z:\M^P'?\M-AM\^X\M-`\M^W}\M^_\^D\^YL7A\M-SM\M^\\^P\M^K\M^W\M-(?\M-}\M-z\M-@\M-<\M^]{w.\^UI\M-Z\^ZdG\M-7\
M-"z\M-~\M-~\M-Y^\M^Kxa\M^^{0-\M^]\M-k\M^H-\M-S\^Q\M^Z\^V\M^P'?\M-AM\M-P\M-?}1-\M^]\M-k\M^H-\M-S\^Q\M^Z\^V\M^P'?\M-AM\M^X\M-?\M-,}4
77 Warning: vfsync_bp skipping dirty buffer 0xc4554784
76 Warning: vfsync_bp skipping dirty buffer 0xc4554784
[...]

Writing a core dump to disk failed, so I cannot provide more
information. The machine is running HEAD:

DragonFly pc12909 1.13.0-DEVELOPMENT DragonFly 1.13.0-DEVELOPMENT #6:
Mon May 19 10:35:40 CEST 2008
root@pc12909:/usr/obj/usr/src/sys/PC12909 i386

Regards

Matthias
Actions #1

Updated by dillon over 16 years ago

:Hi,
:
:one of my boxes building pkgsrc HEAD packages on DF HEAD crashed last
:night. Seems like a panic related to the file system. IIRC I saw the
:panic before (about 2-3 times), but I was to busy to report it :/
:
:Last system messages:
:
:[...]
:linux: syscall madvise is obsoleted or not implemented (pid=66339)
:linux: syscall madvise is obsoleted or not implemented (pid=66339)
:linux: syscall madvise is obsoleted or not implemented (pid=66339)
:linux: syscall madvise is obsoleted or not implemented (pid=66339)
:
:dev = #ad/0x1e130057, ino = 14540, fs = /build
:panic: ffs_vfree: freeing free inode
:mp_lock = 00000000; cpuid = 0
:boot() called on cpu#0
:
:syncing disks... Warning: vfsync_bp skipping dirty buffer
:0xc455phd0\240\^A\^F\M^Xr\M^^{\M-:4\M--\^E\^Bo\^TB\M^U.M\2409\M^N+\M-9\M-P\M-
:3\M^]{L\M-r9w\M-W\M^S\M-T\^Q\M^Z:\M^P'?\M-AM\^X\M-`\M^W}\M^_\^D\^YL7A\M-SM\M^\\^P\M^K\M^W\M-(?\M-}\M-z\M-@\M-<\M^]{w.\^UI\M-Z\^ZdG\M-7\
:M-"z\M-~\M-~\M-Y^\M^Kxa\M^^{0-\M^]\M-k\M^H-\M-S\^Q\M^Z\^V\M^P'?\M-AM\M-P\M-?}1-\M^]\M-k\M^H-\M-S\^Q\M^Z\^V\M^P'?\M-AM\M^X\M-?\M-,}4
:77 Warning: vfsync_bp skipping dirty buffer 0xc4554784
:76 Warning: vfsync_bp skipping dirty buffer 0xc4554784
:[...]
:
:Writing a core dump to disk failed, so I cannot provide more
:information. The machine is running HEAD:
:
:DragonFly pc12909 1.13.0-DEVELOPMENT DragonFly 1.13.0-DEVELOPMENT #6:
:Mon May 19 10:35:40 CEST 2008
:root@pc12909:/usr/obj/usr/src/sys/PC12909 i386
:
:Regards
:
: Matthias

Matthias, did that partition (/build) at any point in the past fill up
to 100%?
There is a bug in UFS, fixed in FreeBSD but not yet in DragonFly, that
can cause filesystem corruption when the filesystem fills up to 100%
and runs out of space.
In any case, what I recommend is that you umount the partition and do
another manual fsck of it, then be careful not to fill it up and see
if the problem recurs.
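A minimal sketch of that procedure, assuming the device and mount point
reported later in this thread (/dev/ad10s2 on /build) and an existing
fstab entry for /build:

umount /build        # take the filesystem offline
fsck /dev/ad10s2     # full check/repair pass on the raw device
mount /build         # remount once the check comes back clean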
-Matt
Matthew Dillon
Actions #2

Updated by matthias over 16 years ago

Hi,

No. The partition has enough space left:

/dev/ad10s2 256G 33G 203G 14% /build

Manual fsck did not help. I ran it several times before and now again.
Sadly the machine crashed again last night, and again with no core dump:

dev = #ad/0x1e130057, block = 30408, fs = /build
panic: ffs_blkfree: freeing free block
mp_lock = 00000000; cpuid = 0
boot() called on cpu#0

syncing disks... Warning: vfsync_bp skipping dirty buffer 0xc4491550
640 Warning: vfsync_bp skipping dirty buffer 0xc4491550
619 Warning: vfsync_bp skipping dirty buffer 0xc4491550
[...]

Regards

Matthias
Actions #3

Updated by dillon over 16 years ago

:Hi,
:
:* Matthew Dillon wrote:
:>
:> Matthias, did that partition (/build) at any point in the past fill up
:> to 100% ?
:
:No. The partition has enough space left:
:
:/dev/ad10s2 256G 33G 203G 14% /build
:
:> In anycase, what I recommend is that you umount the partition and do
:> another manual fsck of it, then be careful not to fill it up and see
:> if the problem reoccurs.
:
:Manual fsck did not help. I did it several times before and now again.
:Sadly the machine crashed again last night. And again no coredump:
:
:dev = #ad/0x1e130057, block = 30408, fs = /build
:panic: ffs_blkfree: freeing free block
:mp_lock = 00000000; cpuid = 0
:boot() called on cpu#0
:
:syncing disks... Warning: vfsync_bp skipping dirty buffer 0xc4491550
:640 Warning: vfsync_bp skipping dirty buffer 0xc4491550
:619 Warning: vfsync_bp skipping dirty buffer 0xc4491550
:[...]
:
:Regards
:
: Matthias

Another possibility is that fsck is not able to repair the problem.
Maybe try completely wiping the filesystem by re-newfs'ing it?
If THAT doesn't work then we've ruled out the only currently known UFS
bug, and something in the package build you are running is triggering
a different bug.
I am going to go ahead and bring in the fix from FreeBSD related to
full UFS filesystems, but I don't think it fixes the issue you are
hitting.
Which leads us to a second option: get the absolute latest HEAD running
on that box and run a HAMMER filesystem on that partition instead of
UFS, with the proviso that you absolutely cannot fill it up or HAMMER
will blow up (that being the last bit of the filesystem that I am
working on this week). If there is any corruption occurring in the
driver/DMA path, HAMMER should be able to detect it. Also make sure
the latest versions of the hammer utility and newfs_hammer are installed
from /usr/src/sbin/{newfs_hammer,hammer}.
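A rough sketch of building and installing those two utilities from the
source tree (typical BSD make targets; the exact sequence is an assumption):

cd /usr/src/sbin/newfs_hammer && make && make install
cd /usr/src/sbin/hammer && make && make install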
Something like this:
newfs_hammer -L BUILD /dev/ad10s2
And in fstab:
/dev/ad10s2    /build    hammer     rw,nohistory   2   2
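With that entry in place the filesystem can also be mounted by hand; a
minimal sketch mirroring the fstab options above (treat the exact flags
as an assumption):

mount_hammer -o nohistory /dev/ad10s2 /build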
I'm presuming you wouldn't want it to retain historical information.
However, even if you mount with the nohistory option it is still a
good idea to reblock the filesystem every so often to clean up loose
ends. Something like this in a once-a-day cron job would do the
trick:
hammer -t 600 -c /var/db/reblock-build.cycle reblock-inodes /build 100
hammer -t 600 -c /var/db/reblock-build.cycle reblock-btree /build 100
hammer -t 600 -c /var/db/reblock-build.cycle reblock-data /build 95
In any case, I am not suggesting that you start using HAMMER for
production stuff yet, but it might give us another data point on whether
the corruption is due to a bug in UFS, a bug in the rest of the kernel,
or driver/DMA issues. And there's a risk you might hit a HAMMER bug too
and start tearing your hair out for real.
I will also note that this particular bug has been reported sporadically
on FreeBSD, DragonFly, NetBSD, and OpenBSD.
-Matt
Matthew Dillon
Actions #4

Updated by dillon over 16 years ago

: hammer -t 600 -c /var/db/reblock-build.cycle reblock-inodes /build 100
: hammer -t 600 -c /var/db/reblock-build.cycle reblock-btree /build 100
: hammer -t 600 -c /var/db/reblock-build.cycle reblock-data /build 95

Sorry, I messed up that example.  The cycle file stores the object id
the scan stopped at after reaching the timeout, so it can pick up from
that point the next time. Each type of reblocking scan should use a
different file, so:
hammer -t 600 -c /var/db/reblock-build1.cycle reblock-inodes /build 100
hammer -t 600 -c /var/db/reblock-build2.cycle reblock-btree /build 100
hammer -t 600 -c /var/db/reblock-build3.cycle reblock-data /build 95
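A sketch of wiring those into a once-a-day root crontab, as suggested
earlier; the start times are arbitrary, spaced ten minutes apart to match
the -t 600 limit on each pass:

0  3 * * * hammer -t 600 -c /var/db/reblock-build1.cycle reblock-inodes /build 100
10 3 * * * hammer -t 600 -c /var/db/reblock-build2.cycle reblock-btree /build 100
20 3 * * * hammer -t 600 -c /var/db/reblock-build3.cycle reblock-data /build 95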
-Matt
Matthew Dillon
Actions #5

Updated by matthias over 16 years ago

I'm out of the office for the next 5 days, so there is no chance to newfs
the filesystem before then. Within those 5 days most of the pkgsrc
packages should be ready, so I can safely erase the partition. With a
proper backup it doesn't matter whether I install HAMMER or UFS ... I'll
see when I'm back :)

Regards

Matthias
Actions #6

Updated by matthias over 16 years ago

Hi,

So, the box is back up. I experienced another crash while copying 42GB
from the UFS /build partition to another UFS partition. Now the /build
partition is running HAMMER (latest HEAD from today) and I'm doing a
number of buildworld/buildkernel cycles to test the stability of the fs.
I'll report if anything strange happens.

Regards

Matthias
Actions #7

Updated by matthias over 16 years ago

OK, the box crashed again, twice. For the first crash I have neither a
failure message nor a crash dump. The second crash is as follows:

Jun 4 15:47:26 pc12909 kernel: HAMMER Start Recovery
30000000006836c8 - 30000000006882a8 (19424 bytes of UNDO)(RW)
Jun 4 15:47:26 pc12909 kernel: HAMMER End Recovery
Jun 4 15:50:03 pc12909 kernel: Debugger("CRC FAILED: DATA") called.
Jun 4 16:02:12 pc12909 syslogd: kernel boot file is /kernel
Jun 4 16:02:12 pc12909 kernel: Debugger("CRC FAILED: B-TREE NODE")
called.
Jun 4 16:02:12 pc12909 kernel: panic: node 0xc3f8b698 0000000000000000
vs 8000000004c87400
Jun 4 16:02:12 pc12909 kernel: Jun 4 16:02:12 pc12909 kernel: mp_lock = 00000000; cpuid = 0
Jun 4 16:02:12 pc12909 kernel: boot() called on cpu#0
Jun 4 16:02:12 pc12909 kernel:
Jun 4 16:02:12 pc12909 kernel: syncing disks... 912 907 906 905 904 903
902 901 900 899 898 897 896 895 895 895 895 895 895 895 895 89
5 895 895 895 895 895 895 895 895 895 895 895
Jun 4 16:02:12 pc12909 kernel: doneJun 4 16:02:12 pc12909 kernel:
Uptime: 13m29s
Jun 4 16:02:12 pc12909 kernel:
Jun 4 16:02:12 pc12909 kernel: dumping to dev #ad/0x20051, blockno
4269104
Jun 4 16:02:12 pc12909 kernel: dump 2011

Seems it's somewhat related to HAMMER?!? The fs is not full, there is
plenty of space left ... just in case someone wants to know:

ad10s2 9.8G 2.1G 7.7G 22% /hammer

Regards

Matthias
Actions #8

Updated by dillon over 16 years ago

:OK, the box crashed again, two times. For the first crash I don't have
:a failure message neither a crash dump. The second crash is as follows:
:
:Jun 4 15:47:26 pc12909 kernel: HAMMER Start Recovery
:30000000006836c8 - 30000000006882a8 (19424 bytes of UNDO)(RW)
:Jun 4 15:47:26 pc12909 kernel: HAMMER End Recovery
:Jun 4 15:50:03 pc12909 kernel: Debugger("CRC FAILED: DATA") called.
:Jun 4 16:02:12 pc12909 syslogd: kernel boot file is /kernel
:Jun 4 16:02:12 pc12909 kernel: Debugger("CRC FAILED: B-TREE NODE")
:called.
:Jun 4 16:02:12 pc12909 kernel: panic: node 0xc3f8b698 0000000000000000
:vs 8000000004c87400
:Jun 4 16:02:12 pc12909 kernel: Jun 4 16:02:12 pc12909 kernel: mp_lock
:...
:
:Seems its somewhat related to HAMMER?!? The fs is not full, there is
:plenty of space left ... just in case someone wants to know:
:
:ad10s2 9.8G 2.1G 7.7G 22% /hammer
:
:Regards
:
: Matthias

Not related to HAMMER, other than that HAMMER is detecting that the
information on the disk has become corrupt.
I see three possibilities.  The most likely explanation is that
your system memory has a hardware glitch and is becoming corrupt.
A second possible explanation is that the disk driver's DMA is
corrupting the data when it writes it to disk.
The last possibility is software.  It could be that a bug in the
kernel is being exercised by the package build, but if you were only
running buildworld tests after creating that HAMMER filesystem
that isn't a likely scenario.
In all my testing of HAMMER so far I have never actually gotten a
real CRC mismatch. I've always had to go in and munge a few bytes
on the disk image to get it to fail.
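For reference, a hedged sketch of that sort of byte-munging, against a
throwaway image file only (the path, offset, and byte count are made up
for illustration):

dd if=/dev/random of=/tmp/scratch.img bs=1 count=8 seek=65536 conv=notrunc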
I'll bet you have a system memory issue.  Either something is
overheating or one of your RAM sticks is heading towards failure. There
are a few things you can try before you start ripping the machine apart:
  • Go into the BIOS setup and see if it has options to adjust the dynamic
    RAM timing, FSB (front side bus) frequency, and CPU frequency. If it
    does, slow them ALL down a little and see if the problem goes away.
  • Check the temperature of all major chips on the motherboard by touching
    the top of each chip. Also check the temperature of your RAM sticks.
  • Check that the hard drive is not overheating.
Actions #9

Updated by ftigeot over 16 years ago

FWIW, I have been running with a HAMMER filesystem on /home since May 25.

I have not had any problems with it since Matt fixed the owner-set-to-root
issue :-)

Actions #10

Updated by dillon over 16 years ago

:FWIW, I have been running with a HAMMER filesystem on /home since May 25.
:
:I have not had any problem with it since Matt fixed the owner set to root
:issue :-)
:
:--
:Francois Tigeot

Excellent.  I recently added filesystem-full detection.  Note however
that any filesystems formatted prior to the addition of that code will
report too many free blocks and the filesystem full detection will
not work properly. Also, newer versions of newfs_hammer pre-allocate
ALL the blockmap infrastructure.
Right now I am working on the read() and write() path.  I have removed
the double-copy from the read() path and I expect to be able to remove
the double-copy from the write() path as well as have the strategy
write code actually perform the IO directly, instead of queueing the
BIO to the flusher. That should double HAMMER's write() efficiency
and jump its write performance up to the platter speed, as well as
fix issues that have cropped up in 'blogbench' testing where pending
writes sometimes cause HAMMER's buffer efficiency to drop drastically.
I should be able to commit it in the next 2 days or so, I'm hoping.
It is also clear to me that I need to have a catastrophic recovery mode,
for situations where the filesystem becomes corrupt beyond the control
of the software (such as due to memory corruption or a failing disk or
something like that). That is, a way to extract good filesystem objects
from a mostly destroyed filesystem. HAMMER has a huge amount of
referential redundancy due to the typed blockmaps it uses. Even the
freemap has typed back-pointers into the blockmaps! It is literally
possible to simply scan the FREEMAP zone, locate all the big-blocks
related to B-Tree nodes, scan them linearly, and reconstruct the
entire filesystem from that information.
Those two things, plus further stabilization testing, are going to keep
me busy through to the release in mid-July.
There is currently one known bug, which I also intend to fix this month.
The bug is that if you have a really large file and you 'rm' it, HAMMER
can run out of UNDO space (and panic) because it tries to remove all the
records associated with the file in a single sync cycle. This
is easy to reproduce on small filesystems but can still happen on large
filesystems even if they reserve the maximum 512M of UNDO FIFO space,
if the file is really big (like hundreds of gigabytes big).
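A rough repro sketch of that case on a throwaway test mount (the path and
file size are illustrative, and per the description above this can panic
the box, so only try it on a disposable machine):

dd if=/dev/zero of=/test/bigfile bs=1m count=4096   # write a ~4GB file on a small test filesystem
rm /test/bigfile                                    # removing it in one go stresses the UNDO FIFO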
-Matt
Matthew Dillon
Actions #11

Updated by dillon almost 16 years ago

Should be 99% resolved, though there may still be issues with tiny,
non-recommended filesystem sizes.
