Bug #1900
hammer crash when running fsstress on an -o nohistory'd file system (closed)
Added by swildner about 14 years ago. Updated about 14 years ago.
Description
Hi,
HAMMER crashes reliably when running fsstress on a file system that is
mounted -o nohistory.
Crash dump is in ~swildner/crash/crash27.tbz on leaf.
Regards,
Sascha
Updated by dillon about 14 years ago
:Hi,
:
:HAMMER crashes reliably when running fsstress on a file system that is
:mounted -o nohistory.
:
:Crash dump is in ~swildner/crash/crash27.tbz on leaf.
:
:Regards,
:Sascha
Ok, I'll try to reproduce it. If you could generate another core,
that would help; the one you got unfortunately doesn't seem to have
the backtrace of the actual panic in it. I'm not sure why.
It is possible that the assertion itself is incorrect, but I need
to track down the actual record state to figure out if that is so.
Under fsstress conditions HAMMER is sometimes unable to fully flush
a file in a single flush cycle, which could potentially be the cause
of the mismatched flush groups between inode and record.
-Matt
Matthew Dillon
<dillon@backplane.com>
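For illustration only, here is a minimal C sketch of the kind of consistency
check being discussed: while setting up a flush, a per-record callback asserts
that a record already in the flush state belongs to the same flush group as
its inode. This is not the actual HAMMER source; the type and field names
(record_stub, flush_state, flush_group) are simplified assumptions, and the
partial-flush scenario described above is exactly the case where such a check
could legitimately trip.

/*
 * Hypothetical sketch, not DragonFly code: a per-record callback that
 * checks flush-group consistency between a record and its owning inode.
 */
#include <assert.h>
#include <stddef.h>

struct flush_group { int seq; };            /* stand-in for a flush group */

struct inode_stub {
    struct flush_group *flush_group;        /* group this inode flushes in */
};

struct record_stub {
    int                 flush_state;        /* 0 = idle, 2 = queued to flush */
    struct flush_group *flush_group;        /* group the record was queued in */
    struct inode_stub  *ip;                 /* owning inode */
};

#define REC_FST_FLUSH 2

/* Analogous in spirit to hammer_setup_child_callback(). */
static int
setup_child_callback(struct record_stub *rec, void *data)
{
    (void)data;
    if (rec->flush_state == REC_FST_FLUSH) {
        /*
         * The suspect check: if a file could not be fully flushed in
         * one cycle, a record might sit in an older flush group than
         * its inode, and this assertion would fire.
         */
        assert(rec->flush_group == rec->ip->flush_group);
    }
    return (0);
}

int
main(void)
{
    struct flush_group grp = { .seq = 42 };
    struct inode_stub  ip  = { .flush_group = &grp };
    struct record_stub rec = {
        .flush_state = REC_FST_FLUSH,
        .flush_group = &grp,
        .ip          = &ip,
    };
    return (setup_child_callback(&rec, NULL));
}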
Updated by swildner about 14 years ago
On 11/7/2010 23:33, Matthew Dillon wrote:
:Hi,
:
:HAMMER crashes reliably when running fsstress on a file system that is
:mounted -o nohistory.
:
:Crash dump is in ~swildner/crash/crash27.tbz on leaf.
:
:Regards,
:Sascha

Ok, I'll try to reproduce it. If you could generate another core,
that would help; the one you got unfortunately doesn't seem to have
the backtrace of the actual panic in it. I'm not sure why.
OK, it's ~swildner/crash/crash28.tbz on leaf.
Here's the backtrace, too (it seems to fail in hammer_setup_child_callback):
(kgdb) bt
#0 _get_mycpu (di=0xc0762b20) at ./machine/thread.h:83
#1 md_dumpsys (di=0xc0762b20) at
/home/s/projects/dragonfly/src/sys/platform/pc32/i386/dump_machdep.c:263
#2 0xc030dfe9 in dumpsys () at
/home/s/projects/dragonfly/src/sys/kern/kern_shutdown.c:881
#3 0xc017d441 in db_fncall (dummy1=2, dummy2=0, dummy3=-1068125644,
dummy4=0xd98e59f0 "") at
/home/s/projects/dragonfly/src/sys/ddb/db_command.c:542
#4 0xc017d932 in db_command () at
/home/s/projects/dragonfly/src/sys/ddb/db_command.c:344
#5 db_command_loop () at
/home/s/projects/dragonfly/src/sys/ddb/db_command.c:470
#6 0xc017ffc4 in db_trap (type=3, code=0) at
/home/s/projects/dragonfly/src/sys/ddb/db_trap.c:71
#7 0xc055b438 in kdb_trap (type=3, code=0, regs=0xd98e5af4) at
/home/s/projects/dragonfly/src/sys/platform/pc32/i386/db_interface.c:152
#8 0xc0574947 in trap (frame=0xd98e5af4) at
/home/s/projects/dragonfly/src/sys/platform/pc32/i386/trap.c:831
#9 0xc055c7b7 in calltrap () at
/home/s/projects/dragonfly/src/sys/platform/pc32/i386/exception.s:785
#10 0xc055b234 in breakpoint (msg=0xc05fd481 "panic") at ./cpu/cpufunc.h:73
#11 Debugger (msg=0xc05fd481 "panic") at
/home/s/projects/dragonfly/src/sys/platform/pc32/i386/db_interface.c:334
#12 0xc030e878 in panic (fmt=0xc05e1d75 "assertion: %s in %s") at
/home/s/projects/dragonfly/src/sys/kern/kern_shutdown.c:785
#13 0xc04aff8d in hammer_setup_child_callback (rec=0x1, data=0x0) at
/home/s/projects/dragonfly/src/sys/vfs/hammer/hammer_inode.c:2306
#14 0xc04b7250 in hammer_rec_rb_tree_RB_SCAN (head=0xe2e61a80,
scancmp=0xc04b715e <hammer_rec_rb_tree_SCANCMP_ALL>,
callback=0xc04afe0a <hammer_setup_child_callback>, data=0x0) at
/home/s/projects/dragonfly/src/sys/vfs/hammer/hammer_object.c:257
#15 0xc04afc4c in hammer_flush_inode_core (ip=0xe2e61920,
flg=0xd648b978, flags=<value optimized out>)
at /home/s/projects/dragonfly/src/sys/vfs/hammer/hammer_inode.c:2049
#16 0xc04b0a24 in hammer_flush_inode (ip=0xe2e61920, flags=1) at
/home/s/projects/dragonfly/src/sys/vfs/hammer/hammer_inode.c:1718
#17 0xc04b0abe in hammer_test_inode (ip=0xe2e61920) at
/home/s/projects/dragonfly/src/sys/vfs/hammer/hammer_inode.c:3113
#18 0xc04b87d3 in hammer_flush_record_done (record=0xe0ea4c50, error=0)
at /home/s/projects/dragonfly/src/sys/vfs/hammer/hammer_object.c:334
#19 0xc04afa90 in hammer_sync_record_callback (record=0xe0ea4c50,
data=0xd98e5c98) at
/home/s/projects/dragonfly/src/sys/vfs/hammer/hammer_inode.c:2694
#20 0xc04b7250 in hammer_rec_rb_tree_RB_SCAN (head=0xdfee85c0,
scancmp=0xc04b715e <hammer_rec_rb_tree_SCANCMP_ALL>,
callback=0xc04af87e <hammer_sync_record_callback>, data=0xd98e5c98)
at /home/s/projects/dragonfly/src/sys/vfs/hammer/hammer_object.c:257
#21 0xc04b2374 in hammer_sync_inode (trans=0xd9824124, ip=0xdfee8460) at
/home/s/projects/dragonfly/src/sys/vfs/hammer/hammer_inode.c:2882
#22 0xc04ad99f in hammer_flusher_flush_inode (arg=0xd586dce8) at
/home/s/projects/dragonfly/src/sys/vfs/hammer/hammer_flusher.c:512
#23 hammer_flusher_slave_thread (arg=0xd586dce8) at
/home/s/projects/dragonfly/src/sys/vfs/hammer/hammer_flusher.c:455
#24 0xc0318342 in lwkt_deschedule_self (td=Cannot access memory at
address 0x8
) at /home/s/projects/dragonfly/src/sys/kern/lwkt_thread.c:258
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
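Frames #13 through #20 show the flusher walking the inode's in-memory record
tree (hammer_rec_rb_tree_RB_SCAN with SCANCMP_ALL) and handing each record to
hammer_setup_child_callback, where the assertion fires. The following is a
simplified, self-contained sketch of that scan-with-callback pattern; a plain
linked list stands in for DragonFly's red-black-tree macros, and all names are
illustrative assumptions rather than the real API.

/*
 * Simplified sketch of a scan-all-records-with-callback pattern.
 * A linked list replaces the red-black tree for brevity.
 */
#include <stdio.h>
#include <stddef.h>

struct mem_record {
    int                id;
    struct mem_record *next;
};

typedef int (*rec_callback_t)(struct mem_record *rec, void *data);

/* Visit every record and invoke the callback; nonzero aborts the scan. */
static int
record_scan_all(struct mem_record *head, rec_callback_t cb, void *data)
{
    int error = 0;

    for (struct mem_record *rec = head; rec != NULL; rec = rec->next) {
        error = cb(rec, data);
        if (error)
            break;              /* callback error terminates the scan */
    }
    return (error);
}

static int
print_record(struct mem_record *rec, void *data)
{
    (void)data;
    printf("visiting record %d\n", rec->id);
    return (0);
}

int
main(void)
{
    struct mem_record r2 = { .id = 2, .next = NULL };
    struct mem_record r1 = { .id = 1, .next = &r2 };

    return (record_scan_all(&r1, print_record, NULL));
}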
Updated by swildner about 14 years ago
Fixed in e3c8589cf87683a44e2377b25d0ebed99124e275