Bug #1997

Panics at: assertion: ip->flush_group in hammer_wait_inode

Added by vsrinivas about 13 years ago. Updated about 13 years ago.

Status: Closed
Priority: High
Assignee:
Category: -
Target version: -
Start date:
Due date:
% Done: 0%
Estimated time:

Description

Hi,

I have two panics at:

hammer: debug: forcing async flush ip 00000002a32b8a6a
panic: assertion: ip->flush_group in hammer_wait_inode
Trace beginning at frame 0xcbd23be0
panic(ffffffff,c0817960,c064671b,cbd23c10,ca29b5a0) at panic+0x101
panic(c064671b,c0672a3d,c0630886,c03cccca,cbd23c60) at panic+0x101
hammer_wait_inode(ca29b5a0,1,1,0,0) at hammer_wait_inode+0x5b
hammer_vop_fsync(cbd23c60,c0790f68,c5802ea0,20002,20000) at hammer_vop_fsync+0x1c4
vop_fsync(c5802ea0,ca22c9b8,1,1,0) at vop_fsync+0x63
sys_fsync(cbd23cf0,cbd23d00,4,cbc38a38,cbd23cf0) at sys_fsync+0xab
syscall2(cbd23d40) at syscall2+0x235
Xint0x80_syscall() at Xint0x80_syscall+0x36
Uptime: 1d0h2m44s
Physical memory: 245 MB
Dumping 119 MB: 104 88 72 56 40 24 8
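
For context, the backtrace shows fsync(2) entering hammer_vop_fsync() and
blocking in hammer_wait_inode(), where the assertion fires. A minimal sketch
of the pattern involved (simplified and partly hypothetical; the state and
field names follow the HAMMER sources, the sleep details are illustrative):

/*
 * Simplified sketch -- not the verbatim HAMMER code.
 * fsync() waits here until the inode's flush completes.
 */
void
hammer_wait_inode(hammer_inode_t ip)
{
	/*
	 * An inode still in SETUP is only queued; force an async
	 * flush.  This is the "forcing async flush ip" debug
	 * message printed just before the panic above.
	 */
	if (ip->flush_state == HAMMER_FST_SETUP)
		hammer_flush_inode(ip, HAMMER_FLUSH_SIGNAL);

	while (ip->flush_state != HAMMER_FST_IDLE) {
		/*
		 * The failing assertion: it assumes the inode is
		 * now attached to a flush group, but the flusher
		 * can change the inode's state concurrently.
		 */
		KKASSERT(ip->flush_group);
		tsleep(&ip->flush_group, 0, "hmrwin", 0); /* wmesg illustrative */
	}
}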

Kernels are on leaf at /home/vsrinivas/cores/kern.1 and kern.2; the cores
are at vmcore.1 and vmcore.2 there too, as are core.txt.1 and core.txt.2.

Thanks,
-- vs

#1

Updated by vsrinivas about 13 years ago

Actually, a change to the URLs for the kernels/cores (I can't successfully
copy them to leaf):

http://acm.jhu.edu/~me/kern.1.gz
http://acm.jhu.edu/~me/kern.2.gz
http://acm.jhu.edu/~me/vmcore.1.gz
http://acm.jhu.edu/~me/vmcore.2.gz

#2

Updated by vsrinivas about 13 years ago

Something I forgot to mention: I'm running with fsync_mode = 0.

Also, it is panicking at least once a day, often more.
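
For reference, fsync_mode here is the vfs.hammer.fsync_mode sysctl (spelled
out in full in a later comment). A minimal userland check of the tunable, as
a hedged sketch assuming a DragonFly system:

/* check_fsync_mode.c -- hypothetical helper, not part of the report */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	int mode;
	size_t len = sizeof(mode);

	if (sysctlbyname("vfs.hammer.fsync_mode", &mode, &len, NULL, 0) != 0) {
		perror("sysctlbyname");
		return 1;
	}
	printf("vfs.hammer.fsync_mode = %d\n", mode);
	return 0;
}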

#3

Updated by dillon about 13 years ago

:Venkatesh Srinivas <> added the comment:
:
:Something I forgot to mention, I'm running with fsync_mode = 0.
:
:Also, it is panicking at least once a day, often more.

I moved the cores to your ~/crash directory so they don't get backed
up.
It looks to me like the assertion is wrong; it needs to recheck the
state of the inode after the setup. I'm verifying the state with
the core dump now.
-Matt
Matthew Dillon
<>
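
A rough sketch of the recheck-after-setup idea described above (a
hypothetical simplification of the approach, not the actual patch): instead
of asserting ip->flush_group, the wait loop re-tests the inode's state on
each iteration.

void
hammer_wait_inode(hammer_inode_t ip)
{
	while (ip->flush_state != HAMMER_FST_IDLE) {
		if (ip->flush_state == HAMMER_FST_SETUP)
			hammer_flush_inode(ip, HAMMER_FLUSH_SIGNAL);
		/*
		 * Recheck instead of KKASSERT(ip->flush_group):
		 * the flusher may retire the flush group or move
		 * the inode between states concurrently.  Sleep
		 * with a timeout so a transient state just
		 * re-polls instead of panicking.
		 */
		if (ip->flush_state != HAMMER_FST_IDLE)
			tsleep(&ip->flush_group, 0, "hmrwin", hz);
	}
}

With this shape, the failure mode for an uncovered case is an fsync that
blocks (or polls) rather than a panic, which is exactly the trade-off the
next comment warns about.
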
#4

Updated by dillon about 13 years ago

Try:

fetch http://apollo.backplane.com/DFlyMisc/hammer27.patch
See if that fixes it. If it doesn't, the fsyncs will probably wind
up blocking forever as the failure case instead of panicking. Hopefully
I've covered the cases, though.
-Matt
Matthew Dillon
<>
#5

Updated by pavalos about 13 years ago

I got this panic today on ylem with vfs.hammer.fsync_mode=3.

#6

Updated by dillon about 13 years ago

Ok, hopefully the ip->flush_group panics are fixed now with
88d0131e03be720651defb821209e926d1bd1062.

-Matt
Matthew Dillon
<>
#7

Updated by vsrinivas about 13 years ago

Looks like it. Survived a weekend of kernel and world building (wouldn't before).
