Panics at: assertion: ip->flush_group in hammer_wait_inode
I have two panics at:
hammer: debug: forcing async flush ip 00000002a32b8a6a
panic: assertion: ip->flush_group in hammer_wait_inode
Trace beginning at frame 0xcbd23be0
panic(ffffffff,c0817960,c064671b,cbd23c10,ca29b5a0) at panic+0x101
panic(c064671b,c0672a3d,c0630886,c03cccca,cbd23c60) at panic+0x101
hammer_wait_inode(ca29b5a0,1,1,0,0) at hammer_wait_inode+0x5b
hammer_vop_fsync(cbd23c60,c0790f68,c5802ea0,20002,20000) at hammer_vop_fsync+0x1c4
vop_fsync(c5802ea0,ca22c9b8,1,1,0) at vop_fsync+0x63
sys_fsync(cbd23cf0,cbd23d00,4,cbc38a38,cbd23cf0) at sys_fsync+0xab
syscall2(cbd23d40) at syscall2+0x235
Xint0x80_syscall() at Xint0x80_syscall+0x36
Physical memory: 245 MB
Dumping 119 MB: 104 88 72 56 40 24 8
Kernels are on leaf at /home/vsrinivas/cores/kern.1 and kern.2; cores are
at vmcore.1 and vmcore.2 there too, as are core.txt.1 and core.txt.2.
#3 Updated by dillon almost 5 years ago
:Venkatesh Srinivas <firstname.lastname@example.org> added the comment:
:Something I forgot to mention, I'm running with fsync_mode = 0.
:Also, it is panicking at least once a day, often more.
I moved the cores to your ~/crash directory so they don't get backed up.
It looks to me like the assertion is wrong and that the code needs to
recheck the state of the inode after the setup. I'm verifying the state
with the core dump now.
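
In other words, hammer_wait_inode() should not simply assert that a flush
group is assigned once it has forced the async flush; the flusher can
complete (or never pick up) the inode in between, so the wait has to
re-examine the inode state each time around. A rough sketch of that shape,
using the names from the trace above; the flush_state/HAMMER_INODE_FLUSHW
handling and the tsleep() wakeup channel are assumptions here, not the
actual HAMMER code:

    /*
     * Sketch only: wait for any pending flush of ip to finish.  Instead
     * of KKASSERT(ip->flush_group) after requesting the flush, recheck
     * the inode each iteration; if no flush group was assigned, re-signal
     * the flusher and go back to sleep rather than asserting.
     */
    void
    hammer_wait_inode(hammer_inode_t ip)
    {
            while (ip->flush_state != HAMMER_FST_IDLE) {
                    if (!ip->flush_group) {
                            /* lost the race: request the flush again */
                            hammer_flush_inode(ip, HAMMER_FLUSH_SIGNAL);
                    }
                    /* sleep until the flusher wakes waiters on this inode */
                    ip->flags |= HAMMER_INODE_FLUSHW;
                    tsleep(&ip->flags, 0, "hmrwin", 0);
            }
    }

(The sketch leaves out whatever locking or critical sections the real code
needs around the state checks.)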
#4 Updated by dillon almost 5 years ago
See if that fixes it. If it doesn't, the fsyncs will probably wind
up blocking forever as the failure case instead of panicking. Hopefully
I've covered the cases, though.
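
The failure-mode change described here comes down to asserting versus
waiting: with the old KKASSERT, a case the setup logic missed panics the
machine, while with a recheck-and-sleep loop the same missed case just
leaves the fsync() caller asleep. An illustrative contrast only, not the
actual diff:

    /* Before: a case the setup logic missed is fatal. */
    KKASSERT(ip->flush_group);

    /* After (sketch): the same missed case never satisfies the wait,
     * so fsync() blocks forever instead of panicking. */
    while (ip->flush_state != HAMMER_FST_IDLE) {
            ip->flags |= HAMMER_INODE_FLUSHW;
            tsleep(&ip->flags, 0, "hmrwin", 0);
    }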