Bug #2350: vm panic when fs is full
Status: closed
Description
This panic happens with a very recent master when a HAMMER fs fills up:
panic: assertion "m->flags & PG_BUSY" failed in vm_page_protect at
/usr/src/sys/vm/vm_page.h:532
(kgdb) bt
#0 _get_mycpu () at ./machine/thread.h:69
#1 md_dumpsys (di=<optimized out>) at /usr/src/sys/platform/pc64/x86_64/dump_machdep.c:265
#2 0xffffffff802bcc02 in dumpsys () at /usr/src/sys/kern/kern_shutdown.c:937
#3 0xffffffff802bd266 in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:387
#4 0xffffffff802bd51d in panic (fmt=0xffffffff804aee30 "assertion \"%s\" failed in %s at %s:%u") at /usr/src/sys/kern/kern_shutdown.c:843
#5 0xffffffff8041ad33 in vm_page_protect (prot=<optimized out>, m=<optimized out>) at /usr/src/sys/vm/vm_page.h:532
#6 vm_object_page_collect_flush (object=<optimized out>, p=<optimized out>, pagerflags=<optimized out>) at /usr/src/sys/vm/vm_object.c:1348
#7 0xffffffff8041aefc in vm_object_page_clean_pass2 (p=0xffffffe001bf0f00, data=<optimized out>) at /usr/src/sys/vm/vm_object.c:1230
#8 0xffffffff8041eaf3 in vm_page_rb_tree_RB_SCAN (head=<optimized out>, scancmp=0xffffffff8041cbf4 <rb_vm_page_scancmp>, callback=<optimized out>,
data=0xffffffe1227afae0) at /usr/src/sys/vm/vm_page.c:127
#9 0xffffffff80419f0c in vm_object_page_clean (object=0xffffffe1268feb60, start=<optimized out>, end=<optimized out>, flags=<optimized out>)
at /usr/src/sys/vm/vm_object.c:1135
#10 0xffffffff803287ce in vfs_msync_scan2 (mp=<optimized out>, vp=<optimized out>, data=<optimized out>) at /usr/src/sys/kern/vfs_subr.c:2229
#11 0xffffffff8032e005 in vmntvnodescan (mp=0xffffffe0fd52d278, flags=<optimized out>, fastfunc=<optimized out>, slowfunc=<optimized out>, data=<optimized out>)
at /usr/src/sys/kern/vfs_mount.c:1109
#12 0xffffffff80328789 in vfs_msync (mp=<unavailable>, flags=<optimized out>) at /usr/src/sys/kern/vfs_subr.c:2182
#13 0xffffffff80331371 in sync_fsync (ap=<optimized out>) at /usr/src/sys/kern/vfs_sync.c:569
#14 0xffffffff8033afd8 in vop_fsync (ops=0xffffffff80743380, vp=<unavailable>, waitfor=<unavailable>, flags=<unavailable>) at /usr/src/sys/kern/vfs_vopops.c:572
#15 0xffffffff8033123e in syncer_thread (_ctx=<optimized out>) at /usr/src/sys/kern/vfs_sync.c:341
#16 0xffffffff80331302 in syncer_thread_start () at /usr/src/sys/kern/vfs_sync.c:424
#17 0xffffffff802acc20 in suspend_kproc (td=<unavailable>, timo=<unavailable>) at /usr/src/sys/kern/kern_kthread.c:195
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
DragonFly ylem.theshell.com 3.1-DEVELOPMENT DragonFly v3.1.0.575.g19af18-DEVELOPMENT #16: Sat Apr 21 14:03:23 PDT 2012 root@ylem.theshell.com:/usr/obj/usr/src/sys/YLEM64 x86_64
Core is on ylem (if you have access), and I'm uploading to
leaf:~pavalos/crash/*.1
--Peter