DragonFlyBSD bugtracker: Issues (https://bugs.dragonflybsd.org/) 2021-03-14T16:43:03Z
DragonFlyBSD - Bug #3266 (New): Filesystems broken due to "KKASSERT(count & TOK_COUNTMASK);" (https://bugs.dragonflybsd.org/issues/3266, 2021-03-14T16:43:03Z, tkusumi <kusumi.tomohiro@gmail.com>)
<p>Many filesystems, including HAMMER2, are broken due to this assert failure.<br />Confirmed the panic with HAMMER2 and ext2.<br />It didn't happen a few months ago.</p>
<pre><code>433 static __inline
434 void
435 _lwkt_reltokref(lwkt_tokref_t ref, thread_t td)
436 {
...
454         /*
455          * We are a shared holder
456          */
457         count = atomic_fetchadd_long(&tok->t_count, -TOK_INCR);
458         KKASSERT(count & TOK_COUNTMASK);    /* count prior */ <----------
459     }
460 }
</code></pre> DragonFlyBSD - Bug #3194 (New): Hammer kernel crash on mirror-stream of PFS after upgrade (assert... (https://bugs.dragonflybsd.org/issues/3194, 2019-06-29T19:53:41Z, Anonymous)
<p>I operate two HAMMER arrays, each with PFSs that are mirrored via <code>hammer mirror-stream</code> from an array master to an array slave.</p>
<p>I recently upgraded to DragonFly 5.6.</p>
<p>After 18 hours of activity, I encountered a kernel fault that can reliably be provoked by running <code>hammer mirror-stream</code> on one PFS.</p>
<p>I've disabled PFS mirroring until this can be corrected.</p>
<p>Here is the transcribed error from the kernel fault:</p>
<pre><code>panic: assertion "cursor->flags & HAMMER_CURSOR_ITERATE_CHECK" failed in hammer_btree_iterate at /usr/src/sys/vfs/hammer/hammer_btree.c:263
cpuid = 2
Trace beginning at frame 0xfffff801e6875158
hammer_btree_iterate() at hammer_btree_iterate+0x839 0xffffffff80900db9
hammer_btree_iterate() at hammer_btree_iterate+0x839 0xffffffff80900db9
hammer_mirror_delete_to.isra.2() at hammer_mirror_delete_to.isra.2+0x12 0xffffffff80914d92
hammer_ioc_mirror_write() at hammer_ioc_mirror_write+0x3b7 0xffffffff809159a7
hammer_ioctl() at hammer_ioctl+0xebe 0xffffffff809143de
hammer_vop_ioctl() at hammer_vop_ioctl+0x48 0xffffffff8092d458
Debugger("panic")
CPU2 stopping CPUs: 0x0000000b
 stopped
Stopped at Debugger+0x7c: movb $0,0xfc9be9(%rip)
db>
</code></pre>
<p>Unfortunately, nothing was saved to /var/crash for this fault except an empty <code>kern.0</code> file.</p> DragonFlyBSD - Bug #3129 (New): Kernel panic with 5.2.0 on A2SDi-4C-HLN4F (https://bugs.dragonflybsd.org/issues/3129, 2018-04-11T17:20:15Z, stateless)
<p>I tried to boot 5.2.0 on A2SDi-4C-HLN4F from Supermicro<br />and I got the following kernel panic:</p>
<p><a class="external" href="https://u.2f30.org/sin/2018-04-11-175500_1366x768_scrot.png">https://u.2f30.org/sin/2018-04-11-175500_1366x768_scrot.png</a></p>
<p>See: <a class="external" href="http://bxr.su/DragonFly/sys/kern/subr_cpu_topology.c#125">http://bxr.su/DragonFly/sys/kern/subr_cpu_topology.c#125</a></p> DragonFlyBSD - Bug #3113 (In Progress): Booting vKernel fails due to being out of swap space (https://bugs.dragonflybsd.org/issues/3113, 2017-12-17T19:17:50Z, tcullen)
<p>The step-by-step directions at <a class="external" href="https://www.dragonflybsd.org/docs/handbook/vkernel/">https://www.dragonflybsd.org/docs/handbook/vkernel/</a> fail: when you attempt to boot the vkernel, you never get a shell prompt, because the shell is instantly and repeatedly killed amid a never-ending stream of "out of swap space" error messages.</p> DragonFlyBSD - Bug #3111 (In Progress): Mouse lags every second heavily under X11 (https://bugs.dragonflybsd.org/issues/3111, 2017-12-10T17:30:40Z, mneumann)
<p>The mouse moves normally for one second, then completely stops for roughly one second, then moves again for a second.<br />This cycle repeats indefinitely.</p>
<p>This commit has <strong>no</strong> mouse lag: e0a1e7abb95f53e4b0c633f57fbd3ba163a98e73<br />This commit has mouse lag: d6c92fb146a95bd38feaa94c8d2bafda63600e8e</p>
<p>I first assumed it was one of the commits related to evdev support, so I reverted them:</p>
<pre><code>d3d1dd3e4513b2ab753f8ba52f144dc916420ba6<br /> 3ea800bb832ad69c10f85ce9bce98efd8e892285<br /> eaf0d054af4aa304548c1efc497aad966b86a590</code></pre>
<p>But the mouse lag still happens. Next I reverted the recent drm/radeon commit:</p>
<pre><code>d235ee5f3490f63ba738915974154a0e2e49378d</code></pre>
<p>But the mouse lag is still there. So I assume it must be one of dillon's commits from December 5th or 6th.</p>
<p>Anyone else seeing the problem? I can easily bisect it, as it's only 6 commits. But maybe dillon can spot the problem faster.</p> DragonFlyBSD - Bug #2735 (New): iwn panics SYSASSERT (https://bugs.dragonflybsd.org/issues/2735, 2014-11-14T22:04:08Z, cnb <cneirabustos@gmail.com>)
<p>The iwn driver panics with the SYSASSERT error that is described at<br /><a class="external" href="http://lists.freebsd.org/pipermail/freebsd-wireless/2013-July/003653.html">http://lists.freebsd.org/pipermail/freebsd-wireless/2013-July/003653.html</a></p> DragonFlyBSD - Bug #2638 (Feedback): Fix machdep.pmap_mmu_optimize (https://bugs.dragonflybsd.org/issues/2638, 2014-02-13T21:51:39Z, tuxillo)
<p>Fix machdep.pmap_mmu_optimize (currently off by default in commit 1ac5304a10366be7ed3129ceee7ca94beb0f3183 ). Affects apache and rtorrent for sure.</p>
<p>"might be fixed here: a44410dd8663abb121417692995d3b365f32fd6e<br />update: it's not fixed"</p> DragonFlyBSD - Bug #2499 (In Progress): DRAGONFLY_3_2 lockd not responding correctly (https://bugs.dragonflybsd.org/issues/2499, 2013-01-22T08:41:27Z, Nerzhul)
<p>Hello,<br />I must use lockd for concurrent access on a webserver with NFS extended storage. There is some concurrent access, and lockd isn't responding correctly.</p>
<p>On the NFSv3 client, timeouts appear and the console logs:</p>
<pre><code>nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again
</code></pre>
<p>After running "netstat -an -f inet" I see there is a queue on the rpc socket:</p>
<p>netstat -an -f inet</p>
<pre><code>Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp4       0      0  A.B.C.65.nfsd          WebCluster1.977        ESTABLISHED
tcp4       0      0  A.B.C.65.nfsd          WebCluster1.611        ESTABLISHED
tcp4       0      0  localhost.smtp         *.*                    LISTEN
tcp4       0      0  *.ssh                  *.*                    LISTEN
tcp4       0      0  *.1017                 *.*                    CLOSED
tcp4       0      0  *.1020                 *.*                    LISTEN
tcp4       0      0  *.nfsd                 *.*                    LISTEN
tcp4       0      0  *.1023                 *.*                    LISTEN
tcp4       0      0  *.1022                 *.*                    LISTEN
tcp4       0      0  *.sunrpc               *.*                    LISTEN
tcp4       0      0  A.B.C.65.nfsd          A.B.C.96.811           ESTABLISHED
tcp4       0      0  A.B.C.65.nfsd          WebCluster1.972        ESTABLISHED
tcp4       0     48  A.B.C.65.ssh           129.175.196.190.60067  ESTABLISHED
udp4       0      0  *.918                  *.*
udp4       0      0  A.B.C.65.1028          ntp.u-psud.fr.ntp
udp4     456      0  *.1017                 *.*
udp4   18656      0  *.1018                 *.*
udp4       0      0  *.nfsd                 *.*
udp4       0      0  *.1021                 *.*
udp4       0      0  *.1020                 *.*
udp4       0      0  *.1022                 *.*
udp4       0      0  *.sunrpc               *.*
</code></pre>
<p>When I see that, I run "tcpdump -nni em0" to see what's happening:</p>
<pre><code>22:12:42.781597 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212
22:12:48.801935 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212
22:12:54.669917 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212
22:13:00.148965 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212
</code></pre>
<p>After a little while, lockd responds to all requests, but many have already failed because of the timeout.</p>
<p>On the DragonFlyBSD server I can see this in /var/log/messages:</p>
<pre><code>Jan 21 22:14:19 webfiler1 rpc.lockd: duplicate lock from WebCluster1.srv.
Jan 21 22:14:19 webfiler1 last message repeated 3 times
Jan 21 22:14:19 webfiler1 rpc.lockd: no matching entry for WebCluster1.srv.
Jan 21 22:14:29 webfiler1 dntpd[571]: issuing offset adjustment: 0.026637s
Jan 21 22:14:44 webfiler1 rpc.lockd: rpc to statd failed: RPC: Timed out
Jan 21 22:14:44 webfiler1 rpc.lockd: duplicate lock from WebCluster1.srv.
Jan 21 22:14:44 webfiler1 last message repeated 3 times
Jan 21 22:14:44 webfiler1 rpc.lockd: no matching entry for WebCluster1.srv.
</code></pre>
<p>I think there is a problem in DragonFlyBSD which queues many lockd requests.</p> DragonFlyBSD - Bug #2423 (New): After multiple panics/locks, hitting KKASSERT in hammer_init_cursor (https://bugs.dragonflybsd.org/issues/2423, 2012-09-17T17:12:40Z, rumcic)
<p>After quite a few lockups (the machine stopped responding, nothing on the serial console, unable to panic the machine), the HAMMER FS (v6) ended up with a corrupted(?) UNDO.</p>
<p>Booting the latest snapshot and mounting with ro,noatime, it looked as if it was able to rerun the UNDO/FIFO. But trying to mount the "clean" FS results in tripping over a KKASSERT (error == 0 at hammer_cursor.c:202).</p> DragonFlyBSD - Bug #2396 (Feedback): Latest 3.1 development version core dumps while destroying m... (https://bugs.dragonflybsd.org/issues/2396, 2012-07-18T10:50:26Z, sgeorge <sgeorge.ml2@gmail.com>)
<p>Hi,</p>
<p>I was destroying a master PFS on the ROOT volume and the system (v3.1.0.827.gf6167a5-DEVELOPMENT) core dumped.<br />I tried today's latest snapshot and got the same result.<br />The core dump is uploaded to sgeorge@leaf:~/crash/Coredump20120718.tbz</p>
<pre><code>panic: assertion "layer2->zone == zone" failed in hammer_blockmap_free at /usr/src/sys/vfs/hammer/hammer_blockmap.c:1020
cpuid = 0
Trace beginning at frame 0xffffffe09e20f178
panic() at panic+0x1fb 0xffffffff804bef68
panic() at panic+0x1fb 0xffffffff804bef68
hammer_blockmap_free() at hammer_blockmap_free+0x2e5 0xffffffff80691a0c
hammer_delete_at_cursor() at hammer_delete_at_cursor+0x4e2 0xffffffff806aac62
hammer_pfs_rollback() at hammer_pfs_rollback+0x26c 0xffffffff806b0b20
hammer_ioc_destroy_pseudofs() at hammer_ioc_destroy_pseudofs+0x77 0xffffffff806b0c6c
hammer_ioctl() at hammer_ioctl+0x80e 0xffffffff806a5b1e
hammer_vop_ioctl() at hammer_vop_ioctl+0x58 0xffffffff806be8d3
vop_ioctl() at vop_ioctl+0x98 0xffffffff8053d244
vn_ioctl() at vn_ioctl+0xfd 0xffffffff8053a4d9
fo_ioctl() at fo_ioctl+0x46 0xffffffff804f026e
mapped_ioctl() at mapped_ioctl+0x493 0xffffffff804f0725
sys_ioctl() at sys_ioctl+0x1c 0xffffffff804f07be
syscall2() at syscall2+0x370 0xffffffff807814c1
Xfast_syscall() at Xfast_syscall+0xcb 0xffffffff8076ae2b
(null)() at 0 0
(null)() at 0x723d524553550061 0x723d524553550061
</code></pre>
<pre><code>Fatal trap 9: general protection fault while in kernel mode
cpuid = 0; lapic->id = 00000000
instruction pointer  = 0x8:0xffffffff8077acf9
stack pointer        = 0x10:0xffffffe09e20f010
frame pointer        = 0x10:0xffffffe09e20f028
code segment         = base 0x0, limit 0xfffff, type 0x1b
                     = DPL 0, pres 1, long 0, def32 0, gran 1
processor eflags     = interrupt enabled, resume, IOPL = 0
current process      = 957
current thread       = pri 10
kernel: type 9 trap, code=0
</code></pre>
<pre><code>CPU0 stopping CPUs: 0x00000002
 stopped
Physical memory: 3787 MB
Dumping 1055 MB: 1040 1024 1008 992 976 960 944 928 912 896 880 864 848 832 816 800 784 768 752 736 720 704 688 672 656 640 624 608 592 576 560 544 528 512 496 480 464 448 432 416 400 384 368 352 336 320 304 288 272 256 240 224 208 192 176 160 144 128 112 96 80 64 48 32 16
</code></pre> DragonFlyBSD - Bug #2347 (Feedback): Hammer PFSes destroy does not give back full space allocated... (https://bugs.dragonflybsd.org/issues/2347, 2012-04-11T07:17:48Z, sgeorge <sgeorge.ml2@gmail.com>)
<p>I was mirroring PFSes from 3.1 dev to slaves on 3.02, and I found that the PFSes took more space on the 3.02 slave.<br />Investigating, I found this strange thing:</p>
<p>94 GB is allocated for this slave PFS, but when it is removed only 52 GB is freed :-(</p>
<pre><code>dfly-bkpsrv2# hammer dedup /pfs/software
Dedup running
Dedup /pfs/software succeeded
Dedup ratio = 1.06
    100 GB referenced
     94 GB allocated
   4339 KB skipped
    429 CRC collisions
      0 SHA collisions
      1 bigblock underflows
      0 new dedup records
      0 new dedup bytes
</code></pre>
<pre><code>dfly-bkpsrv2# df -h
Filesystem              Size   Used  Avail Capacity  Mounted on
ROOT                    459G   354G   106G    77%    /
devfs                   1.0K   1.0K     0B   100%    /dev
/dev/serno/QM00001.s1a  756M   168M   527M    24%    /boot
/pfs/@@-1:00001         459G   354G   106G    77%    /var
/pfs/@@-1:00002         459G   354G   106G    77%    /tmp
/pfs/@@-1:00003         459G   354G   106G    77%    /usr
/pfs/@@-1:00004         459G   354G   106G    77%    /home
/pfs/@@-1:00005         459G   354G   106G    77%    /usr/obj
/pfs/@@-1:00006         459G   354G   106G    77%    /var/crash
/pfs/@@-1:00007         459G   354G   106G    77%    /var/tmp
procfs                  4.0K   4.0K     0B   100%    /proc
dfly-bkpsrv2# ls
home       software   usr        var        var.tmp    vms2-lxc
mysql-baks tmp        usr.obj    var.crash  vms1-lxc
dfly-bkpsrv2# hammer pfs-destroy /pfs/software
You have requested that PFS#11 () be destroyed
This will irrevocably destroy all data on this PFS!!!!!
Do you really want to do this? y
Destroying PFS #11 () in 5 4 3 2 1.. starting destruction pass
pfs-destroy of PFS#11 succeeded!
dfly-bkpsrv2# df -h
Filesystem              Size   Used  Avail Capacity  Mounted on
ROOT                    459G   302G   158G    66%    /
devfs                   1.0K   1.0K     0B   100%    /dev
/dev/serno/QM00001.s1a  756M   168M   527M    24%    /boot
/pfs/@@-1:00001         459G   302G   158G    66%    /var
/pfs/@@-1:00002         459G   302G   158G    66%    /tmp
/pfs/@@-1:00003         459G   302G   158G    66%    /usr
/pfs/@@-1:00004         459G   302G   158G    66%    /home
/pfs/@@-1:00005         459G   302G   158G    66%    /usr/obj
/pfs/@@-1:00006         459G   302G   158G    66%    /var/crash
/pfs/@@-1:00007         459G   302G   158G    66%    /var/tmp
procfs                  4.0K   4.0K     0B   100%    /proc
</code></pre> DragonFlyBSD - Bug #2296 (In Progress): panic: assertion "m->wire_count > 0" failed (https://bugs.dragonflybsd.org/issues/2296, 2012-02-02T06:56:02Z, thomas.nikolajsen)
<p>With recent (29/1-12) master and rel3_0 I get this panic during a parallel make release and buildworld, e.g.<br />'make MAKE_JOBS=10 release' (i.e. make -j10),<br />i386 STANDARD<br />(custom kernel, includes INCLUDE_CONFIG_FILE),<br />on an 8-core host (Opteron).</p>
<p>Got this panic twice; it succeeds without MAKE_JOBS.</p>
<p>Core dump at leaf: ~thomas:crash/octopus.i386.3</p> DragonFlyBSD - Bug #2141 (New): loader and/or documentation broken (https://bugs.dragonflybsd.org/issues/2141, 2011-10-09T14:15:29Z, sjg)
<p>For example,</p>
<pre><code>The ehci driver is automatically loaded upon boot. To disable this<br /> behavior temporarily, the ehci_load variable can be unset at the loader<br /> prompt (see loader(8)). To disable it permanently, the<br /> hint.ehci.0.disabled tunable can be set to 1 in /boot/loader.conf.</code></pre>
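<p>For reference, the permanent form the quoted text describes is a single line in /boot/loader.conf (shown exactly as the documentation states it; whether the temporary loader-prompt variant works is what this report disputes):</p>

```conf
# /boot/loader.conf -- disable the ehci driver permanently
hint.ehci.0.disabled="1"
```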
<p>But when operating from the loader prompt, the ehci_load variable has no effect at all; it seems only to be checked from the menu, which is useless if you are operating from the prompt.</p>
<p>This is confusing at best, but I am leaning more towards steaming pile. The loader or the documentation needs to be reworked.</p> DragonFlyBSD - Bug #884 (In Progress): Performance/memory problems under filesystem IO load (https://bugs.dragonflybsd.org/issues/884, 2007-12-14T18:34:28Z, hasso)
<p>While testing a drive with dd I noticed that there are serious performance problems. Programs that need disk access block for 10 or more seconds; sometimes they don't continue working until dd is finished. Raw disk access (i.e. writing directly to the disk, not to a file) is reported to be OK (I can't test it myself).</p>
<p>All tests are done with this command:<br />dd if=/dev/zero of=./file bs=4096k count=1000</p>
<p>Syncing after each dd helps to reproduce it more reliably (cache?).</p>
<p>There is one more strange thing in running these tests. I looked at memory <br />stats in top before and after running dd.</p>
<p>Before:<br />Mem: 42M Active, 40M Inact, 95M Wired, 304K Cache, 53M Buf, 795M Free<br />After:<br />Mem: 70M Active, 679M Inact, 175M Wired, 47M Cache, 109M Buf, 1752K Free</p>
<p>And as a side effect, I can't get my network interfaces up any more after running dd: "em0: Could not setup receive structures".</p> DragonFlyBSD - Bug #599 (New): 1.9.0 reproducible panic (https://bugs.dragonflybsd.org/issues/599, 2007-04-11T10:24:26Z, pavalos)
<p>Here's a panic I'm getting with some pretty serious network (www) load, then <br />doing a netstat -an:</p>
<p>Unread portion of the kernel message buffer:<br />panic: m_copydata, negative off -1<br />mp_lock = 00000000; cpuid = 0; lapic.id = 00000000<br />boot() called on cpu#0</p>
<p>syncing disks... 5<br />done<br />Uptime: 12d22h0m32s</p>
<pre><code>(kgdb) bt
#0  dumpsys () at thread.h:83
#1  0xc01954bb in boot (howto=256) at /usr/src/sys/kern/kern_shutdown.c:370
#2  0xc01957c0 in panic (fmt=Variable "fmt" is not available.
) at /usr/src/sys/kern/kern_shutdown.c:767
#3  0xc01c3a32 in m_copydata (m=0x0, off=0, len=0, cp=0xee9534b0 "\001\001\b\n\006¦*$\035\bͬ") at /usr/src/sys/kern/uipc_mbuf.c:1014
#4  0xc020fc25 in tcp_output (tp=0xdae0c720) at /usr/src/sys/netinet/tcp_output.c:690
#5  0xc02152bf in tcp_timer_persist (xtp=0xdae0c720) at /usr/src/sys/netinet/tcp_timer.c:363
#6  0xc01a6423 in softclock_handler (arg=0xc0386a80) at /usr/src/sys/kern/kern_timeout.c:307
#7  0xc019d037 in lwkt_deschedule_self (td=Variable "td" is not available.
) at /usr/src/sys/kern/lwkt_thread.c:207
Previous frame inner to this frame (corrupt stack?)
</code></pre>
<p>The kernel and vmcore are being uploaded to leaf. The source is from March 28.</p>
<p>--Peter</p>