DragonFlyBSD bugtracker: Issues
https://bugs.dragonflybsd.org/ (feed generated 2017-12-17T19:17:50Z)
DragonFlyBSD - Bug #3113 (In Progress): Booting vKernel fails due to being out of swap space
https://bugs.dragonflybsd.org/issues/3113 (2017-12-17T19:17:50Z, tcullen)
<p>The step-by-step directions at <a class="external" href="https://www.dragonflybsd.org/docs/handbook/vkernel/">https://www.dragonflybsd.org/docs/handbook/vkernel/</a> fail: when you attempt to boot the vkernel, you never get a shell prompt, because the shell is instantly and repeatedly killed amid a never-ending stream of "out of swap space" error messages.</p>
DragonFlyBSD - Bug #2828 (New): On AMD APUs and Bulldozer CPUs, the machdep.cpu_idle_hlt sysctl s...
https://bugs.dragonflybsd.org/issues/2828 (2015-06-13T23:21:11Z, vadaszi)
<p>Power usage of a default install is unnecessarily high on current AMD CPUs. Setting the default value of the machdep.cpu_idle_hlt sysctl on these CPUs to 3 by default allows for significant power savings.</p>
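<p>A minimal sketch of trying the proposed setting (standard BSD sysctl usage; run as root, and note the exact effect of each mode should be checked against the machdep sysctl documentation for your release):</p>
<pre><code># inspect the current idle-halt mode
sysctl machdep.cpu_idle_hlt

# switch to the mode proposed in this report
sysctl machdep.cpu_idle_hlt=3

# to make it permanent, add this line to /etc/sysctl.conf:
machdep.cpu_idle_hlt=3</code></pre>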
<p>I'm not sure how setting machdep.cpu_idle_hlt=3 affects power usage on AMD Family 10h CPUs (e.g. Phenom CPUs).</p>
<p>Some quick benchmarking should be done, if possible, to compare the performance difference.</p>
DragonFlyBSD - Bug #2825 (New): 3x dhclient = hanging system (objcache exhausted)
https://bugs.dragonflybsd.org/issues/2825 (2015-06-09T13:59:46Z, jaccovonb)
<p>Starting dhclient 3 times causes the system to hang:</p>
<pre><code>Warning, objcache(mbuf pkt hdr + cluster): Exhausted!</code></pre>
<p>To reproduce, set the following in /etc/rc.conf:</p>
<pre><code>ifconfig_em0="DHCP"
ifconfig_em1="DHCP"
ifconfig_em2="DHCP"</code></pre>
DragonFlyBSD - Bug #2736 (New): kernel panics on acpi_timer_probe function
https://bugs.dragonflybsd.org/issues/2736 (2014-11-17T19:14:54Z, cnbcneirabustos@gmail.com)
<p>Booting with the acpi module enabled and hpet disabled in loader.conf results in a kernel panic. I'm unable to type anything in ddb, as the system is frozen.<br />The hardware is a GoBook VR-2 General Dynamics Itronix IX605, BIOS version 124. I'll update to BIOS version 125 and update the ticket if needed.</p>
DragonFlyBSD - Bug #2735 (New): iwn panics with SYSASSERT
https://bugs.dragonflybsd.org/issues/2735 (2014-11-14T22:04:08Z, cnbcneirabustos@gmail.com)
<p>The iwn driver panics with the SYSASSERT error described at this link:<br /><a class="external" href="http://lists.freebsd.org/pipermail/freebsd-wireless/2013-July/003653.html">http://lists.freebsd.org/pipermail/freebsd-wireless/2013-July/003653.html</a></p>
DragonFlyBSD - Bug #2638 (Feedback): Fix machdep.pmap_mmu_optimize
https://bugs.dragonflybsd.org/issues/2638 (2014-02-13T21:51:39Z, tuxillo)
<p>Fix machdep.pmap_mmu_optimize (turned off by default in commit 1ac5304a10366be7ed3129ceee7ca94beb0f3183). Affects apache and rtorrent for sure.</p>
<p>"might be fixed here: a44410dd8663abb121417692995d3b365f32fd6e<br />update: it's not fixed"</p>
DragonFlyBSD - Bug #2499 (In Progress): DRAGONFLY_3_2 lockd not responding correctly
https://bugs.dragonflybsd.org/issues/2499 (2013-01-22T08:41:27Z, Nerzhul)
<p>Hello,<br />I must use lockd for concurrent access on a webserver with NFS extended storage. There is some concurrent access, and lockd isn't responding correctly.</p>
<p>On the NFSv3 client, timeouts appear and the console logs the following pattern (repeated six times):</p>
<pre>
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding
nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again
</pre>
<p>After running <code>netstat -an -f inet</code> I see there is a queue on the RPC socket:</p>
<pre>
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp4       0      0  A.B.C.65.nfsd          WebCluster1.977        ESTABLISHED
tcp4       0      0  A.B.C.65.nfsd          WebCluster1.611        ESTABLISHED
tcp4       0      0  localhost.smtp         *.*                    LISTEN
tcp4       0      0  *.ssh                  *.*                    LISTEN
tcp4       0      0  *.1017                 *.*                    CLOSED
tcp4       0      0  *.1020                 *.*                    LISTEN
tcp4       0      0  *.nfsd                 *.*                    LISTEN
tcp4       0      0  *.1023                 *.*                    LISTEN
tcp4       0      0  *.1022                 *.*                    LISTEN
tcp4       0      0  *.sunrpc               *.*                    LISTEN
tcp4       0      0  A.B.C.65.nfsd          A.B.C.96.811           ESTABLISHED
tcp4       0      0  A.B.C.65.nfsd          WebCluster1.972        ESTABLISHED
tcp4       0     48  A.B.C.65.ssh           129.175.196.190.60067  ESTABLISHED
udp4       0      0  *.918                  *.*
udp4       0      0  A.B.C.65.1028          ntp.u-psud.fr.ntp
udp4     456      0  *.1017                 *.*
udp4   18656      0  *.1018                 *.*
udp4       0      0  *.nfsd                 *.*
udp4       0      0  *.1021                 *.*
udp4       0      0  *.1020                 *.*
udp4       0      0  *.1022                 *.*
udp4       0      0  *.sunrpc               *.*
</pre>
<p>When I see that, I run <code>tcpdump -nni em0</code> to see what's happening:</p>
<pre>
22:12:42.781597 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212
22:12:48.801935 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212
22:12:54.669917 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212
22:13:00.148965 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212
</pre>
<p>After a little while, lockd responds to all requests, but many fail because of the timeout.</p>
<p>On the DragonFlyBSD server I can see this in /var/log/messages:</p>
<pre>
Jan 21 22:14:19 webfiler1 rpc.lockd: duplicate lock from WebCluster1.srv.
Jan 21 22:14:19 webfiler1 last message repeated 3 times
Jan 21 22:14:19 webfiler1 rpc.lockd: no matching entry for WebCluster1.srv.
Jan 21 22:14:29 webfiler1 dntpd[571]: issuing offset adjustment: 0.026637s
Jan 21 22:14:44 webfiler1 rpc.lockd: rpc to statd failed: RPC: Timed out
Jan 21 22:14:44 webfiler1 rpc.lockd: duplicate lock from WebCluster1.srv.
Jan 21 22:14:44 webfiler1 last message repeated 3 times
Jan 21 22:14:44 webfiler1 rpc.lockd: no matching entry for WebCluster1.srv.
</pre>
<p>I think there is a problem in DragonFlyBSD, which queues many lockd requests.</p>
DragonFlyBSD - Bug #2495 (New): DFBSD v3.3.0.960.g553fe7 - "ocnt != 0" failed in prop_object_release
https://bugs.dragonflybsd.org/issues/2495 (2013-01-21T15:41:55Z, tuxillo)
<p>Hi,</p>
<p>While trying to install a recent master onto a couple of striped disks using LVM, I get the following:</p>
<pre>
dm_target_stripe: Successfully initialized
dm_target_linear: Successfully initialized
disk scheduler: set policy of mapper/main-lv_swap to noop
disk scheduler: set policy of mapper/main-lv_root to noop
panic: assertion "ocnt != 0" failed in prop_object_release at /usr/src/sys/libprop/prop_object.c:1085
Backtrace:
#0 _get_mycpu () at ./machine/thread.h:69
#1 md_dumpsys (di=<optimized out>) at /usr/src/sys/platform/pc64/x86_64/dump_machdep.c:265
#2 0xffffffff804f3bb2 in dumpsys () at /usr/src/sys/kern/kern_shutdown.c:913
#3 0xffffffff802a77ac in db_fncall (dummy1=<optimized out>, dummy2=<optimized out>, dummy3=<optimized out>, dummy4=<optimized out>)
at /usr/src/sys/ddb/db_command.c:539
#4 0xffffffff802a7c7f in db_command (aux_cmd_tablep_end=0xffffffff809e1bb8, aux_cmd_tablep=0xffffffff809e1b80,
cmd_table=<optimized out>, last_cmdp=<optimized out>) at /usr/src/sys/ddb/db_command.c:401
#5 db_command_loop () at /usr/src/sys/ddb/db_command.c:467
#6 0xffffffff802aab41 in db_trap (type=<optimized out>, code=<optimized out>) at /usr/src/sys/ddb/db_trap.c:71
#7 0xffffffff808ac758 in kdb_trap (type=<optimized out>, code=<optimized out>, regs=<optimized out>)
at /usr/src/sys/platform/pc64/x86_64/db_interface.c:174
#8 0xffffffff808b1a10 in trap_fatal (frame=0xffffffe0556cb2f8, eva=<optimized out>) at /usr/src/sys/platform/pc64/x86_64/trap.c:1024
#9 0xffffffff808b2389 in trap (frame=0xffffffe0556cb2f8) at /usr/src/sys/platform/pc64/x86_64/trap.c:754
#10 0xffffffff8089c36f in calltrap () at /usr/src/sys/platform/pc64/x86_64/exception.S:188
#11 0xffffffff808ac549 in db_read_bytes (addr=282584257676679, size=8, data=0xffffffe0556cb3d8 "")
at /usr/src/sys/platform/pc64/x86_64/db_interface.c:240
#12 0xffffffff802a70ad in db_get_value (addr=282584257676679, size=8, is_signed=0) at /usr/src/sys/ddb/db_access.c:58
#13 0xffffffff808ad1e5 in db_nextframe (ip=<optimized out>, fp=<optimized out>) at /usr/src/sys/platform/pc64/x86_64/db_trace.c:234
#14 db_stack_trace_cmd (addr=<optimized out>, have_addr=<optimized out>, count=<optimized out>, modif=<optimized out>)
at /usr/src/sys/platform/pc64/x86_64/db_trace.c:440
#15 0xffffffff808ad3a7 in print_backtrace (count=1433187288) at /usr/src/sys/platform/pc64/x86_64/db_trace.c:452
#16 0xffffffff804f4498 in panic (fmt=0xffffffff808f6540 "assertion \"%s\" failed in %s at %s:%u")
at /usr/src/sys/kern/kern_shutdown.c:812
#17 0xffffffff80766e47 in prop_object_release (obj=0xffffffe003410680) at /usr/src/sys/libprop/prop_object.c:1085
#18 0xffffffff804f9305 in udev_event_externalize (ev=<optimized out>) at /usr/src/sys/kern/kern_udev.c:496
#19 udev_dev_read (ap=<optimized out>) at /usr/src/sys/kern/kern_udev.c:745
#20 0xffffffff804d3aa7 in dev_dread (dev=0xffffffe03204ade0, uio=<optimized out>, ioflag=<optimized out>)
at /usr/src/sys/kern/kern_device.c:192
#21 0xffffffff806d5017 in devfs_fo_read (fp=0xffffffe021529f58, uio=0xffffffe0556cb998, cred=<optimized out>, flags=<optimized out>)
at /usr/src/sys/vfs/devfs/devfs_vnops.c:1176
#22 0xffffffff8052ab51 in fo_read (cred=<optimized out>, uio=<optimized out>, fp=<optimized out>, flags=<optimized out>)
at /usr/src/sys/sys/file2.h:57
#23 dofileread (res=<optimized out>, flags=<optimized out>, auio=<optimized out>, fp=<optimized out>, fd=<optimized out>)
at /usr/src/sys/kern/sys_generic.c:305
#24 kern_preadv (fd=3, auio=0xffffffe0556cb998, flags=0, res=<optimized out>) at /usr/src/sys/kern/sys_generic.c:269
#25 0xffffffff8052ad17 in sys_read (uap=0x0) at /usr/src/sys/kern/sys_generic.c:145
#26 0xffffffff808b2a2c in syscall2 (frame=0xffffffe0556cbab8) at /usr/src/sys/platform/pc64/x86_64/trap.c:1238
#27 0xffffffff8089c5bb in Xfast_syscall () at /usr/src/sys/platform/pc64/x86_64/exception.S:323
#28 0x000000000000002b in ?? ()
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
</pre>
<p>Core dumps available at: leaf:~tuxillo/*_ocount.1</p>
<p>Cheers,<br />Antonio Huete</p>
DragonFlyBSD - Bug #2423 (New): After multiple panics/locks, hitting KKASSERT in hammer_init_cursor
https://bugs.dragonflybsd.org/issues/2423 (2012-09-17T17:12:40Z, rumcic)
<p>After quite a few lockups (the machine stopped responding, nothing on the serial console, unable to panic the machine), the HAMMER FS (v6) got a corrupted(?) UNDO.</p>
<p>Booting the latest snapshot and mounting with ro,noatime, it looked as if it was able to rerun the UNDO/FIFO. But trying to mount the "clean" FS results in tripping over a KKASSERT (error == 0 at hammer_cursor.c:202).</p>
DragonFlyBSD - Bug #2141 (New): loader and/or documentation broken
https://bugs.dragonflybsd.org/issues/2141 (2011-10-09T14:15:29Z, sjg)
<p>For example,</p>
<pre><code>The ehci driver is automatically loaded upon boot. To disable this
behavior temporarily, the ehci_load variable can be unset at the loader
prompt (see loader(8)). To disable it permanently, the
hint.ehci.0.disabled tunable can be set to 1 in /boot/loader.conf.</code></pre>
<p>But when operating from the loader prompt, the ehci_load variable has no effect at all; it seems to only be checked from the menu, which is useless if you are operating from the prompt.</p>
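<p>For reference, here is what the two documented mechanisms look like side by side (illustrative only; the reporter's point is that the first one does not actually work when typed at the prompt):</p>
<pre><code># temporarily, at the loader prompt, per the documentation:
unset ehci_load
boot

# permanently, as a line in /boot/loader.conf:
hint.ehci.0.disabled=1</code></pre>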
<p>This is confusing at best, but I am leaning more towards "steaming pile". The loader or the documentation needs to be reworked.</p>
DragonFlyBSD - Bug #1921 (In Progress): we miss mlockall
https://bugs.dragonflybsd.org/issues/1921 (2010-11-24T16:19:21Z, alexh)
<p>We don't have the mlockall/munlockall syscalls as documented in [1]. We have at least one tool in base that would benefit from it: cryptsetup. Hopefully someone more familiar with the VM system can implement it without much effort, as we already have mlock/munlock.</p>
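<p>As a sketch of why a tool like cryptsetup wants this, here is minimal, portable POSIX usage of the pair (nothing DragonFly-specific; note the call may legitimately fail with EPERM or ENOMEM under memory-locking resource limits):</p>
<pre><code class="c">#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Pin all current and future pages of the process into RAM, so that
 * secrets (e.g. cryptsetup key material) can never be paged to swap.
 * Returns 0 on success, -1 with errno set on failure. */
static int lock_all_pages(void)
{
    return mlockall(MCL_CURRENT | MCL_FUTURE);
}

int main(void)
{
    if (lock_all_pages() != 0) {
        fprintf(stderr, "mlockall: %s\n", strerror(errno));
        return 1;
    }
    /* ... work with sensitive memory here ... */
    munlockall();  /* drop the locks once the secrets are wiped */
    return 0;
}</code></pre>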
<p>Cheers,<br />Alex Hornung</p>
<p>[1]: <a class="external" href="http://opengroup.org/onlinepubs/007908799/xsh/mlockall.html">http://opengroup.org/onlinepubs/007908799/xsh/mlockall.html</a></p>
DragonFlyBSD - Bug #1831 (Feedback): HAMMER "malloc limit exceeded" panic
https://bugs.dragonflybsd.org/issues/1831 (2010-09-11T23:42:36Z, eocallaghan)
<p>I was able to reproduce a HAMMER equivalent of issue1726 with the following test case from vsrinivas in issue1726:</p>
<pre>
<code class="c">#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

main() {
    int i;
    char id[320] = {};
    for (i = 0; i < 10000000; i++) {
        sprintf(id, "%09d", i);
        link("sin.c", id);
    }
    return 0;
}
</code>
</pre>
<p>----<br /><pre>
(kgdb) bt
#0 _get_mycpu (di=0xc06d4ca0) at ./machine/thread.h:83
#1 md_dumpsys (di=0xc06d4ca0)
at /usr/src/sys/platform/pc32/i386/dump_machdep.c:263
#2 0xc0304d15 in dumpsys () at /usr/src/sys/kern/kern_shutdown.c:880
#3 0xc03052d5 in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:387
#4 0xc030559e in panic (fmt=0xc05bb41b "%s: malloc limit exceeded")
at /usr/src/sys/kern/kern_shutdown.c:786
#5 0xc03032bb in kmalloc (size=25, type=0xc1d8f590, flags=258)
at /usr/src/sys/kern/kern_slaballoc.c:503
#6 0xc04aa5a3 in hammer_alloc_mem_record (ip=0xcb803d50, data_len=25)
at /usr/src/sys/vfs/hammer/hammer_object.c:280
#7 0xc04aa91f in hammer_ip_add_directory (trans=0xce350ad4,
dip=0xcb803d50, name=0xd3cdb1d0 "000452457", bytes=9, ip=0xce31df50)
at /usr/src/sys/vfs/hammer/hammer_object.c:666
#8 0xc04bbf8a in hammer_vop_nlink (ap=0xce350b2c)
at /usr/src/sys/vfs/hammer/hammer_vnops.c:1388
#9 0xc036cc1f in vop_nlink_ap (ap=0xce350b2c)
at /usr/src/sys/kern/vfs_vopops.c:1978
#10 0xc03717ca in null_nlink (ap=0xce350b2c)
at /usr/src/sys/vfs/nullfs/null_vnops.c:164
#11 0xc036d465 in vop_nlink (ops=0xcdbbe030, nch=0xce350c48,
dvp=0xce0913e8, vp=0xce2f04e8, cred=0xcdef1738)
at /usr/src/sys/kern/vfs_vopops.c:1397
#12 0xc0365496 in kern_link (nd=0xce350c80, linknd=0xce350c48)
at /usr/src/sys/kern/vfs_syscalls.c:2320
#13 0xc036ad49 in sys_link (uap=0xce350cf0)
at /usr/src/sys/kern/vfs_syscalls.c:2345
#14 0xc055f6b3 in syscall2 (frame=0xce350d40)
at /usr/src/sys/platform/pc32/i386/trap.c:1310
#15 0xc0547fb6 in Xint0x80_syscall ()
at /usr/src/sys/platform/pc32/i386/exception.s:876
#16 0x0000001f in ?? ()
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
(kgdb)
</pre></p>
<p>Dump on my leaf account:<br /><a class="external" href="http://leaf.dragonflybsd.org/~evocallaghan/hammer_vfs_panic.7z">http://leaf.dragonflybsd.org/~evocallaghan/hammer_vfs_panic.7z</a></p>
<p>Cheers,<br />Edward.</p>
DragonFlyBSD - Bug #1198 (New): DDB loops panic in db_read_bytes
https://bugs.dragonflybsd.org/issues/1198 (2009-01-05T22:50:04Z, corecode)
<p>I have a panic which I can't debug because there is a flurry of panic messages on my screen. The offender is:</p>
<p>sys/platform/pc32/i386/db_interface.c:208</p>
<p>I see that ddb uses longjmp, but it seems that doesn't work here somehow.</p>
DragonFlyBSD - Bug #884 (In Progress): Performance/memory problems under filesystem IO load
https://bugs.dragonflybsd.org/issues/884 (2007-12-14T18:34:28Z, hasso)
<p>While testing a drive with dd I noticed that there are serious performance problems. Programs which need disk access block for 10 or more seconds. Sometimes they don't continue their work until dd is finished. Raw disk access (i.e. writing directly to the disk rather than to a file) is reported to be OK (I can't test it myself).</p>
<p>All tests are done with this command:</p>
<pre><code>dd if=/dev/zero of=./file bs=4096k count=1000</code></pre>
<p>Syncing after each dd helps to reproduce it more reliably (cache?).</p>
<p>There is one more strange thing about running these tests. I looked at the memory stats in top before and after running dd.</p>
<p>Before:</p>
<pre><code>Mem: 42M Active, 40M Inact, 95M Wired, 304K Cache, 53M Buf, 795M Free</code></pre>
<p>After:</p>
<pre><code>Mem: 70M Active, 679M Inact, 175M Wired, 47M Cache, 109M Buf, 1752K Free</code></pre>
<p>And as a side effect, I can't get my network interfaces up any more after running dd: "em0: Could not setup receive structures".</p>
DragonFlyBSD - Bug #599 (New): 1.9.0 reproducible panic
https://bugs.dragonflybsd.org/issues/599 (2007-04-11T10:24:26Z, pavalos)
<p>Here's a panic I'm getting under some pretty serious network (www) load, when then doing a netstat -an:</p>
<pre>
Unread portion of the kernel message buffer:
panic: m_copydata, negative off -1
mp_lock = 00000000; cpuid = 0; lapic.id = 00000000
boot() called on cpu#0

syncing disks... 5
done
Uptime: 12d22h0m32s
</pre>
<pre>
(kgdb) bt
#0  dumpsys () at thread.h:83
#1  0xc01954bb in boot (howto=256) at /usr/src/sys/kern/kern_shutdown.c:370
#2  0xc01957c0 in panic (fmt=Variable "fmt" is not available.
) at /usr/src/sys/kern/kern_shutdown.c:767
#3  0xc01c3a32 in m_copydata (m=0x0, off=0, len=0, cp=0xee9534b0 "\001\001\b\n\006¦*$\035\bͬ")
    at /usr/src/sys/kern/uipc_mbuf.c:1014
#4  0xc020fc25 in tcp_output (tp=0xdae0c720) at /usr/src/sys/netinet/tcp_output.c:690
#5  0xc02152bf in tcp_timer_persist (xtp=0xdae0c720) at /usr/src/sys/netinet/tcp_timer.c:363
#6  0xc01a6423 in softclock_handler (arg=0xc0386a80) at /usr/src/sys/kern/kern_timeout.c:307
#7  0xc019d037 in lwkt_deschedule_self (td=Variable "td" is not available.
) at /usr/src/sys/kern/lwkt_thread.c:207
Previous frame inner to this frame (corrupt stack?)
</pre>
<p>The kernel and vmcore are being uploaded to leaf. The source is from March 28.</p>
<p>--Peter</p>