DragonFlyBSD bugtracker: Issues
https://bugs.dragonflybsd.org/ (feed generated 2017-12-17T19:17:50Z)
DragonFlyBSD - Bug #3113 (In Progress): Booting vKernel fails due to being out of swap space
https://bugs.dragonflybsd.org/issues/3113 (2017-12-17T19:17:50Z, tcullen)
<p>The step-by-step directions at <a class="external" href="https://www.dragonflybsd.org/docs/handbook/vkernel/">https://www.dragonflybsd.org/docs/handbook/vkernel/</a> fail: when you attempt to boot the vkernel you never get a shell prompt, because the shell is instantly and repeatedly killed amid a never-ending stream of "out of swap space" error messages.</p>

DragonFlyBSD - Bug #2735 (New): iwn panics SYSASSERT
https://bugs.dragonflybsd.org/issues/2735 (2014-11-14T22:04:08Z, cnbcneirabustos@gmail.com)
<p>The iwn driver panics with the SYSASSERT error described at this link:<br /><a class="external" href="http://lists.freebsd.org/pipermail/freebsd-wireless/2013-July/003653.html">http://lists.freebsd.org/pipermail/freebsd-wireless/2013-July/003653.html</a></p>

DragonFlyBSD - Bug #2499 (In Progress): DRAGONFLY_3_2 lockd not responding correctly
https://bugs.dragonflybsd.org/issues/2499 (2013-01-22T08:41:27Z, Nerzhul)
<p>Hello,<br />I must use lockd for concurrent access on a webserver with NFS extended storage. Under concurrent access, lockd isn't responding correctly.</p>
<p>On the NFSv3 client, timeout appears and console logs:<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again</p>
<p>Running "netstat -an -f inet" I can see there is a queue on the rpc socket:</p>
<p>netstat -an -f inet</p>
<pre>
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp4       0      0  A.B.C.65.nfsd          WebCluster1.977        ESTABLISHED
tcp4       0      0  A.B.C.65.nfsd          WebCluster1.611        ESTABLISHED
tcp4       0      0  localhost.smtp         *.*                    LISTEN
tcp4       0      0  *.ssh                  *.*                    LISTEN
tcp4       0      0  *.1017                 *.*                    CLOSED
tcp4       0      0  *.1020                 *.*                    LISTEN
tcp4       0      0  *.nfsd                 *.*                    LISTEN
tcp4       0      0  *.1023                 *.*                    LISTEN
tcp4       0      0  *.1022                 *.*                    LISTEN
tcp4       0      0  *.sunrpc               *.*                    LISTEN
tcp4       0      0  A.B.C.65.nfsd          A.B.C.96.811           ESTABLISHED
tcp4       0      0  A.B.C.65.nfsd          WebCluster1.972        ESTABLISHED
tcp4       0     48  A.B.C.65.ssh           129.175.196.190.60067  ESTABLISHED
udp4       0      0  *.918                  *.*
udp4       0      0  A.B.C.65.1028          ntp.u-psud.fr.ntp
udp4     456      0  *.1017                 *.*
udp4   18656      0  *.1018                 *.*
udp4       0      0  *.nfsd                 *.*
udp4       0      0  *.1021                 *.*
udp4       0      0  *.1020                 *.*
udp4       0      0  *.1022                 *.*
udp4       0      0  *.sunrpc               *.*
</pre>
<p>When I see that, I run tcpdump -nni em0 to see what's happening:</p>
<p>22:12:42.781597 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212<br />22:12:48.801935 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212<br />22:12:54.669917 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212<br />22:13:00.148965 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212</p>
<p>After a little while, lockd responds to all requests, but many have already failed because of the timeout.</p>
<p>On the DragonFlyBSD server I can see this in /var/log/messages:</p>
<p>Jan 21 22:14:19 webfiler1 rpc.lockd: duplicate lock from WebCluster1.srv.<br />Jan 21 22:14:19 webfiler1 last message repeated 3 times<br />Jan 21 22:14:19 webfiler1 rpc.lockd: no matching entry for WebCluster1.srv.<br />Jan 21 22:14:29 webfiler1 dntpd[571]: issuing offset adjustment: 0.026637s<br />Jan 21 22:14:44 webfiler1 rpc.lockd: rpc to statd failed: RPC: Timed out<br />Jan 21 22:14:44 webfiler1 rpc.lockd: duplicate lock from WebCluster1.srv.<br />Jan 21 22:14:44 webfiler1 last message repeated 3 times<br />Jan 21 22:14:44 webfiler1 rpc.lockd: no matching entry for WebCluster1.srv.</p>
<p>I think there is a problem in DragonFlyBSD which queues many lockd requests.</p>

DragonFlyBSD - Bug #2423 (New): After multiple panics/locks, hitting KKASSERT in hammer_init_cursor
https://bugs.dragonflybsd.org/issues/2423 (2012-09-17T17:12:40Z, rumcic)
<p>After quite a few lockups (machine stopped responding, nothing on the serial console, unable to panic the machine), the HAMMER FS (v6) ended up with a corrupted(?) UNDO.</p>
<p>Booting the latest snapshot and mounting with ro,noatime, it looked as if it was able to rerun the UNDO/FIFO. But trying to mount the "clean" FS results in tripping a KKASSERT (error == 0 at hammer_cursor.c:202).</p>

DragonFlyBSD - Bug #2141 (New): loader and/or documentation broken
https://bugs.dragonflybsd.org/issues/2141 (2011-10-09T14:15:29Z, sjg)
<p>For example,</p>
<pre><code>The ehci driver is automatically loaded upon boot. To disable this<br /> behavior temporarily, the ehci_load variable can be unset at the loader<br /> prompt (see loader(8)). To disable it permanently, the<br /> hint.ehci.0.disabled tunable can be set to 1 in /boot/loader.conf.</code></pre>
<p>But when operating from the loader prompt the ehci_load variable has no effect at all; it seems to only be checked from the menu, which is useless if you are operating from the prompt.</p>
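<p>For reference, the permanent form the quoted documentation describes is a single line in /boot/loader.conf:</p>
<pre><code># /boot/loader.conf -- disable ehci(4) via the documented tunable
hint.ehci.0.disabled="1"
</code></pre>
<p>At the loader prompt, setting the hint directly (<code>set hint.ehci.0.disabled=1</code>, then <code>boot</code>) should work regardless of how ehci_load is handled, since the hint is consumed by the driver at attach time rather than by the loader menu; that is a workaround, though, not an excuse for the documented ehci_load path being menu-only.</p>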
<p>This is confusing at best, but I am leaning more towards calling it a steaming pile. The loader or the documentation needs to be reworked.</p>

DragonFlyBSD - Bug #2140 (New): hammer_io_delallocate panic with 'duplicate entry' message
https://bugs.dragonflybsd.org/issues/2140 (2011-10-06T20:04:53Z, ttw)
<p>Getting regular panics on HAMMER FSs when transferring large quantities of data over the network. I initially saw the issue with NFS, but eliminated that and am now seeing it with 'tar' piped across 'ssh'. It was most prevalent when using USB or FW storage, but the most recent test to an internal IDE drive has thrown up the same error.</p>
<p>NB: transfers are generally across unreliable links (i.e. wireless) and oft-times long stalls (i.e. minutes) occur. The most recent crash was after I had paused a 'tar/ssh' transfer using CTRL-Z.</p>
<p>NB: I am turning on crashdumps and re-executing the tests, but it may take hours to fail. Any pointers in the meantime would be excellent.</p>

DragonFlyBSD - Bug #2117 (New): ACPI and/or bce(4) problem with 2.11.0.673.g0d557 on HP DL380 G6
https://bugs.dragonflybsd.org/issues/2117 (2011-08-18T16:40:24Z, pauska)
<p>I have a standard HP ProLiant DL380 G6 server with a built-in quad-port Broadcom NIC.</p>
<p>2.10 didn't have the updated bce drivers, so I installed the 2.11.0.673 snapshot to get connectivity.</p>
<p>First, the ACPI error (also present in 2.10):<br />[ACPI Debug] String [0xB] "_TMP Method"</p>
<p>This message repeats 60 times every 10 minutes. I have no idea what it means, <br />googling for it only points me at a NetBSD discussion from 2009.</p>
<p>Secondly, the bce driver (or perhaps atapci?):</p>
<pre>
interrupt                     total       rate
sio2                              0          0
sio0                              0          0
acpi0                         12125          0
bce0                        1547359         26
bce1/atapci0             2293301893      39875  <-- ouch?
bce2                              0          0
bce3                              0          0
uhci0/ehci0                       1          0
uhci2/uhci4                      34          0
uhci1/uhci3                      44          0
ciss0                        267683          4
swi_siopoll                       0          0
swi_cambio                   267762          4
swi_vm                            0          0
swi_taskq/swi_mp_taskq           25          0
Total                    2295396926      39911
</pre>
<p>The weird part is that I don't have any ATA devices in use; there's only a CD-ROM. bce1 isn't configured or marked up, only bce0 is in use.</p>
<p>The deal breaker here is that I can't do anything disk-intensive without getting a crash. I tried updating pkgsrc yesterday, and here are two examples:</p>
<pre>
[snip]
 * [new branch]      dragonfly-2010Q3 -> origin/dragonfly-2010Q3
*** Signal 10
Stop in /usr.
[snip]
</pre>
<pre>
[snip]
 * [new branch]      master -> origin/master
Bus error (core dumped)
*** Error code 1
Stop in /usr.
[snip]
</pre>
<p>While getting these errors messages like this flooded dmesg:<br />intr 16 at 40001/40000 hz, livelocked limit engaged!<br />[ACPI Debug] String [0xB] "_TMP Method" <br />intr 16 at 882/20000 hz, livelock removed<br />intr 16 at 40001/40000 hz, livelocked limit engaged!<br />pid 34805 (git), uid 0: exited on signal 10 (core dumped)<br />intr 16 at 3225/20000 hz, livelock removed<br />intr 16 at 40001/40000 hz, livelocked limit engaged!<br />intr 16 at 751/20000 hz, livelock removed<br />intr 16 at 40001/40000 hz, livelocked limit engaged!<br />intr 16 at 765/20000 hz, livelock removed<br />intr 16 at 40001/40000 hz, livelocked limit engaged!<br />intr 16 at 795/20000 hz, livelock removed<br />[ACPI Debug] String [0xB] "_TMP Method"</p>
<p>I'm not familiar with debugging this, so please let me know if you need more info. I can also put the server in the DMZ and give a developer SSH access if needed.</p>

DragonFlyBSD - Bug #2071 (New): Panic on assertion: (int)(flg->seq - seq) > 0 in hammer_flusher_f...
https://bugs.dragonflybsd.org/issues/2071 (2011-05-14T19:20:34Z, vsrinivas (vsrinivas@ops101.org))
<p>ad10: TIMEOUT - WRITE_DMA48 retrying (1 retry left) LBA=1500424384<br />ad10: FAILURE - WRITE_DMA48 status=51<READY,DSC,ERROR> error=10<NID_NOT_FOUND><br />LBA=1500424384<br />HAMMER(NEWTANK): Critical error inode=-1 error=5 while flushing meta-data<br />HAMMER(NEWTANK): Forcing read-only mode<br />hammer: debug: forcing async flush ip 000000032433ccec<br />hammer: debug: forcing async flush ip 000000032433ccec<br />panic: assertion: (int)(flg->seq - seq) > 0 in hammer_flusher_flush<br />Trace beginning at frame 0xcbfadd10<br />panic(ffffffff,c07ace20,c064b39b,cbfadd40,cbfa00fc) at panic+0x101<br />panic(c064b39b,c067760e,c06353b6,0,cbfa0108) at panic+0x101<br />hammer_flusher_master_thread(cbfa0000,0,0,0,0) at hammer_flusher_master_thread+0x14e<br />lwkt_exit() at lwkt_exit<br />Uptime: 5d16h20m44s<br />Physical memory: 502 MB<br />Dumping 205 MB: 190 174 158 142 126 110 94 78 62 46 30 14</p>
<p>After an inode failure, HAMMER panicked on an assertion in hammer_flusher_flush.<br />ad10 was a WD EARS 1.5TB disk attached to a Silicon Image 3114 controller,<br />running via nata. The only I/O to the disk was a mirror-stream.</p>
<p>Core and kernel are on leaf.dragonflybsd.org, /home/vsrinivas/hammerpanic.</p>

DragonFlyBSD - Bug #1921 (In Progress): we miss mlockall
https://bugs.dragonflybsd.org/issues/1921 (2010-11-24T16:19:21Z, alexh)
<p>We don't have the mlockall/munlockall syscalls as documented in [1]. We have at least one tool in base that would benefit from them: cryptsetup. Hopefully someone more familiar with the VM system can implement them without much effort, as we already have mlock/munlock.</p>
<p>Cheers,<br />Alex Hornung</p>
<p>[1]: <a class="external" href="http://opengroup.org/onlinepubs/007908799/xsh/mlockall.html">http://opengroup.org/onlinepubs/007908799/xsh/mlockall.html</a></p>

DragonFlyBSD - Bug #1920 (New): system hangs
https://bugs.dragonflybsd.org/issues/1920 (2010-11-22T16:59:00Z, zhtw (root@zta.lk))
<p>The system hangs once in a while, leaving these messages on the console:</p>
<p>====<br />pfr_unroute_kentry: delete failed<br />pfr_unroute_kentry: delete failed<br />pfr_unroute_kentry: delete failed<br />pfr_unroute_kentry: delete failed<br />pfr_unroute_kentry: delete failed<br />pfr_unroute_kentry: delete failed<br />pfr_unroute_kentry: delete failed<br />pfr_unroute_kentry: delete failed</p>
<p>panic: IP header not in one mbuf<br />mp_lock = ffffffff; cpuid=1<br />Trace beginning at frame 0xffffffe03d9b0a60<br />panic() at panic+0x239<br />panic() at panic+0x239<br />ip_input() at ip_input+0x153<br />ip_input_handler() at ip_input_handler+0xd<br />netmsg_service_loop() at netmsg_service_loop+0x6f<br />Debugger("panic")</p>
<p>CPU1 stopping CPUs: 0x00000001<br />stopped<br />Stopped at Debugger+0x39 movb $0,0x387060(%rip)
====</p>
<p>I couldn't grab this from the screen and copied it by hand, so there may be typos.</p>
<p># uname -a<br />DragonFly chinua.zzz.umc8.ru 2.8-RELEASE DragonFly v2.8.2.38.gb1139-RELEASE #2:<br />Mon Nov 22 10:06:11 MSK 2010<br />root@chinua.zzz.umc8.ru:/usr/obj/usr/src/sys/X86_64_GENERIC_SMP x86_64</p>
<p>The dmesg.boot is attached.</p>

DragonFlyBSD - Bug #1831 (Feedback): HAMMER "malloc limit exceeded" panic
https://bugs.dragonflybsd.org/issues/1831 (2010-09-11T23:42:36Z, eocallaghan)
<p>I was able to reproduce a HAMMER equivalent of issue #1726 with the following test case from vsrinivas in that issue:</p>
<pre>
<code class="c syntaxhl" data-language="c">#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int
main(void)
{
	int i;
	char id[320] = {};

	/* Flood the directory with hard links to exhaust HAMMER's in-memory
	 * record allocations. */
	for (i = 0; i < 10000000; i++) {
		sprintf(id, "%09d", i);
		link("sin.c", id);
	}
	return 0;
}
</code></pre>
<p>----<br /><pre>
(kgdb) bt
#0 _get_mycpu (di=0xc06d4ca0) at ./machine/thread.h:83
#1 md_dumpsys (di=0xc06d4ca0)
at /usr/src/sys/platform/pc32/i386/dump_machdep.c:263
#2 0xc0304d15 in dumpsys () at /usr/src/sys/kern/kern_shutdown.c:880
#3 0xc03052d5 in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:387
#4 0xc030559e in panic (fmt=0xc05bb41b "%s: malloc limit exceeded")
at /usr/src/sys/kern/kern_shutdown.c:786
#5 0xc03032bb in kmalloc (size=25, type=0xc1d8f590, flags=258)
at /usr/src/sys/kern/kern_slaballoc.c:503
#6 0xc04aa5a3 in hammer_alloc_mem_record (ip=0xcb803d50, data_len=25)
at /usr/src/sys/vfs/hammer/hammer_object.c:280
#7 0xc04aa91f in hammer_ip_add_directory (trans=0xce350ad4,
dip=0xcb803d50, name=0xd3cdb1d0 "000452457", bytes=9, ip=0xce31df50)
at /usr/src/sys/vfs/hammer/hammer_object.c:666
#8 0xc04bbf8a in hammer_vop_nlink (ap=0xce350b2c)
at /usr/src/sys/vfs/hammer/hammer_vnops.c:1388
#9 0xc036cc1f in vop_nlink_ap (ap=0xce350b2c)
at /usr/src/sys/kern/vfs_vopops.c:1978
#10 0xc03717ca in null_nlink (ap=0xce350b2c)
at /usr/src/sys/vfs/nullfs/null_vnops.c:164
#11 0xc036d465 in vop_nlink (ops=0xcdbbe030, nch=0xce350c48,
dvp=0xce0913e8, vp=0xce2f04e8, cred=0xcdef1738)
at /usr/src/sys/kern/vfs_vopops.c:1397
#12 0xc0365496 in kern_link (nd=0xce350c80, linknd=0xce350c48)
at /usr/src/sys/kern/vfs_syscalls.c:2320
#13 0xc036ad49 in sys_link (uap=0xce350cf0)
at /usr/src/sys/kern/vfs_syscalls.c:2345
#14 0xc055f6b3 in syscall2 (frame=0xce350d40)
at /usr/src/sys/platform/pc32/i386/trap.c:1310
#15 0xc0547fb6 in Xint0x80_syscall ()
at /usr/src/sys/platform/pc32/i386/exception.s:876
#16 0x0000001f in ?? ()
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
(kgdb)
</pre></p>
<p>The dump is on my leaf account:<br /><a class="external" href="http://leaf.dragonflybsd.org/~evocallaghan/hammer_vfs_panic.7z">http://leaf.dragonflybsd.org/~evocallaghan/hammer_vfs_panic.7z</a></p>
<p>Cheers,<br />Edward.</p>

DragonFlyBSD - Bug #1198 (New): DDB loops panic in db_read_bytes
https://bugs.dragonflybsd.org/issues/1198 (2009-01-05T22:50:04Z, corecode)
<p>I have a panic which I can't debug because there is a flurry of panic messages on my screen. The offender is:</p>
<p>sys/platform/pc32/i386/db_interface.c:208</p>
<p>I see that ddb uses longjmp, but it seems that doesn't work here somehow.</p>

DragonFlyBSD - Bug #1185 (New): need a tool to merge changes into /etc
https://bugs.dragonflybsd.org/issues/1185 (2008-12-20T07:47:08Z, wa1ter)
<p>I run mergemaster occasionally because it catches files that make upgrade misses. Today it missed about half a dozen or so, including aliases, ftpusers, /usr/Makefile, and maybe a few others I can't remember.</p>
<p>Thanks.</p>

DragonFlyBSD - Bug #884 (In Progress): Performance/memory problems under filesystem IO load
https://bugs.dragonflybsd.org/issues/884 (2007-12-14T18:34:28Z, hasso)
<p>While testing a drive with dd I noticed that there are serious performance problems. Programs which need disk access block for 10 or more seconds; sometimes they don't continue their work until dd is finished. Raw disk access (i.e. writing directly to the disk, not to a file) is reported to be OK (I can't test it myself).</p>
<p>All tests are done with this command:<br />dd if=/dev/zero of=./file bs=4096k count=1000</p>
<p>Syncing after each dd helps to reproduce it more reliably (cache?).</p>
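<p>For anyone retrying this, the two steps above (dd, then sync) can be put in a small loop; the file name here is arbitrary and COUNT is lowered from the report's 1000 (which produces a ~4 GB file) so a dry run is cheap. While it runs, time an unrelated disk access from another terminal to observe the blocking:</p>

```shell
# Sketch of the reproduction recipe: write a large file with dd, sync,
# repeat. Raise COUNT to 1000 to match the original ~4 GB test.
COUNT=${COUNT:-10}
for i in 1 2 3; do
    dd if=/dev/zero of=./ddtest.file bs=4096k count="$COUNT" 2>/dev/null
    sync
done
wc -c ./ddtest.file
```
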
<p>There is one more strange thing in running these tests. I looked at memory <br />stats in top before and after running dd.</p>
<p>Before:<br />Mem: 42M Active, 40M Inact, 95M Wired, 304K Cache, 53M Buf, 795M Free<br />After:<br />Mem: 70M Active, 679M Inact, 175M Wired, 47M Cache, 109M Buf, 1752K Free</p>
<p>And as a side effect, I can't get my network interfaces up any more after running dd: "em0: Could not setup receive structures".</p>

DragonFlyBSD - Bug #599 (New): 1.9.0 reproducible panic
https://bugs.dragonflybsd.org/issues/599 (2007-04-11T10:24:26Z, pavalos)
<p>Here's a panic I'm getting with some pretty serious network (www) load, then <br />doing a netstat -an:</p>
<p>Unread portion of the kernel message buffer:<br />panic: m_copydata, negative off -1<br />mp_lock = 00000000; cpuid = 0; lapic.id = 00000000<br />boot() called on cpu#0</p>
<p>syncing disks... 5<br />done<br />Uptime: 12d22h0m32s</p>
<p>(kgdb) bt<br />#0 dumpsys () at thread.h:83<br />#1 0xc01954bb in boot (howto=256) at /usr/src/sys/kern/kern_shutdown.c:370<br />#2 0xc01957c0 in panic (fmt=Variable "fmt" is not available.<br />) at /usr/src/sys/kern/kern_shutdown.c:767<br />#3 0xc01c3a32 in m_copydata (m=0x0, off=0, len=0, cp=0xee9534b0 "\001\001\b\n\006¦*$\035\bͬ") at /usr/src/sys/kern/uipc_mbuf.c:1014<br />#4 0xc020fc25 in tcp_output (tp=0xdae0c720) at /usr/src/sys/netinet/tcp_output.c:690<br />#5 0xc02152bf in tcp_timer_persist (xtp=0xdae0c720) at /usr/src/sys/netinet/tcp_timer.c:363<br />#6 0xc01a6423 in softclock_handler (arg=0xc0386a80) at /usr/src/sys/kern/kern_timeout.c:307<br />#7 0xc019d037 in lwkt_deschedule_self (td=Variable "td" is not available.<br />) at /usr/src/sys/kern/lwkt_thread.c:207<br />Previous frame inner to this frame (corrupt stack?)</p>
<p>The kernel and vmcore are being uploaded to leaf. The source is from March 28.</p>
<p>--Peter</p>