DragonFlyBSD bugtracker: Issues
https://bugs.dragonflybsd.org/ (feed retrieved 2021-03-14)
DragonFlyBSD - Bug #3266 (New): Filesystems broken due to "KKASSERT(count & TOK_COUNTMASK);"
https://bugs.dragonflybsd.org/issues/3266 (2021-03-14, tkusumi <kusumi.tomohiro@gmail.com>)
<p>Many filesystems, including HAMMER2, are broken due to this assert failure.<br />Confirmed the panic with HAMMER2 and ext2.<br />It didn't happen a few months ago.</p>
<pre><code>433 static __inline
434 void
435 _lwkt_reltokref(lwkt_tokref_t ref, thread_t td)
436 {
...
454         /*
455          * We are a shared holder
456          */
457         count = atomic_fetchadd_long(&tok->t_count, -TOK_INCR);
458         KKASSERT(count & TOK_COUNTMASK);        /* count prior */ <----------
459     }
460 }</code></pre> DragonFlyBSD - Bug #3205 (Feedback): Go compiler net test failing
https://bugs.dragonflybsd.org/issues/3205 (2019-09-18, Anonymous)
<p>A recent commit appears to have broken the net test for the Go compiler:<br /><a class="external" href="https://build.golang.org/log/58be31cfd1a92ba9582fdf33e01f79e03184e59b">https://build.golang.org/log/58be31cfd1a92ba9582fdf33e01f79e03184e59b</a></p>
<p>This was working on commit be02f354 and started failing when I upgraded to b7d3e1.</p> DragonFlyBSD - Bug #3194 (New): Hammer kernel crash on mirror-stream of PFS after upgrade (assert...
https://bugs.dragonflybsd.org/issues/3194 (2019-06-29, Anonymous)
<p>I operate two HAMMER arrays, each with PFSs that are mirrored via ``hammer mirror-stream`` from an array master to an array slave.</p>
<p>I recently upgraded to DragonFly 5.6.</p>
<p>After 18 hours of activity, I encountered a kernel fault that can reliably be provoked by running ``hammer mirror-stream`` on one PFS.</p>
<p>I've disabled PFS mirroring until this can be corrected.</p>
<p>Here is the transcribed error from the kernel fault:</p>
<pre><code>panic: assertion "cursor->flags & HAMMER_CURSOR_ITERATE_CHECK" failed in hammer_btree_iterate at /usr/src/sys/vfs/hammer/hammer_btree.c:263
cpuid = 2
Trace beginning at frame 0xfffff801e6875158
hammer_btree_iterate() at hammer_btree_iterate+0x839 0xffffffff80900db9
hammer_btree_iterate() at hammer_btree_iterate+0x839 0xffffffff80900db9
hammer_mirror_delete_to.isra.2() at hammer_mirror_delete_to.isra.2+0x12 0xffffffff80914d92
hammer_ioc_mirror_write() at hammer_ioc_mirror_write+0x3b7 0xffffffff809159a7
hammer_ioctl() at hammer_ioctl+0xebe 0xffffffff809143de
hammer_vop_ioctl() at hammer_vop_ioctl+0x48 0xffffffff8092d458
Debugger("panic")

CPU2 stopping CPUs: 0x0000000b
 stopped
Stopped at Debugger+0x7c: movb $0,0xfc9be9(%rip)
db>
</code></pre>
<p>Unfortunately, nothing was saved to /var/crash for this fault except an empty ``kern.0`` file.</p> DragonFlyBSD - Bug #3113 (In Progress): Booting vKernel fails due to being out of swap space
https://bugs.dragonflybsd.org/issues/3113 (2017-12-17, tcullen)
<p>The step-by-step directions at <a class="external" href="https://www.dragonflybsd.org/docs/handbook/vkernel/">https://www.dragonflybsd.org/docs/handbook/vkernel/</a> fail: when you attempt to boot the vkernel, you never get a shell prompt, because the shell is instantly and repeatedly killed amid a never-ending stream of "out of swap space" error messages.</p> DragonFlyBSD - Bug #2870 (New): Broken text and icons when glamor acceleration is used
https://bugs.dragonflybsd.org/issues/2870 (2015-12-19, 375gnu)
<p>I'm using DF 4.4.1 with radeonkms; my video card is a Radeon 7750. When I disable 2D acceleration in xorg.conf the output is correct, but when I enable it I see garbage: the more applications are running, the more garbage I see; letters in GTK2/3 applications are broken and icons are black.</p>
<p>It depends on the number of applications started: when I use startx (twm + 3 xterms) and start one GTK application, only its icons are broken and Qt applications are clean, but when I use startxfce everything is broken.</p>
<p>It does not matter which 3D (Mesa) driver is used, radeonsi or llvmpipe.</p> DragonFlyBSD - Bug #2828 (New): On AMD APUs and Bulldozer CPUs, the machdep.cpu_idle_hlt sysctl s...
https://bugs.dragonflybsd.org/issues/2828 (2015-06-13, vadaszi)
<p>Power usage of a default install is unnecessarily high on current AMD CPUs. Changing the default value of the machdep.cpu_idle_hlt sysctl to 3 on these CPUs allows for significant power savings.</p>
<p>I'm not sure how setting machdep.cpu_idle_hlt=3 affects power usage on AMD Family 10h CPUs (e.g. Phenom CPUs).</p>
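<p>For anyone who wants to test this before it becomes the default, the setting can be applied live and then persisted; this is a sketch assuming the stock sysctl.conf mechanism:</p>

```
# apply at runtime:
#   sysctl machdep.cpu_idle_hlt=3
# persist across reboots in /etc/sysctl.conf:
machdep.cpu_idle_hlt=3
```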
<p>Some quick benchmarking should be done if possible, to quantify the performance difference.</p> DragonFlyBSD - Bug #2825 (New): 3x dhclient = hanging system (objcache exhausted)
https://bugs.dragonflybsd.org/issues/2825 (2015-06-09, jaccovonb)
<p>Starting dhclient 3 times causes the system to hang:</p>
<pre><code>Warning, objcache(mbuf pkt hdr + cluster): Exhausted!</code></pre>
<p>To reproduce, set the following in /etc/rc.conf:</p>
<pre><code>ifconfig_em0="DHCP"
ifconfig_em1="DHCP"
ifconfig_em2="DHCP"</code></pre> DragonFlyBSD - Bug #2735 (New): iwn panics SYSASSERT
https://bugs.dragonflybsd.org/issues/2735 (2014-11-14, cnb <cneirabustos@gmail.com>)
<p>The iwn driver panics with the SYSASSERT error described at this link:<br /><a class="external" href="http://lists.freebsd.org/pipermail/freebsd-wireless/2013-July/003653.html">http://lists.freebsd.org/pipermail/freebsd-wireless/2013-July/003653.html</a></p> DragonFlyBSD - Bug #2499 (In Progress): DRAGONFLY_3_2 lockd not responding correctly
https://bugs.dragonflybsd.org/issues/2499 (2013-01-22, Nerzhul)
<p>Hello,<br />I must use lockd for concurrent access on a webserver with NFS extended storage. There is some concurrent access, and lockd isn't responding correctly.</p>
<p>On the NFSv3 client, timeout appears and console logs:<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again</p>
<p>After running "netstat -an -f inet" I see there is a queue on an RPC socket:</p>
<pre><code>netstat -an -f inet

Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp4       0      0  A.B.C.65.nfsd          WebCluster1.977        ESTABLISHED
tcp4       0      0  A.B.C.65.nfsd          WebCluster1.611        ESTABLISHED
tcp4       0      0  localhost.smtp         *.*                    LISTEN
tcp4       0      0  *.ssh                  *.*                    LISTEN
tcp4       0      0  *.1017                 *.*                    CLOSED
tcp4       0      0  *.1020                 *.*                    LISTEN
tcp4       0      0  *.nfsd                 *.*                    LISTEN
tcp4       0      0  *.1023                 *.*                    LISTEN
tcp4       0      0  *.1022                 *.*                    LISTEN
tcp4       0      0  *.sunrpc               *.*                    LISTEN
tcp4       0      0  A.B.C.65.nfsd          A.B.C.96.811           ESTABLISHED
tcp4       0      0  A.B.C.65.nfsd          WebCluster1.972        ESTABLISHED
tcp4       0     48  A.B.C.65.ssh           129.175.196.190.60067  ESTABLISHED
udp4       0      0  *.918                  *.*
udp4       0      0  A.B.C.65.1028          ntp.u-psud.fr.ntp
udp4     456      0  *.1017                 *.*
udp4   18656      0  *.1018                 *.*
udp4       0      0  *.nfsd                 *.*
udp4       0      0  *.1021                 *.*
udp4       0      0  *.1020                 *.*
udp4       0      0  *.1022                 *.*
udp4       0      0  *.sunrpc               *.*</code></pre>
<p>When I see that, I run tcpdump -nni em0 to see what's happening:</p>
<pre><code>22:12:42.781597 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212
22:12:48.801935 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212
22:12:54.669917 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212
22:13:00.148965 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212</code></pre>
<p>After a little while, lockd responds to all requests, but many have already failed because of the timeout.</p>
<p>On the DragonFlyBSD server I can see this in /var/log/messages:</p>
<p>Jan 21 22:14:19 webfiler1 rpc.lockd: duplicate lock from WebCluster1.srv.<br />Jan 21 22:14:19 webfiler1 last message repeated 3 times<br />Jan 21 22:14:19 webfiler1 rpc.lockd: no matching entry for WebCluster1.srv.<br />Jan 21 22:14:29 webfiler1 dntpd[571]: issuing offset adjustment: 0.026637s<br />Jan 21 22:14:44 webfiler1 rpc.lockd: rpc to statd failed: RPC: Timed out<br />Jan 21 22:14:44 webfiler1 rpc.lockd: duplicate lock from WebCluster1.srv.<br />Jan 21 22:14:44 webfiler1 last message repeated 3 times<br />Jan 21 22:14:44 webfiler1 rpc.lockd: no matching entry for WebCluster1.srv.</p>
<p>I think there is a problem in DragonFlyBSD that queues up many lockd requests.</p> DragonFlyBSD - Bug #2423 (New): After multiple panics/locks, hitting KKASSERT in hammer_init_cursor
https://bugs.dragonflybsd.org/issues/2423 (2012-09-17, rumcic)
<p>After quite a few lockups (machine stopped responding, nothing on the serial console, unable to panic the machine), the HAMMER FS (v6) got a corrupted(?) UNDO.</p>
<p>Booting the latest snapshot and mounting with ro,noatime, it looked as if it was able to rerun the UNDO/FIFO. But trying to mount the "clean" FS results in tripping over a KKASSERT (error == 0 at hammer_cursor.c:202).</p> DragonFlyBSD - Bug #2141 (New): loader and/or documentation broken
https://bugs.dragonflybsd.org/issues/2141 (2011-10-09, sjg)
<p>For example,</p>
<pre><code>The ehci driver is automatically loaded upon boot. To disable this<br /> behavior temporarily, the ehci_load variable can be unset at the loader<br /> prompt (see loader(8)). To disable it permanently, the<br /> hint.ehci.0.disabled tunable can be set to 1 in /boot/loader.conf.</code></pre>
<p>But when operating from the loader prompt, the ehci_load variable has no effect at all; it seems only to be checked from the menu, which is useless if you are operating from the prompt.</p>
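<p>For reference, the permanent method from the quoted documentation is the hint tunable; only the ehci_load path at the prompt appears broken (the loader.conf line below is taken straight from the quoted text):</p>

```
# /boot/loader.conf -- permanent disable, per the quoted documentation
hint.ehci.0.disabled="1"

# at the loader prompt, the documented (but apparently ineffective) method:
#   unset ehci_load
```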
<p>This is confusing at best, but I am leaning more towards steaming pile. The loader or the documentation needs to be reworked.</p> DragonFlyBSD - Bug #1921 (In Progress): we miss mlockall
https://bugs.dragonflybsd.org/issues/1921 (2010-11-24, alexh)
<p>We don't have the mlockall/munlockall syscalls as documented in [1]. We have at least one tool in base that would benefit from them: cryptsetup. Hopefully someone more familiar with the VM system can implement them without much effort, as we already have mlock/munlock.</p>
<p>Cheers,<br />Alex Hornung</p>
<p>[1]: <a class="external" href="http://opengroup.org/onlinepubs/007908799/xsh/mlockall.html">http://opengroup.org/onlinepubs/007908799/xsh/mlockall.html</a></p> DragonFlyBSD - Bug #1831 (Feedback): HAMMER "malloc limit exceeded" panic
https://bugs.dragonflybsd.org/issues/1831 (2010-09-11, eocallaghan)
<p>I was able to reproduce a HAMMER equivalent of issue 1726, using the following test case from vsrinivas in issue 1726:</p>
<pre><code>#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int
main(void) {
	int i;
	char id[320] = {};
	for (i = 0; i < 10000000; i++) {
		sprintf(id, "%09d", i);
		link("sin.c", id);
	}
	return 0;
}
</code></pre>
<p>----<br /><pre>
(kgdb) bt
#0 _get_mycpu (di=0xc06d4ca0) at ./machine/thread.h:83
#1 md_dumpsys (di=0xc06d4ca0)
at /usr/src/sys/platform/pc32/i386/dump_machdep.c:263
#2 0xc0304d15 in dumpsys () at /usr/src/sys/kern/kern_shutdown.c:880
#3 0xc03052d5 in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:387
#4 0xc030559e in panic (fmt=0xc05bb41b "%s: malloc limit exceeded")
at /usr/src/sys/kern/kern_shutdown.c:786
#5 0xc03032bb in kmalloc (size=25, type=0xc1d8f590, flags=258)
at /usr/src/sys/kern/kern_slaballoc.c:503
#6 0xc04aa5a3 in hammer_alloc_mem_record (ip=0xcb803d50, data_len=25)
at /usr/src/sys/vfs/hammer/hammer_object.c:280
#7 0xc04aa91f in hammer_ip_add_directory (trans=0xce350ad4,
dip=0xcb803d50, name=0xd3cdb1d0 "000452457", bytes=9, ip=0xce31df50)
at /usr/src/sys/vfs/hammer/hammer_object.c:666
#8 0xc04bbf8a in hammer_vop_nlink (ap=0xce350b2c)
at /usr/src/sys/vfs/hammer/hammer_vnops.c:1388
#9 0xc036cc1f in vop_nlink_ap (ap=0xce350b2c)
at /usr/src/sys/kern/vfs_vopops.c:1978
#10 0xc03717ca in null_nlink (ap=0xce350b2c)
at /usr/src/sys/vfs/nullfs/null_vnops.c:164
#11 0xc036d465 in vop_nlink (ops=0xcdbbe030, nch=0xce350c48,
dvp=0xce0913e8, vp=0xce2f04e8, cred=0xcdef1738)
at /usr/src/sys/kern/vfs_vopops.c:1397
---Type <return> to continue, or q <return> to quit---
#12 0xc0365496 in kern_link (nd=0xce350c80, linknd=0xce350c48)
at /usr/src/sys/kern/vfs_syscalls.c:2320
#13 0xc036ad49 in sys_link (uap=0xce350cf0)
at /usr/src/sys/kern/vfs_syscalls.c:2345
#14 0xc055f6b3 in syscall2 (frame=0xce350d40)
at /usr/src/sys/platform/pc32/i386/trap.c:1310
#15 0xc0547fb6 in Xint0x80_syscall ()
at /usr/src/sys/platform/pc32/i386/exception.s:876
#16 0x0000001f in ?? ()
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
(kgdb)
</pre></p>
<p>Dump on my leaf account:<br /><a class="external" href="http://leaf.dragonflybsd.org/~evocallaghan/hammer_vfs_panic.7z">http://leaf.dragonflybsd.org/~evocallaghan/hammer_vfs_panic.7z</a></p>
<p>Cheers,<br />Edward.</p> DragonFlyBSD - Bug #1198 (New): DDB loops panic in db_read_bytes
https://bugs.dragonflybsd.org/issues/1198 (2009-01-05, corecode)
<p>I have some panic which I can't debug because there is a flurry of panic<br />messages on my screen. The offender is:</p>
<p>sys/platform/pc32/i386/db_interface.c:208</p>
<p>I see that ddb uses longjmp, but it seems that doesn't work here somehow.</p> DragonFlyBSD - Bug #599 (New): 1.9.0 reproducible panic
https://bugs.dragonflybsd.org/issues/599 (2007-04-11, pavalos)
<p>Here's a panic I'm getting with some pretty serious network (www) load, then <br />doing a netstat -an:</p>
<p>Unread portion of the kernel message buffer:<br />panic: m_copydata, negative off -1<br />mp_lock = 00000000; cpuid = 0; lapic.id = 00000000<br />boot() called on cpu#0</p>
<p>syncing disks... 5<br />done<br />Uptime: 12d22h0m32s</p>
<pre><code>(kgdb) bt
#0  dumpsys () at thread.h:83
#1  0xc01954bb in boot (howto=256) at /usr/src/sys/kern/kern_shutdown.c:370
#2  0xc01957c0 in panic (fmt=Variable "fmt" is not available.
) at /usr/src/sys/kern/kern_shutdown.c:767
#3  0xc01c3a32 in m_copydata (m=0x0, off=0, len=0, cp=0xee9534b0 "\001\001\b\n\006¦*$\035\bͬ")
    at /usr/src/sys/kern/uipc_mbuf.c:1014
#4  0xc020fc25 in tcp_output (tp=0xdae0c720)
    at /usr/src/sys/netinet/tcp_output.c:690
#5  0xc02152bf in tcp_timer_persist (xtp=0xdae0c720)
    at /usr/src/sys/netinet/tcp_timer.c:363
#6  0xc01a6423 in softclock_handler (arg=0xc0386a80)
    at /usr/src/sys/kern/kern_timeout.c:307
#7  0xc019d037 in lwkt_deschedule_self (td=Variable "td" is not available.
) at /usr/src/sys/kern/lwkt_thread.c:207
Previous frame inner to this frame (corrupt stack?)</code></pre>
<p>The kernel and vmcore are being uploaded to leaf. The source is from March 28.</p>
<p>--Peter</p>