DragonFlyBSD bugtracker: Issues (https://bugs.dragonflybsd.org/, 2021-03-14T16:43:03Z)
DragonFlyBSD - Bug #3266 (New): Filesystems broken due to "KKASSERT(count & TOK_COUNTMASK);" (https://bugs.dragonflybsd.org/issues/3266, 2021-03-14T16:43:03Z, tkusumi, kusumi.tomohiro@gmail.com)
<p>Many filesystems, including HAMMER2, are broken due to this assert failure.<br />Confirmed the panic with HAMMER2 and ext2.<br />It didn't happen a few months ago.</p>
<pre><code>433 static __inline<br />434 void<br />435 _lwkt_reltokref(lwkt_tokref_t ref, thread_t td)<br />436 {<br />...<br />454         /*<br />455          * We are a shared holder<br />456          */<br />457         count = atomic_fetchadd_long(&tok->t_count, -TOK_INCR);<br />458         KKASSERT(count & TOK_COUNTMASK); /* count prior */ <----------<br />459     }<br />460 }</code></pre> DragonFlyBSD - Bug #3205 (Feedback): Go compiler net test failing (https://bugs.dragonflybsd.org/issues/3205, 2019-09-18T13:28:07Z, Anonymous)
<p>A recent commit appears to have broken the net test for the Go compiler:<br /><a class="external" href="https://build.golang.org/log/58be31cfd1a92ba9582fdf33e01f79e03184e59b">https://build.golang.org/log/58be31cfd1a92ba9582fdf33e01f79e03184e59b</a></p>
<p>This was working on commit be02f354 and started failing when I upgraded to b7d3e1.</p> DragonFlyBSD - Bug #3194 (New): Hammer kernel crash on mirror-stream of PFS after upgrade (assert... (https://bugs.dragonflybsd.org/issues/3194, 2019-06-29T19:53:41Z, Anonymous)
<p>I operate two HAMMER arrays, each with PFSs that are mirrored via ``hammer mirror-stream`` from an array master to an array slave.</p>
<p>I recently upgraded to DragonFly 5.6.</p>
<p>After 18 hours of activity, I encountered a kernel fault that can reliably be provoked by running ``hammer mirror-stream`` on one PFS.</p>
<p>I've disabled PFS mirroring until this can be corrected.</p>
<p>Here is the transcribed error from the kernel fault:</p>
<p>```<br />panic: assertion "cursor->flags & HAMMER_CURSOR_ITERATE_CHECK" failed in hammer_btree_iterate at /usr/src/sys/vfs/hammer/hammer_btree.c:263<br />cpuid = 2<br />Trace beginning at frame 0xfffff801e6875158<br />hammer_btree_iterate() at hammer_btree_iterate+0x839 0xffffffff80900db9<br />hammer_btree_iterate() at hammer_btree_iterate+0x839 0xffffffff80900db9<br />hammer_mirror_delete_to.isra.2() at hammer_mirror_delete_to.isra.2+0x12 0xffffffff80914d92<br />hammer_ioc_mirror_write() at hammer_ioc_mirror_write+0x3b7 0xffffffff809159a7<br />hammer_ioctl() at hammer_ioctl+0xebe 0xffffffff809143de<br />hammer_vop_ioctl() at hammer_vop_ioctl+0x48 0xffffffff8092d458<br />Debugger("panic")</p>
<p>CPU2 stopping CPUs: 0x0000000b<br /> stopped<br />Stopped at Debugger+0x7c: movb $0,0xfc9be9(%rip)<br />db> <br />```</p>
<p>Unfortunately, nothing was saved to /var/crash for this fault except an empty ``kern.0`` file.</p> DragonFlyBSD - Bug #3129 (New): Kernel panic with 5.2.0 on A2SDi-4C-HLN4F (https://bugs.dragonflybsd.org/issues/3129, 2018-04-11T17:20:15Z, stateless)
<p>I tried to boot 5.2.0 on A2SDi-4C-HLN4F from Supermicro<br />and I got the following kernel panic:</p>
<p><a class="external" href="https://u.2f30.org/sin/2018-04-11-175500_1366x768_scrot.png">https://u.2f30.org/sin/2018-04-11-175500_1366x768_scrot.png</a></p>
<p>See: <a class="external" href="http://bxr.su/DragonFly/sys/kern/subr_cpu_topology.c#125">http://bxr.su/DragonFly/sys/kern/subr_cpu_topology.c#125</a></p> DragonFlyBSD - Bug #3124 (New): DragonFlyBSD 5.0.2 with Hammer2 with UEFI install doesn't boot (https://bugs.dragonflybsd.org/issues/3124, 2018-03-04T09:01:41Z, wiesl)
<p>DragonFlyBSD 5.0.2 installed with Hammer2 under UEFI doesn't boot:<br />FATAL: Could not read from the boot medium! System halted.<br />A legacy BIOS install is OK.<br />It looks like the boot installer isn't run.</p> DragonFlyBSD - Bug #3113 (In Progress): Booting vKernel fails due to being out of swap space (https://bugs.dragonflybsd.org/issues/3113, 2017-12-17T19:17:50Z, tcullen)
<p>The step-by-step directions at <a class="external" href="https://www.dragonflybsd.org/docs/handbook/vkernel/">https://www.dragonflybsd.org/docs/handbook/vkernel/</a> fail: when you attempt to boot the vkernel you never get a shell prompt, because the shell is instantly and repeatedly killed amid a never-ending stream of "out of swap space" error messages.</p> DragonFlyBSD - Bug #3111 (In Progress): Mouse lags every second heavily under X11 (https://bugs.dragonflybsd.org/issues/3111, 2017-12-10T17:30:40Z, mneumann)
<p>The mouse moves normally for one second, then completely stops for roughly one second, then moves again for a second.<br />This cycle repeats indefinitely.</p>
<p>This commit has <strong>no</strong> mouse lag: e0a1e7abb95f53e4b0c633f57fbd3ba163a98e73<br />This commit has mouse lag: d6c92fb146a95bd38feaa94c8d2bafda63600e8e</p>
<p>I first assumed it was one of the commits related to evdev support, so I reverted them:</p>
<pre><code>d3d1dd3e4513b2ab753f8ba52f144dc916420ba6<br /> 3ea800bb832ad69c10f85ce9bce98efd8e892285<br /> eaf0d054af4aa304548c1efc497aad966b86a590</code></pre>
<p>But the mouse lag still happens. Next I reverted the recent drm/radeon commit:</p>
<pre><code>d235ee5f3490f63ba738915974154a0e2e49378d</code></pre>
<p>But the mouse lag is still there, so I assume it must be one of dillon's commits from the 5th or 6th of December.</p>
<p>Is anyone else seeing the problem? I can easily bisect it, as it's only 6 commits, but maybe dillon can spot the problem faster.</p> DragonFlyBSD - Bug #2930 (New): 'objcache' causes panic during 'nfs_readdir' (https://bugs.dragonflybsd.org/issues/2930, 2016-07-26T20:00:23Z, tofergus)
<p>A 'vkernel' with its '/home' file system mounted from the host produces the following panic when trying to read a directory with tens of thousands of files. Increasing the vkernel's memory avoids the issue.</p>
<p>panic: NFS node: malloc limit exceeded<br />cpuid = 0<br />Trace beginning at frame 0x80291f70a0<br />panic() at 0x4bc587<br />panic() at 0x4bc587<br />kmalloc() at 0x4b87aa<br />objcache_malloc_alloc() at 0x4af344<br />objcache_get() at 0x4afba5<br />nfs_nget_nonblock() at 0x5f7ab4<br />Debugger("panic")</p>
<p>CPU0 stopping CPUs: 0x0000000000000000<br /> stopped<br />Stopped at 0x6ab941: movb $0,0x1165564(%rip)</p> DragonFlyBSD - Bug #2915 (New): Hammer mirror-copy problem (https://bugs.dragonflybsd.org/issues/2915, 2016-05-17T00:23:36Z, Anonymous)
<p>DragonFly v4.5.0.843.gfe3b7-DEVELOPMENT</p>
<p>When I mirror-copy a master to a slave and then upgrade the slave, the new master PFS can't be mirror-copied. This is reproducible, but only between two distinct HAMMER filesystems; if you do it all on the same filesystem, the problem doesn't appear to occur.</p>
<pre><code>boojum# hammer pfs-master /pfs/master<br />Creating PFS #13 succeeded!<br />/pfs/master<br />    sync-beg-tid=0x0000000000000001<br />    sync-end-tid=0x00000001b44c0b20<br />    shared-uuid=1191b9a4-1bc4-11e6-8e1d-418d5cb760e2<br />    unique-uuid=1191b9aa-1bc4-11e6-8e1d-418d5cb760e2<br />    label=""<br />    prune-min=00:00:00<br />    operating as a MASTER<br />    snapshots directory defaults to /var/hammer/<pfs><br />boojum# cp /COPYRIGHT /pfs/master/<br />boojum# ls -l /pfs/master/<br />total 13<br />-r--r--r--  1 root  wheel  6686 16-May-2016 17:12 COPYRIGHT<br />boojum# hammer -y mirror-copy /pfs/master /volumes/BACKUP3/pfs/slave<br />PFS slave /volumes/BACKUP3/pfs/slave does not exist. Auto create new slave PFS!<br />Creating PFS #31 succeeded!<br />/volumes/BACKUP3/pfs/slave<br />    sync-beg-tid=0x0000000000000001<br />    sync-end-tid=0x0000000000000001<br />    shared-uuid=1191b9a4-1bc4-11e6-8e1d-418d5cb760e2<br />    unique-uuid=2e551d04-1bc4-11e6-8e1d-418d5cb760e2<br />    label=""<br />    prune-min=00:00:00<br />    operating as a SLAVE<br />    snapshots directory defaults to /var/hammer/<pfs><br />Prescan to break up bulk transfer<br />Prescan 1 chunks, total 0 MBytes (7296)<br />Mirror-read /pfs/master succeeded<br />boojum# ls -l /volumes/BACKUP3/pfs/slave/<br />total 13<br />-r--r--r--  1 root  wheel  6686 16-May-2016 17:12 COPYRIGHT<br />boojum# hammer pfs-upgrade /volumes/BACKUP3/pfs/slave<br />pfs-upgrade of PFS#31 () succeeded<br />boojum# hammer -y mirror-copy /volumes/BACKUP3/pfs/slave /volumes/BACKUP3/pfs/slave2<br />PFS slave /volumes/BACKUP3/pfs/slave2 does not exist. Auto create new slave PFS!<br />Creating PFS #32 succeeded!<br />/volumes/BACKUP3/pfs/slave2<br />    sync-beg-tid=0x0000000000000001<br />    sync-end-tid=0x0000000000000001<br />    shared-uuid=1191b9a4-1bc4-11e6-8e1d-418d5cb760e2<br />    unique-uuid=5ef43f68-1bc4-11e6-8e1d-418d5cb760e2<br />    label=""<br />    prune-min=00:00:00<br />    operating as a SLAVE<br />    snapshots directory defaults to /var/hammer/<pfs><br />Prescan to break up bulk transfer<br />Prescan 1 chunks, total 0 MBytes (0)<br />Mirror-read /volumes/BACKUP3/pfs/slave succeeded<br />boojum# ls -l /volumes/BACKUP3/pfs/slave2<br />lrwxr-xr-x  1 root  wheel  10 16-May-2016 17:14 /volumes/BACKUP3/pfs/slave2 -> @@0x00000001000420d0:00032<br />boojum# ls -l /volumes/BACKUP3/pfs/slave2/<br />ls: /volumes/BACKUP3/pfs/slave2/: No such file or directory<br />boojum#</code></pre> DragonFlyBSD - Bug #2735 (New): iwn panics SYSASSERT (https://bugs.dragonflybsd.org/issues/2735, 2014-11-14T22:04:08Z, cnb, cneirabustos@gmail.com)
<p>iwn driver panics with SYSASSERT error that is described in this link<br /><a class="external" href="http://lists.freebsd.org/pipermail/freebsd-wireless/2013-July/003653.html">http://lists.freebsd.org/pipermail/freebsd-wireless/2013-July/003653.html</a></p> DragonFlyBSD - Bug #2499 (In Progress): DRAGONFLY_3_2 lockd not responding correctlyhttps://bugs.dragonflybsd.org/issues/24992013-01-22T08:41:27ZNerzhul
<p>Hello,<br />I must use lockd for concurrent access on a webserver with NFS extended storage. There is some concurrent access, and lockd isn't responding correctly.</p>
<p>On the NFSv3 client, timeout appears and console logs:<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again</p>
<p>After "netstat -an -f inet" I see there is a queue on the rpc socket:</p>
<p>netstat -an -f inet</p>
<pre><code>Active Internet connections (including servers)<br />Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)<br />tcp4       0      0  A.B.C.65.nfsd          WebCluster1.977        ESTABLISHED<br />tcp4       0      0  A.B.C.65.nfsd          WebCluster1.611        ESTABLISHED<br />tcp4       0      0  localhost.smtp         *.*                    LISTEN<br />tcp4       0      0  *.ssh                  *.*                    LISTEN<br />tcp4       0      0  *.1017                 *.*                    CLOSED<br />tcp4       0      0  *.1020                 *.*                    LISTEN<br />tcp4       0      0  *.nfsd                 *.*                    LISTEN<br />tcp4       0      0  *.1023                 *.*                    LISTEN<br />tcp4       0      0  *.1022                 *.*                    LISTEN<br />tcp4       0      0  *.sunrpc               *.*                    LISTEN<br />tcp4       0      0  A.B.C.65.nfsd          A.B.C.96.811           ESTABLISHED<br />tcp4       0      0  A.B.C.65.nfsd          WebCluster1.972        ESTABLISHED<br />tcp4       0     48  A.B.C.65.ssh           129.175.196.190.60067  ESTABLISHED<br />udp4       0      0  *.918                  *.*<br />udp4       0      0  A.B.C.65.1028          ntp.u-psud.fr.ntp<br />udp4     456      0  *.1017                 *.*<br />udp4   18656      0  *.1018                 *.*<br />udp4       0      0  *.nfsd                 *.*<br />udp4       0      0  *.1021                 *.*<br />udp4       0      0  *.1020                 *.*<br />udp4       0      0  *.1022                 *.*<br />udp4       0      0  *.sunrpc               *.*</code></pre>
<p>When I see that, I run tcpdump -nni em0 to see what's happening:</p>
<p>22:12:42.781597 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212<br />22:12:48.801935 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212<br />22:12:54.669917 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212<br />22:13:00.148965 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212</p>
<p>After a little while, lockd responds to all the requests, but many have already failed because of the timeout.</p>
<p>On the DragonFlyBSD server I can see this in /var/log/messages:</p>
<p>Jan 21 22:14:19 webfiler1 rpc.lockd: duplicate lock from WebCluster1.srv.<br />Jan 21 22:14:19 webfiler1 last message repeated 3 times<br />Jan 21 22:14:19 webfiler1 rpc.lockd: no matching entry for WebCluster1.srv.<br />Jan 21 22:14:29 webfiler1 dntpd[571]: issuing offset adjustment: 0.026637s<br />Jan 21 22:14:44 webfiler1 rpc.lockd: rpc to statd failed: RPC: Timed out<br />Jan 21 22:14:44 webfiler1 rpc.lockd: duplicate lock from WebCluster1.srv.<br />Jan 21 22:14:44 webfiler1 last message repeated 3 times<br />Jan 21 22:14:44 webfiler1 rpc.lockd: no matching entry for WebCluster1.srv.</p>
<p>I think there is a problem in DragonFlyBSD which queues many lockd requests.</p> DragonFlyBSD - Bug #2423 (New): After multiple panics/locks, hitting KKASSERT in hammer_init_cursor (https://bugs.dragonflybsd.org/issues/2423, 2012-09-17T17:12:40Z, rumcic)
<p>After quite a few lockups (the machine stopped responding, nothing on the serial console, unable to panic the machine), the HAMMER FS (v6) got a corrupted(?) UNDO.</p>
<p>Booting the latest snapshot and mounting with ro,noatime, it looked as if it was able to rerun the UNDO/FIFO. But trying to mount the "clean" FS results in tripping over a KKASSERT (error == 0 at hammer_cursor.c:202).</p> DragonFlyBSD - Bug #2141 (New): loader and/or documentation broken (https://bugs.dragonflybsd.org/issues/2141, 2011-10-09T14:15:29Z, sjg)
<p>For example,</p>
<pre><code>The ehci driver is automatically loaded upon boot. To disable this<br /> behavior temporarily, the ehci_load variable can be unset at the loader<br /> prompt (see loader(8)). To disable it permanently, the<br /> hint.ehci.0.disabled tunable can be set to 1 in /boot/loader.conf.</code></pre>
<p>But when operating from the loader prompt, the ehci_load variable has no effect at all; it seems to only be checked from the menu, which is useless if you are operating from the prompt.</p>
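For reference, the two routes the quoted documentation describes are a loader.conf tunable and a loader-prompt variable (both taken verbatim from the text above; per this report only the loader.conf route actually works from the prompt):

```conf
# Documented permanent route, in /boot/loader.conf:
hint.ehci.0.disabled=1

# Documented temporary route, at the loader prompt
# (only honored via the menu, per this report):
unset ehci_load
```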
<p>This is confusing at best, but I am leaning more towards steaming pile. The loader or the documentation needs to be reworked.</p> DragonFlyBSD - Bug #1921 (In Progress): we miss mlockall (https://bugs.dragonflybsd.org/issues/1921, 2010-11-24T16:19:21Z, alexh)
<p>We don't have the mlockall/munlockall syscalls as documented in [1]. We have at <br />least one tool in base that would benefit from it: cryptsetup. Hopefully someone <br />more familiar with the VM system can implement it without much effort as we <br />already have mlock/munlock.</p>
<p>Cheers,<br />Alex Hornung</p>
<p>[1]: <a class="external" href="http://opengroup.org/onlinepubs/007908799/xsh/mlockall.html">http://opengroup.org/onlinepubs/007908799/xsh/mlockall.html</a></p> DragonFlyBSD - Bug #599 (New): 1.9.0 reproducible panic (https://bugs.dragonflybsd.org/issues/599, 2007-04-11T10:24:26Z, pavalos)
<p>Here's a panic I'm getting under some pretty serious network (www) load, after doing a netstat -an:</p>
<p>Unread portion of the kernel message buffer:<br />panic: m_copydata, negative off -1<br />mp_lock = 00000000; cpuid = 0; lapic.id = 00000000<br />boot() called on cpu#0</p>
<p>syncing disks... 5<br />done<br />Uptime: 12d22h0m32s</p>
<pre><code>(kgdb) bt<br />#0  dumpsys () at thread.h:83<br />#1  0xc01954bb in boot (howto=256) at /usr/src/sys/kern/kern_shutdown.c:370<br />#2  0xc01957c0 in panic (fmt=Variable "fmt" is not available.<br />) at /usr/src/sys/kern/kern_shutdown.c:767<br />#3  0xc01c3a32 in m_copydata (m=0x0, off=0, len=0, cp=0xee9534b0 "\001\001\b\n\006¦*$\035\bͬ") at /usr/src/sys/kern/uipc_mbuf.c:1014<br />#4  0xc020fc25 in tcp_output (tp=0xdae0c720) at /usr/src/sys/netinet/tcp_output.c:690<br />#5  0xc02152bf in tcp_timer_persist (xtp=0xdae0c720) at /usr/src/sys/netinet/tcp_timer.c:363<br />#6  0xc01a6423 in softclock_handler (arg=0xc0386a80) at /usr/src/sys/kern/kern_timeout.c:307<br />#7  0xc019d037 in lwkt_deschedule_self (td=Variable "td" is not available.<br />) at /usr/src/sys/kern/lwkt_thread.c:207<br />Previous frame inner to this frame (corrupt stack?)</code></pre>
<p>The kernel and vmcore are being uploaded to leaf. The source is from March 28.</p>
<p>--Peter</p>