DragonFlyBSD bugtracker: Issues
https://bugs.dragonflybsd.org/
2023-06-25T01:52:18Z
DragonFlyBSD - Bug #3353 (New): [panic] 'hammer2 pfs-create -t master' causes kernel panic
https://bugs.dragonflybsd.org/issues/3353
2023-06-25T01:52:18Z, liweitianux <liweitianux@live.com>
<p>liuyb1 on IRC reported on 2023-06-20 the following HAMMER2 PFS panic.</p>
<p>Reproduce steps:</p>
<pre><code class="shell syntaxhl" data-language="shell"><span class="nv">$ </span>hammer2 snapshot / snap-1
<span class="nv">$ </span>hammer2 <span class="nt">-u</span> <span class="sb">`</span>hammer2 pfs-clid snap-1<span class="sb">`</span> pfs-create snap-slave
<span class="nv">$ </span>hammer2 <span class="nt">-t</span> master <span class="nt">-u</span> <span class="sb">`</span>hammer2 pfs-clid snap-1<span class="sb">`</span> pfs-create snap-master
</code></pre>
<p>Kernel panic trace:</p>
<pre>
panic with 1 spinlocks held
panic: hammer2_chain_insert: collision 0xfffff8011e688a40 0xfffff8011e694580 (key=0000000000000400)
cpuid = 1
Trace beginning at frame 0xfffff80097df6a90
hammer2_chain_insert() at hammer2_chain_insert+0x15b 0xffffffff8096ca7b
hammer2_chain_insert() at hammer2_chain_insert+0x15b 0xffffffff8096ca7b
hammer2_chain_create() at hammer2_chain_create+0xded 0xffffffff809734ed
hammer2_chain_rename() at hammer2_chain_rename+0xd1 0xffffffff809740c1
hammer2_chain_rename_obref() at hammer2_chain_rename_obref+0x20 0xffffffff809741d0
hammer2_chain_indirect_maintenance() at hammer2_chain_indirect_maintenance+0x421 0xffffffff809746b1
</pre>
<p>Although the above PFS clustering commands are documented in the hammer2(8) man page, the clustering feature is actually still in an early development phase, as per the <a href="https://gitweb.dragonflybsd.org/dragonfly.git/blob/HEAD:/sys/vfs/hammer2/DESIGN#l27" class="external">DESIGN</a> doc, so issues and even panics are to be expected.</p>
<p>That said, the command should report that the functionality is not implemented, rather than panic the system.</p>

DragonFlyBSD - Bug #3342 (In Progress): [PF] urpf-failed doesn't work with IPv6
https://bugs.dragonflybsd.org/issues/3342
2023-02-06T08:55:53Z, liweitianux <liweitianux@live.com>
<p>Years ago, I found that <code>urpf-failed</code> doesn't work with IPv6: configuring it causes significant packet loss.</p>
<p>Many PF tutorials suggest a rule like:</p>
<pre>
block in quick from { $broken urpf-failed no-route } to any
</pre>
<p>But it turned out <code>urpf-failed</code> can only be configured for IPv4, like:</p>
<pre>
block in log quick inet from urpf-failed to any
</pre>
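<p>A possible workaround sketch, given this limitation: scope <code>urpf-failed</code> to IPv4 only and keep a separate rule for IPv6. This is an illustration, not a verified fix; the IPv6 rule matches only <code>no-route</code> and does not provide full uRPF checking:</p>

```
# urpf-failed only works for IPv4 here, so restrict it to inet
block in log quick inet from urpf-failed to any
# for IPv6, fall back to no-route matching only (assumption: this
# weaker check is acceptable for the ruleset)
block in quick inet6 from no-route to any
```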
<p>See: <a class="external" href="https://lists.dragonflybsd.org/pipermail/users/2017-August/313577.html">https://lists.dragonflybsd.org/pipermail/users/2017-August/313577.html</a></p>

DragonFlyBSD - Bug #3341 (New): [NVMM] Support AVX (and AVX2) in VMs
https://bugs.dragonflybsd.org/issues/3341
2023-02-06T08:47:14Z, liweitianux <liweitianux@live.com>
<p>NVMM currently only exports XSAVE to VMs, but no AVX/AVX2/AVX512 yet. However, some "modern" software requires AVX to be able to run ...</p>
<p>To support AVX in NVMM, the FPU code must first be improved/refactored. See NetBSD's FPU code overhaul (also by maxv).</p>
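<p>A self-contained sketch of checking a guest for AVX; the Features2 strings below are the host/VM lines from this report, embedded so the check runs anywhere (on a live system one would feed in <code>dmesg | grep Features2</code> instead):</p>

```shell
host_features='Features2=0x77fafbff<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,SDBG,FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,AVX,F16C,RDRND>'
vm_features='Features2=0xe6da3203<SSE3,PCLMULQDQ,SSSE3,FMA,CX16,PCID,SSE4.1,SSE4.2,MOVBE,POPCNT,AESNI,XSAVE,F16C,RDRND,VMM>'

has_avx() {
    # split the CPU flag list on '<', ',' and '>' and look for an exact AVX entry
    printf '%s\n' "$1" | tr '<,>' '\n\n\n' | grep -qx AVX
}

has_avx "$host_features" && echo "host: AVX available"
has_avx "$vm_features"   || echo "vm: AVX missing"
```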
<pre>
host# dmesg | grep Features2
Features2=0x77fafbff<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,SDBG,FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,AVX,F16C,RDRND>
vm# dmesg | grep Features2
Features2=0xe6da3203<SSE3,PCLMULQDQ,SSSE3,FMA,CX16,PCID,SSE4.1,SSE4.2,MOVBE,POPCNT,AESNI,XSAVE,F16C,RDRND,VMM>
</pre>

DragonFlyBSD - Bug #3310 (In Progress): NVMM+QEMU fail to boot with UEFI: Mem Assist Failed [gpa=...
https://bugs.dragonflybsd.org/issues/3310
2022-01-09T14:27:25Z, liweitianux <liweitianux@live.com>
<p>NVMM+QEMU fail to boot with UEFI, for example:</p>
<pre>
% qemu-system-x86_64 \
-boot menu=on -display sdl -accel nvmm \
-drive file=OVMF_CODE.fd,if=pflash,format=raw,readonly=on \
-drive file=OVMF_VARS.fd,if=pflash,format=raw
NetBSD Virtual Machine Monitor accelerator is operational
qemu-system-x86_64: NVMM: Mem Assist Failed [gpa=0xfffff000]
qemu-system-x86_64: NVMM: Failed to execute a VCPU.
</pre>
<p>The UEFI firmware can be obtained by installing the <code>uefi-edk2-qemu-x86_64</code> package<br />or by downloading from: <a class="external" href="https://leaf.dragonflybsd.org/~aly/uefi/">https://leaf.dragonflybsd.org/~aly/uefi/</a></p>
<p>First reported by Mario Marietto and confirmed by me, see:<br /><a class="external" href="https://lists.dragonflybsd.org/pipermail/users/2022-January/404898.html">https://lists.dragonflybsd.org/pipermail/users/2022-January/404898.html</a></p>

DragonFlyBSD - Bug #3294 (Closed): drill(1) with IPv6 NS fails with UDP but works with TCP
https://bugs.dragonflybsd.org/issues/3294
2021-08-13T09:33:11Z, liweitianux <liweitianux@live.com>
<p>YONETANI Tomokazu reported this issue on users@ mailing list: <a class="external" href="https://lists.dragonflybsd.org/pipermail/users/2021-August/404805.html">https://lists.dragonflybsd.org/pipermail/users/2021-August/404805.html</a></p>
<pre>
$ drill @2001:4860:4860::8888 aaaa leaf.dragonflybsd.org | egrep -v '^(\;|$)'
Error: error sending query: Could not send or receive, because of network error
</pre>
<p>unless using TCP query:</p>
<pre>
$ drill -t @2001:4860:4860::8888 aaaa leaf.dragonflybsd.org | egrep -v '^(\;|$)'
leaf.dragonflybsd.org. 3599 IN AAAA 2001:470:1:43b:1::68
</pre>
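<p>Until the underlying UDP issue is fixed, a client-side fallback can be scripted; a sketch (an assumption, not a confirmed workaround) that retries over TCP when the UDP query fails:</p>

```shell
# Query an AAAA record, falling back to a TCP query (-t) when UDP fails,
# mirroring the behaviour observed in this report.
query_aaaa() {
    server=$1 name=$2
    drill @"$server" aaaa "$name" 2>/dev/null \
        || drill -t @"$server" aaaa "$name"
}

# usage (requires drill from ldns):
#   query_aaaa 2001:4860:4860::8888 leaf.dragonflybsd.org
```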
<p>Similar DNS queries on other boxes running different OSes don't have this problem, and tcpdump output shows the response arriving from the DNS server, so I doubt it's a network issue.</p>
<pre>
$ uname -a
DragonFly c60 6.0-RELEASE DragonFly v6.0.0.33.gc7b638-RELEASE #0: Wed Aug 4 20:25:25 JST 2021     root@c60:/usr/obj/build/usr/src/sys/X86_64_GENERIC x86_64
</pre>
<hr />
<p>I also confirmed this issue on leaf, which is running master as of Aug 4.</p>

DragonFlyBSD - Bug #3232 (Resolved): tun/tap: get error message 'create: bad value' when the inte...
https://bugs.dragonflybsd.org/issues/3232
2020-04-08T10:55:28Z, liweitianux <liweitianux@live.com>
<p>When the interface already exists, ifconfig complains 'create: bad value', which is far less clear than reporting that the interface already exists.</p>
<p>Example:</p>
<ol>
<li>ifconfig tun0 create</li>
<li>ifconfig tun0 create<br />ifconfig: create: bad value</li>
</ol>

DragonFlyBSD - Bug #3230 (Resolved): tun/tap: conflict between devfs clone and interface clone
https://bugs.dragonflybsd.org/issues/3230
2020-04-05T02:54:20Z, liweitianux <liweitianux@live.com>
<p>Start tinc, which will open '/dev/tun' to clone a tun device '/dev/tun0' and an interface 'tun0'</p>
<pre>
tun0: flags=8043<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
	inet 172.28.33.1 netmask 0xffffff00 broadcast 172.28.33.255
	inet6 fe80::1831:8020:48d2:c940%tun0 prefixlen 64 scopeid 0x5
	Opened by PID 564370
</pre>
<p>Issue 1: No default group 'tun' assigned to the 'tun0' interface.</p>
<p>Issue 2: Running 'ifconfig tun create' fails with: SIOCIFCREATE2: File exists</p>

DragonFlyBSD - Submit #3125 (Resolved): periodic: Sync with FreeBSD
https://bugs.dragonflybsd.org/issues/3125
2018-03-09T04:17:41Z, liweitianux <liweitianux@live.com>
<p>Hello,</p>
<p>For the "periodic" utility, FreeBSD has deprecated the use of "{daily,weekly,monthly}_status_security_<var>_enable" variables in favor of "security_status_<var>_enable" and "security_status_<var>_period" (daily, weekly, monthly). Therefore, the periodic scripts shipped with pkg(8) simply fail and only give error messages in daily security notification mails.</p>
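<p>Under the new scheme, such a check is enabled in periodic.conf with variables like the following (the variable name pattern comes from the description above; the "pkgaudit" service name is an illustrative assumption):</p>

```shell
# hypothetical periodic.conf fragment using the new-style variables
security_status_pkgaudit_enable="YES"      # replaces daily_status_security_pkgaudit_enable
security_status_pkgaudit_period="daily"    # one of: daily, weekly, monthly
```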
<p>Here I propose the patches to sync our periodic utility with FreeBSD (current@2018-03-07). The patches are on my branch at GitHub:</p>
<p><a class="external" href="https://github.com/liweitianux/dragonflybsd/tree/periodic">https://github.com/liweitianux/dragonflybsd/tree/periodic</a></p>
<p>and consist of the following 5 commits:</p>
<ul>
<li>periodic(8): Sync with FreeBSD current</li>
<li>periodic: Remove already disabled monthly statistics report</li>
<li>periodic: Remove obsolete daily/status-named and weekly/clean-kvmdb</li>
<li>rc.d/accounting: Sync with FreeBSD</li>
<li>periodic: Sync with FreeBSD current</li>
</ul>
<p>I have tested my patches with a full rebuild and install, and tested "periodic {daily,weekly,monthly}" without problems.</p>
<p>But note that our "weekly/330.catman" has permission problems when enabled, while FreeBSD has already removed catman(1).</p>
<p>Thanks for reviewing and merging my patches.</p>
<p>Best regards,<br />Aaron</p>

DragonFlyBSD - Submit #3104 (Closed): paths.h: Clean up _PATH_DEFPATH
https://bugs.dragonflybsd.org/issues/3104
2017-11-21T12:04:05Z, liweitianux <liweitianux@live.com>
<p>Hello, this is a minor patch that does:</p>
<ul>
<li>Remove the pkgsrc paths (/usr/pkg/bin, /usr/pkg/sbin) from _PATH_DEFPATH</li>
<li>Fix _PATH_NOLOGIN to /sbin/nologin</li>
<li>Trim a semicolon from _PATH_STDPATH</li>
</ul>
<p>Please see the attached patch.</p>
<p>Cheers,<br />Aly</p>

DragonFlyBSD - Submit #3100 (Closed): mountd(8): Bring some fixes from FreeBSD; update exports.5
https://bugs.dragonflybsd.org/issues/3100
2017-11-10T13:25:08Z, liweitianux <liweitianux@live.com>
<p>Hello,</p>
<p>I have brought several fixes from FreeBSD for mountd(8) as well as exports.5 man page. The main changes are:</p>
<ol>
<li>Change the default uid/gid from -2/-2 to nobody/nogroup (i.e., 65534/65533)</li>
<li>Fix the conversion of a network prefix length to a subnet mask</li>
<li>Update the exports.5 man page, since we actually support the CIDR network/prefix-length format</li>
<li>Fix several compilation warnings, and raise WARNS to 3</li>
</ol>
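<p>The prefix-length-to-subnet-mask conversion (change 2 above) can be sketched like this (an illustration of the arithmetic only, not the actual mountd(8) code):</p>

```shell
# Convert an IPv4 prefix length (0-32) to a dotted-quad subnet mask.
prefix_to_mask() {
    p=$1
    # a shift by 32 is undefined, so handle a zero-length prefix explicitly
    m=$(( p == 0 ? 0 : (0xffffffff << (32 - p)) & 0xffffffff ))
    echo "$(( (m >> 24) & 255 )).$(( (m >> 16) & 255 )).$(( (m >> 8) & 255 )).$(( m & 255 ))"
}

prefix_to_mask 24   # prints: 255.255.255.0
prefix_to_mask 25   # prints: 255.255.255.128
```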
<p>I have pushed the changes to my GitHub branch at:<br /><a class="external" href="https://github.com/liweitianux/dragonflybsd/tree/mountd">https://github.com/liweitianux/dragonflybsd/tree/mountd</a></p>
<p>Please take a look. Thanks.</p>
<p>Cheers,<br />Aly</p>

DragonFlyBSD - Submit #3099 (Resolved): disklabel64: Fix partition 1MiB physical alignment; with ...
https://bugs.dragonflybsd.org/issues/3099
2017-11-08T15:08:07Z, liweitianux <liweitianux@live.com>
<p>Hello,</p>
<p>According to disklabel64(8), the partitions within a slice are physically aligned to 1MiB (PALIGN_SIZE). However, there is a mistake in l64_makevirginlabel() in kern/subr_disklabel64.c, which causes the partitions to actually be only 32KiB aligned. The proposed patches fix this issue and introduce some more updates and cleanups:</p>
<ul>
<li>Calculate d_pbase and d_pstop to make them both physically aligned to 1MiB;</li>
<li>Define BOOT2SIZE64 in sys/disklabel64.h to replace the use of 32768;</li>
<li>Reserve space for the backup label at the slice end (after d_pstop), though the backup functionality is not yet implemented;</li>
<li>Make the "auto" disk type optional, since disk type support is not implemented;</li>
<li>Update several comments and the displayed disklabel descriptions;</li>
<li>Fix two compilation warnings due to the mismatched type in strncpy();</li>
<li>Add "static" keyword; cleanup unused variables and definitions.</li>
</ul>
<p>I also pushed these patches to my GitHub at:<br /><a class="external" href="https://github.com/liweitianux/dragonflybsd/tree/disklabel">https://github.com/liweitianux/dragonflybsd/tree/disklabel</a></p>
<p>Here is a comparison of the virgin labels generated:</p>
<p>diskinfo:</p>
<pre>
/dev/ad4s1	blksize=512	offset=0x000000007e00	size=0x002543158200	149.05 GB
</pre>
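<p>The 1 MiB alignment arithmetic can be sketched as follows (illustrative shell, not the kernel code; it reproduces only the rounding against the slice offset from the diskinfo output, ignoring the boot-area layout that l64_makevirginlabel() also handles):</p>

```shell
# The partition data base within the slice is chosen so that its *absolute*
# disk offset (slice offset + in-slice offset) is aligned to 1 MiB.
PALIGN=$(( 1024 * 1024 ))      # PALIGN_SIZE: 1 MiB physical alignment
slice_offset=$(( 0x7e00 ))     # slice start on disk, from diskinfo

# round the slice's absolute start up to the next 1 MiB boundary,
# then convert back to an in-slice offset
abs_base=$(( (slice_offset + PALIGN - 1) / PALIGN * PALIGN ))
d_pbase=$(( abs_base - slice_offset ))

printf 'd_pbase = 0x%x\n' "$d_pbase"   # prints: d_pbase = 0xf8200
```

This matches the "partitions data base" in the fixed label below: its absolute offset 0x7e00 + 0xf8200 = 0x100000 is exactly 1 MiB.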
<p>Before:</p>
<pre>
------------------------------------------------------------------------------
# boot space:        1044992 bytes
# data space:      156287323 blocks	# 152624.34 MB (160038219264 bytes)
boot2 data base:      0x000000001000
partitions data base: 0x000000100200
partitions data stop: 0x002543157000
backup label:         0x002543157000
total size:           0x002543158200	# 152625.34 MB
------------------------------------------------------------------------------
</pre>
<p>After:</p>
<pre>
------------------------------------------------------------------------------
# boot space:        1012224 bytes
# data space:      156286976 blocks	# 152624.00 MB (160037863424 bytes)
boot2 data base:      0x000000001000
partitions data base: 0x0000000f8200
partitions data stop: 0x0025430f8200
backup label:         0x0025430f8200
total size:           0x002543158200	# 152625.34 MB
------------------------------------------------------------------------------
</pre>
<p>Thanks for reviewing these patches.</p>
<p>Aly</p>

DragonFlyBSD - Bug #3093 (Closed): Dump failed: Partition too small
https://bugs.dragonflybsd.org/issues/3093
2017-10-23T10:13:04Z, liweitianux <liweitianux@live.com>
<p>Hello,</p>
<p>I'm running DFly on a VPS, and sometimes it crashes but fails to dump, showing this error:</p>
<p>Dump failed. Partition too small.</p>
<p>Today I tested the crash dump on a VirtualBox machine with the DFly-5.0.0 LiveCD. The virtual machine has 512 MB RAM and is configured with a 1 GB swap partition as the dump device. I manually broke into the debugger ("db>") and typed "call dumpsys", but it failed with code 0x23 and the message "Dump failed. Partition too small." See also the attached screenshot 1.</p>
<p>As a comparison, I booted the FreeBSD 11.1 LiveCD on the same virtual machine, set the dump device, and manually triggered a panic with "sysctl debug.kdb.panic=1". It dumped successfully and rebooted, and I then saved the dump information. The attached screenshot 2 shows that information.</p>
<p>I also have a bare metal running DFly, and I can provide further tests if necessary.</p>
<p>Cheers,<br />Aly</p>

DragonFlyBSD - Bug #3092 (Resolved): dumpon: sysctl: kern.dumpdev: Device busy
https://bugs.dragonflybsd.org/issues/3092
2017-10-23T09:42:49Z, liweitianux <liweitianux@live.com>
<p>Hello, I had the problem that "dumpon" couldn't let me change the dump device, which complained "Device busy", as shown below:</p>
<pre>
dfly# dumpon /dev/serno/WD-WCAS25235448.s1b
dumpon: sysctl: kern.dumpdev: Device busy
</pre>
<p>(I previously misconfigured "dumpdev" as "/dev/mapper/swap", the mapped name of the dm-crypt'ed swap device. Therefore I used "dumpon" to change the dump device to the underlying swap device, but it failed as shown here.)</p>
<p>In addition, I also did such a test within a virtual machine using DFly-5.0. After partitioning the swap partition, the first "dumpon" to it succeeds, but later attempts fail with the "Device busy" error. See also the attached screenshot.</p>
<p>I suspect this issue exists in the "setdumpdev()" function in "sys/kern/kern_shutdown.c".</p>
<p>Cheers,<br />Aly</p>

DragonFlyBSD - Bug #3091 (Resolved): swapinfo print major/minor number when device name too long
https://bugs.dragonflybsd.org/issues/3091
2017-10-23T08:16:29Z, liweitianux <liweitianux@live.com>
<p>Hi, I enabled swap encryption by adding the "crypt" option in "/etc/fstab". Therefore, the auto-configured swap device has a rather long name ("swap-<serno>.<slice><part>"), which, I think, causes a problem for the "swapinfo"/"pstat" tools. For example:</p>
<pre>
dfly# ls -l /dev/mapper/swap-WD-WCAS25235448.s1b
crw-r----- 1 root operator 65, 0x1e11000f Oct 21 08:26 /dev/mapper/swap-WD-WCAS25235448.s1b
</pre>
<pre>
dfly# swapinfo
Device               1K-blocks     Used    Avail  Capacity  Type
/dev/#C65:0x1e11000f  16777216  1141708 15635508        7%  Interleaved
^^^^^^^^^^^^^^^^^^^^
</pre>
<p>By default, "swapinfo" should print the device "name" instead of such major/minor numbers.</p>
<p>Cheers,<br />Aly</p>

DragonFlyBSD - Bug #3090 (Closed): VirtIO/vtnet: very poor IPv6 receiving performance (~100x slower)
https://bugs.dragonflybsd.org/issues/3090
2017-10-22T16:37:28Z, liweitianux <liweitianux@live.com>
<p>Hello,</p>
<p>I'm running DFly (4.8.0) on a VPS with both IPv4 and IPv6. However, I'm suffering very poor <strong>IPv6 receiving</strong> performance (~100x slower!), while IPv6 sending and IPv4 sending/receiving all perform very well.</p>
<p>I carried out several tests, and finally I'm quite sure that the problem exists in DFly's VirtIO driver.</p>
<p>I booted DFly (5.0), FreeBSD (11), and Arch Linux (201705) LiveCDs on my VPS, configured a basic IPv6 connection, and then tried to download a 128 MB test file from the VPS provider's service. The following are the test results:</p>
<pre>
OS (virtio) | download speed
------------+----------------------------
DFly (IPv6) | ~80 kB/s (NOTE: it's kB/s)
FBSD (IPv6) | ~85 MB/s
Arch (IPv6) | ~83 MB/s
</pre>
<p>In addition, I used "iperf3" to further test the sending/receiving bandwidth of my VPS (located in the Netherlands) against an iperf server in France (bouygues.iperf.fr):</p>
<pre>
OS/driver        | DFly (virtio)  | DFly (em)
-----------------+----------------+-----------
IPv4 - sending   | ~240 Mb/s      | ~170 Mb/s
IPv4 - receiving | ~400 Mb/s      | ~430 Mb/s
IPv6 - sending   | ~160 Mb/s      | ~115 Mb/s
IPv6 - receiving | ~0.5 Mb/s (!!) | ~370 Mb/s
</pre>
<p>To confirm the problem, I switched the network driver of my VPS from "VirtIO" to "Intel PRO/1000" (i.e., the "em" driver) and redid the tests, which gave very impressive IPv6 performance!</p>
<p>NOTE: I did these tests yesterday and today, so the reported network bandwidths may vary somewhat. But the VirtIO IPv6 receiving problem is very obvious!</p>
<p>I'm not sure whether this problem is due to a possibly out-of-date virtio driver (compared to FreeBSD's), or because it is not well integrated with the DFly system.</p>
<p>Cheers,<br />Aly</p>