DragonFlyBSD bugtracker: Issues
https://bugs.dragonflybsd.org/
2023-12-23T19:11:01Z
DragonFlyBSD - Bug #3365 (New): NFSv4 ACLs for HAMMER2
https://bugs.dragonflybsd.org/issues/3365
2023-12-23T19:11:01Z by vinipsmaker

DragonFlyBSD - Bug #3364 (New): aio_read2() and aio_write2()
https://bugs.dragonflybsd.org/issues/3364
2023-12-23T19:09:19Z by vinipsmaker
<p>I'd like to use AIO to read files without specifying an offset in the member aio_offset. Instead, I'd like the current file offset to be used. On Linux, io_uring supports passing -1 as the offset to indicate that the current file position should be used instead of an explicit offset. I'd like to do the same on DragonFly BSD.</p>
<p>I propose the following interface:</p>
<p>int aio_read2(struct aiocb *iocb, unsigned flags);<br />int aio_write2(struct aiocb *iocb, unsigned flags);</p>
<p>aio_read(iocb) would be equivalent to aio_read2(iocb, 0) and aio_write(iocb) would be equivalent to aio_write2(iocb, 0).</p>
<p>Then we would define the following flags:</p>
<p>AIO_IGNOREOFFSET</p>
<p>The flag AIO_IGNOREOFFSET would instruct the call to ignore aio_offset in the aiocb and use the current file position (as maintained by lseek and read/write) where applicable. AIO_IGNOREOFFSET should not conflict with the LIO opcodes, so one could OR it into aio_lio_opcode for use with lio_listio() as well. I think that should be enough to cover all cases.</p>

DragonFlyBSD - Bug #3363 (New): sysctl CTL_NET.PF_ROUTE.0.0.NET_RT_IFLIST.0 blocks in an uninterr...
https://bugs.dragonflybsd.org/issues/3363
2023-12-22T07:11:53Z by guy
<p>The attached program blocks, reliably, in an uninterruptible wait on my DragonFly BSD 6.4 VM if I run it and then:</p>
<p>plug in a USB Wi-Fi interface and tell VMware Fusion to attach it to the running VM;</p>
<p>wait for the interface to be attached and to be recognized by the program;</p>
<p>unplug the interface.</p>
<p>It hangs in my_getifaddrs() (which is a copy of the DFly BSD getifaddrs()) when it's called after the program receives an RTM_IFINFO message. The hang is in the first sysctl() it does in the loop.</p>
<p>The RTM_IFINFO message is indicating that the carrier state of the removed interface is unknown; after that, an RTM_IFANNOUNCE message is received, indicating that the interface in question was removed, and then <strong>another</strong> RTM_IFINFO message, for the same interface, is received, again indicating that the carrier state is unknown.</p>
<p>I suspect this will happen on real hardware as well. It may also occur for other interface types, for example Ethernet, and for interfaces on pluggable-at-run-time buses other than USB.</p>

DragonFlyBSD - Bug #3361 (New): newlocale()/freelocale() leak
https://bugs.dragonflybsd.org/issues/3361
2023-11-16T01:24:12Z by tonyc
<p>The attached code shows the combination of newlocale() to create a new locale object and freelocale() to free that object leaks memory.<br /><code>$ ./newlocale_loop<br /> UID PID PPID CPU PRI NI VSZ RSS WCHAN STAT TT TIME COMMAND<br /> 1001 825 822 4 178 0 4124 680 wait S0+ 0 0:00.00 ./n<br /> UID PID PPID CPU PRI NI VSZ RSS WCHAN STAT TT TIME COMMAND<br /> 1001 825 822 80 254 0 7.63G 416616 wait S1+ 0 0:05.37 ./n<br /> UID PID PPID CPU PRI NI VSZ RSS WCHAN STAT TT TIME COMMAND<br /> 1001 825 822 80 254 0 15.26G 832272 wait S1+ 0 0:09.62 ./n<br /> UID PID PPID CPU PRI NI VSZ RSS WCHAN STAT TT TIME COMMAND<br /> 1001 825 822 80 254 0 22.89G 1.19G wait S1+ 0 0:12.80 ./n<br /> UID PID PPID CPU PRI NI VSZ RSS WCHAN STAT TT TIME COMMAND<br /> 1001 825 822 80 254 0 30.52G 1.58G wait S1+ 0 0:17.34 ./n<br /> UID PID PPID CPU PRI NI VSZ RSS WCHAN STAT TT TIME COMMAND<br /> 1001 825 822 80 254 0 38.15G 1.98G wait S1+ 0 0:18.68 ./n<br /> UID PID PPID CPU PRI NI VSZ RSS WCHAN STAT TT TIME COMMAND<br /> 1001 825 822 80 254 0 45.78G 2.37G wait S1+ 0 0:23.65 ./n<br /> UID PID PPID CPU PRI NI VSZ RSS WCHAN STAT TT TIME COMMAND<br /> 1001 825 822 80 254 0 53.41G 2.77G wait S1+ 0 0:26.65 ./n<br /> UID PID PPID CPU PRI NI VSZ RSS WCHAN STAT TT TIME COMMAND<br /> 1001 825 822 80 254 0 61.04G 3.17G wait S1+ 0 0:30.35 ./n<br /> UID PID PPID CPU PRI NI VSZ RSS WCHAN STAT TT TIME COMMAND<br /> 1001 825 822 80 254 0 68.67G 3.56G wait S1+ 0 0:35.02 ./n<br /></code></p>
<p>Note the RSS growing from 680 to 3.56G.</p>
<p>Building and running the same code on Linux (Debian bookworm) and FreeBSD 15-CURRENT does not leak in the same way.</p>

DragonFlyBSD - Bug #3359 (New): Too many zlib
https://bugs.dragonflybsd.org/issues/3359
2023-10-18T04:21:25Z by tkusumi (kusumi.tomohiro@gmail.com)
<p>The recently added hammer2(8) recover directive has introduced yet another copy of zlib to HAMMER2, this time in userspace.<br />This is the third copy of zlib in HAMMER2-related source.<br />Why not just use contrib/zlib-1.2, or, if that is not possible, why not?</p>
<p>The hammer2(8) recover directive has also added a userspace copy of LZ4.<br />This one can easily reuse sys/vfs/hammer2/*lz4*.<br />All we need is a few KERNEL ifdefs around the malloc/free-related code and the header includes.</p>

DragonFlyBSD - Bug #3353 (New): [panic] 'hammer2 pfs-create -t master' causes kernel panic
https://bugs.dragonflybsd.org/issues/3353
2023-06-25T01:52:18Z by liweitianux (liweitianux@live.com)
<p>liuyb1 on IRC reported on 2023-06-20 the following HAMMER2 PFS panic.</p>
<p>Reproduce steps:</p>
<pre><code class="shell syntaxhl" data-language="shell"><span class="nv">$ </span>hammer2 snapshot / snap-1
<span class="nv">$ </span>hammer2 <span class="nt">-u</span> <span class="sb">`</span>hammer2 pfs-clid snap-1<span class="sb">`</span> pfs-create snap-slave
<span class="nv">$ </span>hammer2 <span class="nt">-t</span> master <span class="nt">-u</span> <span class="sb">`</span>hammer2 pfs-clid snap-1<span class="sb">`</span> pfs-create snap-master
</code></pre>
<p>Kernel panic trace:</p>
<pre>
panic with 1 spinlocks held
panic: hammer2_chain_insert: collision 0xfffff8011e688a40 0xfffff8011e694580 (key=0000000000000400)
cpuid = 1
Trace beginning at frame 0xfffff80097df6a90
hammer2_chain_insert() at hammer2_chain_insert+0x15b 0xffffffff8096ca7b
hammer2_chain_insert() at hammer2_chain_insert+0x15b 0xffffffff8096ca7b
hammer2_chain_create() at hammer2_chain_create+0xded 0xffffffff809734ed
hammer2_chain_rename() at hammer2_chain_rename+0xd1 0xffffffff809740c1
hammer2_chain_rename_obref() at hammer2_chain_rename_obref+0x20 0xffffffff809741d0
hammer2_chain_indirect_maintenance() at hammer2_chain_indirect_maintenance+0x421 0xffffffff809746b1
</pre>
<p>Although the above PFS clustering commands are documented in the hammer2(8) man page, the clustering feature is actually still in early development phase, as per the <a href="https://gitweb.dragonflybsd.org/dragonfly.git/blob/HEAD:/sys/vfs/hammer2/DESIGN#l27" class="external">DESIGN</a> doc. So issues are expected and panic is quite possible.</p>
<p>Still, it should report that the functionality is not implemented rather than panic the system.</p>

DragonFlyBSD - Bug #3352 (New): HAMMER2 ioctl(HAMMER2IOC_DESTROY) is broken
https://bugs.dragonflybsd.org/issues/3352
2023-06-21T08:29:16Z by tkusumi (kusumi.tomohiro@gmail.com)
<p>1. hammer2(8) destroy directive - This succeeds with an "ok" message, but after unmount + mount, the same file is still there.</p>
<p>The reason is that it currently just unlinks the dirent chain from the parent directory (ip) chain,<br />and returns without modifying the parent directory, which is what would cause the flusher to flush.<br />Both VOP_NREMOVE and VOP_NRMDIR modify dip, but this one does not.</p>
<pre><code class="shell syntaxhl" data-language="shell"><span class="c"># mount_hammer2 /dev/vn0 /mnt</span>
<span class="c"># hammer2 destroy /mnt/src/Makefile</span>
/mnt/src/Makefile ok
<span class="c"># hammer2 destroy /mnt/src/Makefile</span>
/mnt/src/Makefile No such file or directory
<span class="c"># umount /mnt</span>
<span class="c"># mount_hammer2 /dev/vn0 /mnt</span>
<span class="c"># hammer2 destroy /mnt/src/Makefile</span>
/mnt/src/Makefile ok
<span class="c"># hammer2 destroy /mnt/src/Makefile</span>
/mnt/src/Makefile No such file or directory
</code></pre>
<p>2. hammer2(8) destroy-inum directive - This simply fails. It seems hammer2_chain_lookup() can't find the chain via inode number.</p>
<pre><code class="shell syntaxhl" data-language="shell"><span class="c"># mount_hammer2 /dev/vn0 /mnt</span>
<span class="c"># ls -li /mnt/src/README</span>
1104 <span class="nt">-rw-r--r--</span> 1 root wheel 10989 Jun 21 00:58 /mnt/src/README
<span class="c"># hammer2 -s /mnt destroy-inum 1104</span>
deleting inodes on /mnt
1104 No such file or directory
<span class="c"># hammer2 -s /mnt destroy-inum 0x450</span>
deleting inodes on /mnt
1104 No such file or directory
</code></pre>

DragonFlyBSD - Bug #3351 (New): XDM and GDM3 display manager under DragonFlyBSD 6.4: After starti...
https://bugs.dragonflybsd.org/issues/3351
2023-06-17T15:22:07Z by adrian
<p>Dear Maintainer,</p>
<p>Currently, under DragonFlyBSD 6.4 with up-to-date packages from the pkg package manager, XDM and GDM3 fail to load correctly.</p>
<p>I am able to start a local Xorg session running CTWM with the 'startx' command. This works!</p>
<p>So the Xorg configuration works. With the same settings, XDM worked under DragonFlyBSD 6.2.</p>
<p>I am running DragonFlyBSD 6.4 as VM under a Debian Linux/testing KVM/qemu host.</p>
<p>Thank you very much in advance.</p>
<p>Sincerely,</p>
<p>Adrian Kieß</p>

DragonFlyBSD - Bug #3350 (New): tmux: Terminfo file usr/share/terminfo/t/tmux-256color does not w...
https://bugs.dragonflybsd.org/issues/3350
2023-06-17T07:43:41Z by adrian
<p>Dear Maintainer,</p>
<p>The terminfo file /usr/share/terminfo/t/tmux-256color does not work correctly when logging into DragonFlyBSD from a remote tmux session.</p>
<p>Running:</p>
<p>export TERM=xterm-256color</p>
<p>after logging in, fixes the broken terminal.</p>
<p>I am using:</p>
<p>root@dragonflybsd /home/adrian<br /># uname -a<br />DragonFly dragonflybsd.v-zone.lan.dac 6.4-RELEASE DragonFly v6.4.0.26.g43e53c-RELEASE #5: Mon Mar 13 12:12:53 CET 2023 adrian@dragonflybsd.v-zone.lan.dac:/usr/obj/usr/src/sys/X86_64_GENERIC x86_64</p>
<p>Thanks a lot in advance.</p>
<p>Sincerely,</p>
<p>Adrian Kieß</p>

DragonFlyBSD - Bug #3348 (New): Panic when trying to mount_hammer2 a file
https://bugs.dragonflybsd.org/issues/3348
2023-03-30T20:41:10Z by zabolekar
<p>Dragonfly 6.4.0.</p>
<p>Observed behavior: when I create a HAMMER2 file system on a regular file and try to mount it, I get a panic.<br />Expected behavior: Maybe the program should exit with an error message? I suspect I'm not supposed to do this kind of thing: the man page of mount_hammer2 says the file should be <em>special</em>.</p>
<p>To reproduce, as root:</p>
<pre><code class="shell syntaxhl" data-language="shell"><span class="nb">truncate</span> <span class="nt">-s</span> 1G hammer2.img
newfs_hammer2 hammer2.img
<span class="nb">mkdir </span>mnt
mount_hammer2 ~/hammer2.img mnt
</code></pre>
<p>This is the output after the last command:</p>
<pre>
hammer2_mount: devstr="/root/hammer2.img@DATA"
hammer2_mount: device="/root/hammer2.img" label="DATA" rdonly=0
panic: assertion "strncmp(path, "/dev/", 5) == 0" failed in hammer2_init_devvp at /usr/src/sys/vfs/hammer2/hammer2_ondisk.c:197
cpuid = 0
Trace beginning at frame 0xfffff800a9bbac18
hammer2_init_devvp() at hammer2_init_devvp+0x48b 0xffffffff8098589b
hammer2_init_devvp() at hammer2_init_devvp+0x48b 0xffffffff8098589b
hammer2_vfs_mount() at hammer2_vfs_mount+0x379 0xffffffff8096a4d9
sys_mount() at sys_mount+0x33b 0xffffffff80702d7b
syscall2() at syscall2+0x11e 0xffffffff80bdc0ae
Debugger("panic")
CPU0 stopping CPUs: 0x00000000
stopped
Stopped at Debugger+0x7c: movb $0,0xbcbd09(%rip)
db>
</pre>

DragonFlyBSD - Bug #3346 (New): kernel panic when evdev device is detached
https://bugs.dragonflybsd.org/issues/3346
2023-02-24T10:35:56Z by peeter (karu.pruun@gmail.com)
<p><strong>Systems affected</strong>: all kernels with evdev.</p>
<p><strong>Description</strong>: kernel panic when a USB mouse or keyboard is detached.</p>
<p><strong>How to reproduce</strong>: detach a USB device (e.g. mouse, keyboard). However, it is difficult to reproduce, since it apparently involves a race and so happens infrequently.</p>
<p><strong>Kernel Backtrace</strong>:</p>
<pre>
#0 _get_mycpu () at ./machine/thread.h:69
#1 panic (fmt=fmt@entry=0xffffffff80c42d28 "%s") at /usr/src/sys/kern/kern_shutdown.c:869
#2 0xffffffff80bda181 in trap_fatal (frame=frame@entry=0xfffff8036a1916a8, eva=eva@entry=0)
at /usr/src/sys/platform/pc64/x86_64/trap.c:1100
#3 0xffffffff80bdafa7 in trap (frame=0xfffff8036a1916a8) at /usr/src/sys/platform/pc64/x86_64/trap.c:786
#4 0xffffffff80b9e3ba in calltrap () at /usr/src/sys/platform/pc64/x86_64/exception.S:319
#5 0xffffffff80645bdd in lockmgr_exclusive (lkp=0x2f3c6d756e766564, flags=flags@entry=2) at /usr/src/sys/kern/kern_lock.c:295
#6 0xffffffff809c7510 in lockmgr (flags=2, lkp=<optimized out>) at /usr/src/sys/sys/lock.h:271
#7 evdev_dtor (data=0xfffff80154636880) at /usr/src/sys/dev/misc/evdev/cdev.c:158
#8 0xffffffff8091b350 in devfs_clear_cdevpriv (fp=0xfffff8034e599100) at /usr/src/sys/vfs/devfs/devfs_core.c:3002
#9 0xffffffff8091d4bc in devfs_fo_close (fp=0xfffff8034e599100) at /usr/src/sys/vfs/devfs/devfs_vnops.c:1234
#10 0xffffffff8062ffcf in fo_close (fp=0xfffff8034e599100) at /usr/src/sys/sys/file2.h:103
#11 fdrop (fp=0xfffff8034e599100) at /usr/src/sys/kern/kern_descrip.c:3103
#12 0xffffffff80630912 in closef (fp=0xfffff8034e599100, p=p@entry=0xfffff801545ff480) at /usr/src/sys/kern/kern_descrip.c:3016
#13 0xffffffff80630b1f in kern_close (fd=19) at /usr/src/sys/kern/kern_descrip.c:1455
#14 0xffffffff80bdbb0e in syscall2 (frame=0xfffff8036a1919f8) at /usr/src/sys/platform/pc64/x86_64/trap.c:1284
#15 0xffffffff80b9ebcd in Xfast_syscall () at /usr/src/sys/platform/pc64/x86_64/exception.S:448
#16 0x000000000000002b in ?? ()
</pre>

DragonFlyBSD - Bug #3341 (New): [NVMM] Support AVX (and AVX2) in VMs
https://bugs.dragonflybsd.org/issues/3341
2023-02-06T08:47:14Z by liweitianux (liweitianux@live.com)
<p>NVMM currently only exports XSAVE to VMs, but no AVX/AVX2/AVX512 yet. However, some "modern" software requires AVX to be able to run ...</p>
<p>To support AVX in NVMM, the FPU code must be improved/refactored first; see NetBSD's FPU code overhaul (also by maxv).</p>
<pre>
host# dmesg | grep Features2
Features2=0x77fafbff<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,SDBG,FMA,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,AVX,F16C,RDRND>
vm# dmesg | grep Features2
Features2=0xe6da3203<SSE3,PCLMULQDQ,SSSE3,FMA,CX16,PCID,SSE4.1,SSE4.2,MOVBE,POPCNT,AESNI,XSAVE,F16C,RDRND,VMM>
</pre>

DragonFlyBSD - Bug #3335 (New): amd* kernel drivers are not properly detecting Zen3 cores
https://bugs.dragonflybsd.org/issues/3335
2023-01-09T08:46:25Z by arcade@b1t.name
<p>Hello.</p>
<p>New changes brought some support for newer-generation AMD processors. Alas, mine is not properly detected:</p>
<p>amdsmn: works</p>
<p>amdsmn0: <AMD Family 19h System Management Network> on hostb0</p>
<p>amdtemp: fails probing</p>
<p>amdtemp0: <AMD CPU On-Die Thermal Sensors> on hostb0<br />amdtemp0: sc_ccd_offset = 00000154<br />amdtemp0: probe ccd sensors 19h 50<br />amdtemp0: probe ccd0 error 0 val=00000000<br />amdtemp0: probe ccd1 error 0 val=00000000<br />amdtemp0: probe ccd2 error 0 val=00000000<br />amdtemp0: probe ccd3 error 0 val=00000000<br />amdtemp0: probe ccd4 error 0 val=00000000<br />amdtemp0: probe ccd5 error 0 val=00000000<br />amdtemp0: probe ccd6 error 0 val=00000000<br />amdtemp0: probe ccd7 error 0 val=00000000</p>
<p>amdsbwd: fails probing</p>
<p>amdsbwd0: <AMD FCH Rev 41h+ Watchdog Timer> at iomem 0xfed80b00-0xfed80b03,0xfed80b04-0xfed80b07 on isa0<br />amdsbwd0: watchdog hardware is disabled<br />device_probe_and_attach: amdsbwd0 attach returned 6<br />sio0: can't drain, serial port might not exist, disabling<br />sio1: can't drain, serial port might not exist, disabling<br />amdsbwd1: <AMD FCH Rev 41h+ Watchdog Timer> at iomem 0xfed80b00-0xfed80b03,0xfed80b04-0xfed80b07 on isa1<br />amdsbwd1: watchdog hardware is disabled<br />device_probe_and_attach: amdsbwd1 attach returned 6</p>
<p>My cpu is: AMD Ryzen 5 PRO 5650GE</p>
<p>Thanks in advance!</p>

DragonFlyBSD - Bug #3330 (New): Swapping
https://bugs.dragonflybsd.org/issues/3330
2022-09-26T10:49:15Z by arcade@b1t.name
<p>Hello. I just want to offload a few ideas about swapping.</p>
<p>First of all, nowadays if you are running off an HDD you can consume memory faster than swapout can free it. This means that memory allocation throttling will be kicking in almost all of the time when you are trying to do memory-intensive work. The more cores the merrier: an average compiler (GCC or Clang) can easily use up to 2G of RAM reading a minimal set of files, so memory will run out in no time.</p>
<p>This can be partially mitigated by tweaking this:</p>
<p>vm.v_paging_target2=881000<br />vm.v_paging_target1=704800<br />vm.v_paging_start=528600</p>
<p>I just upped them 10x, and now swapping out starts before we reach the low-memory state and continues after it, so throttling happens less often. A proper way to do this might be to autosize them, with _start at 10% of RAM and the others scaled accordingly.</p>
<p>Second, vm.v_paging_wait. I see an opportunity here. Right now it's the same for all processes, but it could be improved by making it default to 1% of RAM + (process priority * 0.01% of RAM). This means that in tough situations processes with higher nice or rtprio values would be able to get memory first, and lower-priority processes would be throttled more.</p>
<p>Third one. On HDDs, swap writes and reads show numbers that differ by an order of magnitude. Swapout can reach 40MB/s or even higher, while swapin is almost always under 2MB/s. If a huge process was pushed out of RAM, it can take minutes to get it back. I think this is because swapin fetches individual pages from the drive. Even the oldest HDDs have a prefetch cache that works by reading into cache all the pages in a cylinder before the requested page is found. It works pretty well most of the time, except for swapping; probably the random access pattern generates too much prefetch data, overwriting everything we might want to read next. This could be partially solved by reading not just the requested page from swap, but a minimum of 64k from the drive whenever we are not paging out data. That might improve reading huge processes back from swap.</p>
<p>Thanks for your attention! I hope these ideas are worthy.</p>

DragonFlyBSD - Bug #3323 (New): virtio (if_vtnet...) not detected on Hetzner cloud (AMD system)
https://bugs.dragonflybsd.org/issues/3323
2022-08-08T19:15:54Z by mneumann
<p>`pciconf -lv` lists a "Virtio network device"</p>
<p><img src="https://bugs.dragonflybsd.org/attachments/download/1724/clipboard-202208082105-nspru.png" alt="" loading="lazy" /></p>
<p>But it does not show up under `ifconfig`. The virtio SCSI hard disk is also not detected.</p>
<p>Just curious if I am doing anything wrong.</p>