DragonFlyBSD bugtracker: Issues — https://bugs.dragonflybsd.org/ (2015-06-13)
DragonFlyBSD - Bug #2828 (New): On AMD APUs and Bulldozer CPUs, the machdep.cpu_idle_hlt sysctl s... (https://bugs.dragonflybsd.org/issues/2828, 2015-06-13, vadaszi)
<p>Power usage of a default install is unnecessarily high on current AMD CPUs. Setting the default value of the machdep.cpu_idle_hlt sysctl on these CPUs to 3 by default allows for significant power savings.</p>
<p>I'm not sure how setting machdep.cpu_idle_hlt=3 affects power usage on AMD Family 10h CPUs (e.g. Phenom CPUs).</p>
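For anyone wanting to try the proposed default by hand, a sketch (assuming the stock /etc/sysctl.conf mechanism for persistent sysctls):

```
# /etc/sysctl.conf: the value proposed in this report for AMD APUs
# and Bulldozer CPUs
machdep.cpu_idle_hlt=3
```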
<p>Some quick benchmarking should be done, if possible, to compare the performance difference.</p>

DragonFlyBSD - Bug #2577 (New): virtio-blk iops performance is cpu limited on high end devices (https://bugs.dragonflybsd.org/issues/2577, 2013-08-01, gjs278, gjs278@yahoo.com)
<p>QEMU 1.5.2 on a Gentoo AMD64 host (kernel 3.10.4) with an i7 980X processor at 4.2 GHz:</p>
<pre>
qemu-system-x86_64 -machine accel=kvm -cpu host -drive file=/dev/fioa3,if=virtio,cache=none,aio=native -balloon virtio -smp 6 -m 6144M
</pre>
<p>/dev/fioa3 is a 160 GB SLC Fusion-io card.</p>
<p>DragonFlyBSD 3.4.2-RELEASE is the guest OS</p>
<pre>
# /tmp/rr1 /dev/vbd0
Device /dev/vbd0 bufsize 512 limit 10.800GB nprocs 32
randrand 1.001s 24293 loops = 41.202uS/loop
randrand 1.002s 24384 loops = 41.072uS/loop
randrand 1.001s 24633 loops = 40.640uS/loop
</pre>
<pre>
# /tmp/rr1 /dev/vbd0 4096
Device /dev/vbd0 bufsize 4096 limit 10.800GB nprocs 32
randrand 1.001s 24333 loops = 41.119uS/loop
randrand 1.002s 24389 loops = 41.052uS/loop
randrand 1.001s 24367 loops = 41.093uS/loop
</pre>
<pre>
# /tmp/rr1 /dev/vbd0 16384
Device /dev/vbd0 bufsize 16384 limit 10.800GB nprocs 32
randrand 1.001s 21006 loops = 41.619uS/loop
randrand 1.002s 21167 loops = 41.348uS/loop
randrand 1.001s 20520 loops = 48.850uS/loop
</pre>
<p>CPU usage on the host nears 100% while /tmp/rr1 is running. At nprocs 32 the device should be capable of at least 100k iops; the same 25k limit is seen using an SSD array as well.</p>

DragonFlyBSD - Submit #2438 (Feedback): TRIM fixes (https://bugs.dragonflybsd.org/issues/2438, 2012-10-22, Anonymous)
<p>This patch is to fix bugs associated with TRIM.</p>
<p>If TRIM is enabled as a mount option, display that when typing "mount".</p>
<p>Change post-TRIM ffs_blkfree_cg() to use taskqueue_swi_mp and take the mp token when modifying the freemap.</p>
<p>Make sure TRIM works with softdep. Stash a copy of the vnode's mount point in the ufs inode so that, if we are using softdep, we can reach the mount point through the faked-up inode (created in freeblocks). The original mount-point path (ip->i_devvp->v_mount->mnt_flag) doesn't have the mount-point options.</p>
<p>Tim</p>

DragonFlyBSD - Bug #2391 (In Progress): System lock with ahci and acpi enabled on ATI RS690 chips... (https://bugs.dragonflybsd.org/issues/2391, 2012-06-24, jorisgio, joris@giovannangeli.fr)
<p>Page fault during boot with ahci and acpi.<br />Kernel boots without ahci but locks up in userspace after a fixed time.<br />Page fault without acpi but with ahci.</p>

DragonFlyBSD - Bug #2370 (New): panic: ffs_valloc: dup alloc (https://bugs.dragonflybsd.org/issues/2370, 2012-05-16, marino)
<p>core text file located: <a class="external" href="http://leaf.dragonflybsd.org/~marino/core/core.ffs_valloc_dup_alloc.txt">http://leaf.dragonflybsd.org/~marino/core/core.ffs_valloc_dup_alloc.txt</a><br />core dump located on leaf, ~/marino/crash</p>
<p>uname: DragonFly v3.1.0.634.gc6fd7-DEVELOPMENT #3: Sat May 5 09:02:18 CEST 2012 root@dracofly.synsport.com:/usr/obj/usr/src/sys/GENERIC</p>
<p>backtrace:</p>
<pre>
Unread portion of the kernel message buffer:
mode = 041777, inum = 4, fs = /mech
panic: ffs_valloc: dup alloc
cpuid = 1
Trace beginning at frame 0xe4b8b838
panic(ffffffff,1,c07290b3,e4b8b86c,d867f860) at panic+0x1a8 0xc039af00
panic(c07290b3,43ff,4,defc10d4,e4b8b8f0) at panic+0x1a8 0xc039af00
ffs_valloc(dee285d8,81a4,e00929e0,e4b8b8f0,c82dcc10) at ffs_valloc+0x518 0xc0542ad4
ufs_makeinode(e4b8ba8c,d867f97c,dee285d8,dee67460,e4b8ba2c) at ufs_makeinode+0x71 0xc0556327
ufs_create(e4b8ba38,e4b8ba68,c04139c0,e4b8ba38,c07d5b38) at ufs_create+0x2c 0xc0556692
ufs_vnoperate(e4b8ba38,c07d5b38,dee67460,dee67460,c04043ef) at ufs_vnoperate+0x16 0xc0555bb6
vop_old_create(dee67460,dee285d8,e4b8bbf0,e4b8ba8c,e4b8bb44) at vop_old_create+0x5b 0xc04139c0
vop_compat_ncreate(e4b8badc,e4b8bad0,c0555bb6,e4b8badc,e4b8bb10) at vop_compat_ncreate+0x11d 0xc03f9813
vop_defaultop(e4b8badc,e4b8bb10,c0412061,e4b8badc,c07d5c98) at vop_defaultop+0x16 0xc03f7f72
ufs_vnoperate(e4b8badc,c07d5c98,dee67460,c7e6f2f8,0) at ufs_vnoperate+0x16 0xc0555bb6
vop_ncreate(dee67460,e4b8bc74,dee285d8,e4b8bbf0,e00929e0) at vop_ncreate+0x64 0xc0412061
vn_open(e4b8bc74,e3731f08,602,1a4,c77f211c) at vn_open+0x183 0xc0410eab
kern_open(e4b8bc74,601,1b6,e4b8bcf0,e372f7a8) at kern_open+0xa3 0xc040e0c7
sys_open(e4b8bcf0,e4b8bd00,c,c03a6652,d867f860) at sys_open+0x54 0xc040e3a9
syscall2(e4b8bd40) at syscall2+0x270 0xc067745c
Xint0x80_syscall() at Xint0x80_syscall+0x36 0xc0646466
Debugger("panic")
</pre>

DragonFlyBSD - Bug #2113 (New): nmalloc threaded program fork leak (https://bugs.dragonflybsd.org/issues/2113, 2011-08-12, vsrinivas, vsrinivas@ops101.org)
<p>When a threaded program forks, magazines held by threads other than the forkee should be released along with their contents. Currently we leak those buffers.</p>

DragonFlyBSD - Bug #1587 (Feedback): can't gdb across fork (https://bugs.dragonflybsd.org/issues/1587, 2009-10-25, corecode)
<p>When the debugged process performs a fork(), gdb/ptrace won't notice and will not be able to remove breakpoints in the new child. When the child then hits a breakpoint, it will receive a SIGTRAP and dump core.</p>
<p>gdb needs to be aware of forks, so that it will be able to remove the breakpoints in the child.</p>
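For comparison, where a gdb build with fork support is available (an assumption; the point of this report is that gdb/ptrace on DragonFly lacked it), the relevant settings are:

```
(gdb) set follow-fork-mode child   # debug the new child after fork()
(gdb) set detach-on-fork off       # keep both parent and child under control
```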
<p>Test: gdb sh and break stalloc.</p>

DragonFlyBSD - Bug #1579 (Feedback): dfly 2.4.1 does not like HP DL360G4p and Smart Array 6400 wi... (https://bugs.dragonflybsd.org/issues/1579, 2009-10-19, tomaz.borstnar)
<p>Hello!</p>
<p>I have a HP DL360G4p machine with 5 GB of RAM, an internal SATA disk, and an HP SmartArray 6400 controller (2 channels) with a 6400EM card (an additional 2 channels attached as a daughter card) and an MSA20 enclosure with SATA disks, configured as 4 volumes. The install went fine, but dmesg prints some errors (full dmesg output attached); here are the relevant parts:</p>
<pre>
bge0: <Broadcom BCM5704C Dual Gigabit Ethernet> mem 0xfddf0000-0xfddfffff irq 7 at device 2.0 on pci2
bge0: CHIP ID 0x21000000; ASIC REV 0x02; CHIP REV 0x21; PCI-X
alignment check failed
miibus0: <MII bus> on bge0
brgphy0: <BCM5704 10/100/1000baseT PHY> on miibus0
brgphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-FDX, auto
bge0: MAC address: 00:15:60:0f:b0:e8
bge1: <Broadcom BCM5704C Dual Gigabit Ethernet> mem 0xfdde0000-0xfddeffff irq 7 at device 2.1 on pci2
bge1: CHIP ID 0x21000000; ASIC REV 0x02; CHIP REV 0x21; PCI-X
alignment check failed
alignment check failed
miibus1: <MII bus> on bge1
brgphy1: <BCM5704 10/100/1000baseT PHY> on miibus1
brgphy1: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-FDX, auto
bge1: MAC address: 00:15:60:0f:b0:e7
</pre>
<p>And...</p>
<pre>
CAM: Configuring 5 busses
CAM: finished configuring all busses (-2 left)
WARNING waiting for the following device to finish configuring:
 xpt: func=0xc015423d arg=0
WARNING waiting for the following device to finish configuring:
 xpt: func=0xc015423d arg=0
WARNING waiting for the following device to finish configuring:
 xpt: func=0xc015423d arg=0
WARNING waiting for the following device to finish configuring:
 xpt: func=0xc015423d arg=0
WARNING waiting for the following device to finish configuring:
 xpt: func=0xc015423d arg=0
Giving up, interrupt routing is probably hosed
</pre>
<p>But ciss is seen:</p>
<pre>
ciss0: <HP Smart Array 6400> port 0x5000-0x50ff mem 0xfdff0000-0xfdff1fff irq 7 at device 4.0 on pci11
ciss1: <HP Smart Array 6400 EM> port 0x5400-0x54ff mem 0xfdf70000-0xfdf71fff irq 5 at device 5.0 on pci11
</pre>
<p>Anything I can help to test?</p>
<p>Tomaž</p>

DragonFlyBSD - Bug #1428 (Feedback): POSIX.1e implementation is too old (https://bugs.dragonflybsd.org/issues/1428, 2009-07-17, hasso)
<p>Our posix1e(3) implementation is too old and no longer compatible with any reasonably recent software.</p>

DragonFlyBSD - Bug #1397 (Feedback): jobs -l output inconsistency when called from script (https://bugs.dragonflybsd.org/issues/1397, 2009-06-08, Anonymous)
<p>Salute.</p>
<p>The jobs(1) utility gives different output when called from a script and when called from an interactive shell.</p>
<pre>
[beket@voyager ~] cat testjobs.sh
#!/bin/sh
sleep 30 &
jobs -l
[beket@voyager ~] sh testjobs.sh
[1] + 10005 Running
[beket@voyager ~] sleep 30 &
[1] 10006
[beket@voyager ~] jobs -l
[1]+ 10006 Running sleep 30 &
[beket@voyager ~]
</pre>
<p>It is not clear whether jobs(1) should work at all inside a script. POSIX says that, since it doesn't fall into the 'special' built-in category, a new environment (subshell?) would be created upon its invocation. Even if this is true, the jobs aren't specific to the shell environment, so they should be visible to jobs(1). And in any case, the command should either print nothing or print all the fields.</p>
<p>NetBSD 5.0:<br /><pre>
$ sh testjobs.sh
[1] + 27159 Running sleep 30
</pre></p>
<p>SunOS 5.10:<br /><pre>
tuxillo@solaris$ /usr/xpg4/bin/sh testjobs.sh
[1] + 11754 Running <command unknown>
</pre></p>
<p>FreeBSD: same as us. (kindly reported by vstemen at #dragonflybsd).</p>
<p>Any thoughts?</p>
<p>Best regards,<br />Stathis</p>

DragonFlyBSD - Bug #1287 (Feedback): altq configuration doesn't work (https://bugs.dragonflybsd.org/issues/1287, 2009-02-18, corecode)
<p>There seem to be serious issues with ALTQ.</p>
<p>Just now I tried to disable it: by removing the altq queue definitions from pf.conf and resyncing, by disabling pf, and also by flushing all pf data. Nevertheless it kept on doing bandwidth shaping; I had to reboot to get rid of it.</p>
<p>Also I had the feeling that changing the altq queue parameters and resyncing did not have any effect on the queueing behavior (changed from 560kbps to 5kbps), but I could be mistaken in that regard.</p>

DragonFlyBSD - Bug #911 (Feedback): kldload/kernel linker can exceed malloc reserve and panic system (https://bugs.dragonflybsd.org/issues/911, 2008-01-11, corecode)
<p>hey,</p>
<p>I just booted with hw.physmem=64m and got a panic when trying to load a module:</p>
<p>panic: kld: malloc limit exceeded</p>
<pre>
(kgdb) bt
#0 dumpsys () at thread.h:83
#1 0xc018450c in boot (howto=256) at /usr/build/src/sys/kern/kern_shutdown.c:375
#2 0xc0184661 in panic (fmt=Variable "fmt" is not available.
) at /usr/build/src/sys/kern/kern_shutdown.c:800
#3 0xc0182129 in kmalloc (size=78, type=0xc02f5600, flags=2)
    at /usr/build/src/sys/kern/kern_slaballoc.c:445
#4 0xc01678b4 in linker_make_file (pathname=0xc641b000 "./nvidia.ko", priv=0xc63ea028,
    ops=0xc02f5bc8) at /usr/build/src/sys/kern/kern_linker.c:369
#5 0xc016a773 in link_elf_load_module (filename=0xc641b000 "./nvidia.ko", result=0xc83efc7c)
    at /usr/build/src/sys/kern/link_elf.c:604
#6 0xc01684e0 in linker_load_file (filename=0xc641b000 "./nvidia.ko", result=0xc83efca8)
    at /usr/build/src/sys/kern/kern_linker.c:272
#7 0xc016871c in sys_kldload (uap=0xc83efcf0) at /usr/build/src/sys/kern/kern_linker.c:724
</pre>
<p>The problem seems to be that M_LINKER has already used 10% of all memory (allegedly). In this case kmalloc() simply panics when called with M_WAITOK but without M_NULLOK. This is quite unfortunate. Shouldn't we try to stay alive: print a warning and block, hoping that the problem will resolve itself?</p>
<p>cheers<br /> simon</p>

DragonFlyBSD - Bug #901 (Feedback): route show needs to get data from all cpus (https://bugs.dragonflybsd.org/issues/901, 2007-12-31, corecode)
<p>When executing `route show' on my MP system, I get varying results. Additionally, it prints all temporary route entries as well, even for hosts outside the local network.</p>

DragonFlyBSD - Bug #847 (Feedback): processes getting stuck on mount point (https://bugs.dragonflybsd.org/issues/847, 2007-11-23, corecode)
<p>Hey,</p>
<p>I just experienced the infamous ``cache_lock: blocked on 0xd591d418 ""'' message. Checking why the process got stuck revealed that the lock is actually held by another process, which is in the middle of doing an lstat(2) on /mnt, an nfs mount whose server went away. The stuck process is doing the same, fwiw.</p>
<p>So here it is not a namecache bug, but rather an artifact of nfs being stuck. Annoying nevertheless. Anybody have a clue how to fix that? Yes, mount with -intr. Why don't we do that by default?</p>
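A sketch of that workaround (server:/export and /mnt are placeholders; the option naming assumes the usual nfs fstab conventions):

```
# /etc/fstab: mount the NFS share interruptible, per the -intr suggestion
server:/export  /mnt  nfs  rw,intr  0  0
```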
<p>cheers<br /> simon</p>

DragonFlyBSD - Bug #293 (Feedback): Various updates to the handbook (https://bugs.dragonflybsd.org/issues/293, 2006-08-11, victor)
<p>Hi,</p>
<p>there are 3 patches attached:</p>
<p>book.diff - Updates the copyright info relating to FreeBSD at the header of the handbook.</p>
<p>dfbsd-updating - Updates the cvsup port path to the current pkgsrc version in the chapter "Updating DragonFly".</p>
<p>basics.diff - Updates various paths relating to pkgsrc and hier(7). Also makes it use the new entity for the pkgsrc tree/collection/framework.</p>