DragonFlyBSD bugtracker: Issues (https://bugs.dragonflybsd.org/, feed generated 2017-12-17T19:17:50Z)
DragonFlyBSD - Bug #3113 (In Progress): Booting vkernel fails due to being out of swap space
https://bugs.dragonflybsd.org/issues/3113 | 2017-12-17T19:17:50Z | tcullen
<p>The step-by-step directions at <a class="external" href="https://www.dragonflybsd.org/docs/handbook/vkernel/">https://www.dragonflybsd.org/docs/handbook/vkernel/</a> fail: when you attempt to boot the vkernel you never get a shell prompt, because the shell is instantly and repeatedly killed by a never-ending stream of "out of swap space" error messages.</p>

DragonFlyBSD - Bug #2735 (New): iwn panics SYSASSERT
https://bugs.dragonflybsd.org/issues/2735 | 2014-11-14T22:04:08Z | cnb (cneirabustos@gmail.com)
<p>The iwn driver panics with the SYSASSERT error that is described at this link:<br /><a class="external" href="http://lists.freebsd.org/pipermail/freebsd-wireless/2013-July/003653.html">http://lists.freebsd.org/pipermail/freebsd-wireless/2013-July/003653.html</a></p>

DragonFlyBSD - Bug #2499 (In Progress): DRAGONFLY_3_2 lockd not responding correctly
https://bugs.dragonflybsd.org/issues/2499 | 2013-01-22T08:41:27Z | Nerzhul
<p>Hello,<br />I must use lockd for concurrent access on a webserver with NFS extended storage. Under concurrent access, lockd isn't responding correctly.</p>
<p>On the NFSv3 client, timeout appears and console logs:<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd not responding<br />nfs server A.B.C.65:/nfs/fbsd_pkg: lockd is alive again</p>
<p>After "netstat -an -f inet" i see there is a queue on rpc socket</p>
<pre><code>$ netstat -an -f inet
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address      Foreign Address        (state)
tcp4       0      0  A.B.C.65.nfsd      WebCluster1.977        ESTABLISHED
tcp4       0      0  A.B.C.65.nfsd      WebCluster1.611        ESTABLISHED
tcp4       0      0  localhost.smtp     *.*                    LISTEN
tcp4       0      0  *.ssh              *.*                    LISTEN
tcp4       0      0  *.1017             *.*                    CLOSED
tcp4       0      0  *.1020             *.*                    LISTEN
tcp4       0      0  *.nfsd             *.*                    LISTEN
tcp4       0      0  *.1023             *.*                    LISTEN
tcp4       0      0  *.1022             *.*                    LISTEN
tcp4       0      0  *.sunrpc           *.*                    LISTEN
tcp4       0      0  A.B.C.65.nfsd      A.B.C.96.811           ESTABLISHED
tcp4       0      0  A.B.C.65.nfsd      WebCluster1.972        ESTABLISHED
tcp4       0     48  A.B.C.65.ssh       129.175.196.190.60067  ESTABLISHED
udp4       0      0  *.918              *.*
udp4       0      0  A.B.C.65.1028      ntp.u-psud.fr.ntp
udp4     456      0  *.1017             *.*
udp4   18656      0  *.1018             *.*
udp4       0      0  *.nfsd             *.*
udp4       0      0  *.1021             *.*
udp4       0      0  *.1020             *.*
udp4       0      0  *.1022             *.*
udp4       0      0  *.sunrpc           *.*</code></pre>
<p>When I see that, I run tcpdump -nni em0 to see what's happening:</p>
<p>22:12:42.781597 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212<br />22:12:48.801935 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212<br />22:12:54.669917 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212<br />22:13:00.148965 IP 10.117.100.95.961 > 10.117.100.65.1017: UDP, length 212</p>
<p>After a little while, lockd responds to all requests, but many have already failed because of the timeout.</p>
<p>On the DragonFlyBSD server I can see this in /var/log/messages:</p>
<p>Jan 21 22:14:19 webfiler1 rpc.lockd: duplicate lock from WebCluster1.srv.<br />Jan 21 22:14:19 webfiler1 last message repeated 3 times<br />Jan 21 22:14:19 webfiler1 rpc.lockd: no matching entry for WebCluster1.srv.<br />Jan 21 22:14:29 webfiler1 dntpd[571]: issuing offset adjustment: 0.026637s<br />Jan 21 22:14:44 webfiler1 rpc.lockd: rpc to statd failed: RPC: Timed out<br />Jan 21 22:14:44 webfiler1 rpc.lockd: duplicate lock from WebCluster1.srv.<br />Jan 21 22:14:44 webfiler1 last message repeated 3 times<br />Jan 21 22:14:44 webfiler1 rpc.lockd: no matching entry for WebCluster1.srv.</p>
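<p>Not from the original report: a hedged diagnostic sketch for this kind of lockd/statd flapping. The "rpc to statd failed: RPC: Timed out" line above suggests checking statd as well as lockd. The addresses mirror the ones quoted in the report; adjust for the real setup.</p>

```shell
# Run from the NFS client. A.B.C.65 is the server address from the report.
rpcinfo -p A.B.C.65            # confirm nlockmgr and status are registered
rpcinfo -u A.B.C.65 nlockmgr   # ping rpc.lockd over UDP
rpcinfo -u A.B.C.65 status     # ping rpc.statd over UDP

# On the server, watch whether RPC sockets' receive queues keep growing,
# as in the netstat output above (non-zero Recv-Q on the lockd/statd ports):
netstat -an -f inet | awk 'NR > 2 && ($2 > 0 || $3 > 0)'
```

If statd times out while lockd still answers, the queueing would be upstream of lockd itself.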
<p>I think there is a problem on DragonFlyBSD which queues many lockd requests.</p>

DragonFlyBSD - Bug #2423 (New): After multiple panics/locks, hitting KKASSERT in hammer_init_cursor
https://bugs.dragonflybsd.org/issues/2423 | 2012-09-17T17:12:40Z | rumcic
<p>After quite a few lockups (machine stopped responding, nothing on the serial console, unable to panic the machine), the HAMMER FS (v6) got a corrupted(?) UNDO.</p>
<p>Booting the latest snapshot and mounting with ro,noatime, it looked as if it was able to rerun the UNDO/FIFO. But trying to mount the "clean" FS results in tripping over a KKASSERT (error == 0 at hammer_cursor.c:202).</p>

DragonFlyBSD - Bug #2396 (Feedback): Latest 3.1 development version core dumps while destroying m...
https://bugs.dragonflybsd.org/issues/2396 | 2012-07-18T10:50:26Z | sgeorge (sgeorge.ml2@gmail.com)
<p>Hi,</p>
<p>I was destroying a master PFS on the ROOT volume and the system (v3.1.0.827.gf6167a5-DEVELOPMENT) core dumped.<br />I tried today's latest snapshot and got the same result.<br />The core dump is uploaded to sgeorge@leaf:~/crash/Coredump20120718.tbz</p>
<p>panic: assertion "layer2->zone == zone" failed in hammer_blockmap_free at /usr/src/sys/vfs/hammer/hammer_blockmap.c:1020<br />cpuid = 0<br />Trace beginning at frame 0xffffffe09e20f178<br />panic() at panic+0x1fb 0xffffffff804bef68 <br />panic() at panic+0x1fb 0xffffffff804bef68 <br />hammer_blockmap_free() at hammer_blockmap_free+0x2e5 0xffffffff80691a0c <br />hammer_delete_at_cursor() at hammer_delete_at_cursor+0x4e2 0xffffffff806aac62 <br />hammer_pfs_rollback() at hammer_pfs_rollback+0x26c 0xffffffff806b0b20 <br />hammer_ioc_destroy_pseudofs() at hammer_ioc_destroy_pseudofs+0x77 0xffffffff806b0c6c <br />hammer_ioctl() at hammer_ioctl+0x80e 0xffffffff806a5b1e <br />hammer_vop_ioctl() at hammer_vop_ioctl+0x58 0xffffffff806be8d3 <br />vop_ioctl() at vop_ioctl+0x98 0xffffffff8053d244 <br />vn_ioctl() at vn_ioctl+0xfd 0xffffffff8053a4d9 <br />fo_ioctl() at fo_ioctl+0x46 0xffffffff804f026e <br />mapped_ioctl() at mapped_ioctl+0x493 0xffffffff804f0725 <br />sys_ioctl() at sys_ioctl+0x1c 0xffffffff804f07be <br />syscall2() at syscall2+0x370 0xffffffff807814c1 <br />Xfast_syscall() at Xfast_syscall+0xcb 0xffffffff8076ae2b <br />(null)() at 0 0 <br />(null)() at 0x723d524553550061 0x723d524553550061</p>
<p>Fatal trap 9: general protection fault while in kernel mode<br />cpuid = 0; lapic->id = 00000000<br />instruction pointer = 0x8:0xffffffff8077acf9<br />stack pointer = 0x10:0xffffffe09e20f010<br />frame pointer = 0x10:0xffffffe09e20f028<br />code segment = base 0x0, limit 0xfffff, type 0x1b
= DPL 0, pres 1, long 0, def32 0, gran 1<br />processor eflags = interrupt enabled, resume, IOPL = 0<br />current process = 957<br />current thread = pri 10 <br />kernel: type 9 trap, code=0</p>
<p>CPU0 stopping CPUs: 0x00000002<br /> stopped<br />Physical memory: 3787 MB<br />Dumping 1055 MB: 1040 1024 1008 992 976 960 944 928 912 896 880 864 848 832 816 800 784 768 752 736 720 704 688 672 656 640 624 608 592 576 560 544 528 512 496 480 464 448 432 416 400 384 368 352 336 320 304 288 272 256 240 224 208 192 176 160 144 128 112 96 80 64 48 32 16</p>

DragonFlyBSD - Bug #2347 (Feedback): Hammer PFSes destroy does not give back full space allocated...
https://bugs.dragonflybsd.org/issues/2347 | 2012-04-11T07:17:48Z | sgeorge (sgeorge.ml2@gmail.com)
<p>I was mirroring PFSes from 3.1 dev to slaves on 3.0.2 and found that the PFSes took more space on the 3.0.2 slave. Investigating, I found this strange thing:</p>
<p>94 GB is allocated for this slave PFS. But when it is removed only 52<br />GB is freed :-(</p>
<p>dfly-bkpsrv2# hammer dedup /pfs/software<br />Dedup running<br />Dedup /pfs/software succeeded<br />Dedup ratio = 1.06<br /> 100 GB referenced<br /> 94 GB allocated<br /> 4339 KB skipped<br /> 429 CRC collisions<br /> 0 SHA collisions<br /> 1 bigblock underflows<br /> 0 new dedup records<br /> 0 new dedup bytes</p>
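<p>A hedged aside, not from the report: since the volume has been through "hammer dedup", big-blocks can be shared between PFSes, so destroying one PFS frees only blocks that no other PFS still references, which could account for part of the 94 GB vs 52 GB gap. A sketch of how that might be checked (subcommand names as in hammer(8); the PFS path is the one from the report):</p>

```shell
# Estimate the dedup ratio without modifying anything:
hammer dedup-simulate /pfs/software

# After pfs-destroy, a cleanup pass (prune/rebalance/reblock) may be needed
# before df reflects all of the reclaimable space:
hammer cleanup /
```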
<pre><code>dfly-bkpsrv2# df -h
Filesystem              Size   Used  Avail Capacity  Mounted on
ROOT                    459G   354G   106G    77%    /
devfs                   1.0K   1.0K     0B   100%    /dev
/dev/serno/QM00001.s1a  756M   168M   527M    24%    /boot
/pfs/@@-1:00001         459G   354G   106G    77%    /var
/pfs/@@-1:00002         459G   354G   106G    77%    /tmp
/pfs/@@-1:00003         459G   354G   106G    77%    /usr
/pfs/@@-1:00004         459G   354G   106G    77%    /home
/pfs/@@-1:00005         459G   354G   106G    77%    /usr/obj
/pfs/@@-1:00006         459G   354G   106G    77%    /var/crash
/pfs/@@-1:00007         459G   354G   106G    77%    /var/tmp
procfs                  4.0K   4.0K     0B   100%    /proc
dfly-bkpsrv2# ls
home software usr var var.tmp vms2-lxc
mysql-baks tmp usr.obj var.crash vms1-lxc
dfly-bkpsrv2# hammer pfs-destroy /pfs/software
You have requested that PFS#11 () be destroyed
This will irrevocably destroy all data on this PFS!!!!!
Do you really want to do this? y
Destroying PFS #11 () in 5 4 3 2 1.. starting destruction pass
pfs-destroy of PFS#11 succeeded!
dfly-bkpsrv2# df -h
Filesystem              Size   Used  Avail Capacity  Mounted on
ROOT                    459G   302G   158G    66%    /
devfs                   1.0K   1.0K     0B   100%    /dev
/dev/serno/QM00001.s1a  756M   168M   527M    24%    /boot
/pfs/@@-1:00001         459G   302G   158G    66%    /var
/pfs/@@-1:00002         459G   302G   158G    66%    /tmp
/pfs/@@-1:00003         459G   302G   158G    66%    /usr
/pfs/@@-1:00004         459G   302G   158G    66%    /home
/pfs/@@-1:00005         459G   302G   158G    66%    /usr/obj
/pfs/@@-1:00006         459G   302G   158G    66%    /var/crash
/pfs/@@-1:00007         459G   302G   158G    66%    /var/tmp
procfs                  4.0K   4.0K     0B   100%    /proc</code></pre>

DragonFlyBSD - Bug #2296 (In Progress): panic: assertion "m->wire_count > 0" failed
https://bugs.dragonflybsd.org/issues/2296 | 2012-02-02T06:56:02Z | thomas.nikolajsen
<p>With recent (29/1-12) master and rel3_0 I get this panic during parallel make release and buildworld, e.g.:<br />'make MAKE_JOBS=10 release' (i.e. make -j10)<br />i386 STANDARD<br />(custom kernel, includes INCLUDE_CONFIG_FILE)<br />on an 8-core host (Opteron).</p>
<p>Got this panic twice; succeeds w/o MAKE_JOBS</p>
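<p>For anyone trying to reproduce this, a hedged sketch of arming crash dumps so each panic leaves a core for analysis; the dump device name here is an assumption, substitute the machine's actual swap partition:</p>

```shell
# One-off, as root: point panics at the swap partition (name is hypothetical)
dumpon /dev/ad0s1b

# Persistently, in /etc/rc.conf, so savecore(8) collects the dump at boot:
# dumpdev="/dev/ad0s1b"
```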
<p>Core dump at leaf: ~thomas:crash/octopus.i386.3</p>

DragonFlyBSD - Bug #2141 (New): loader and/or documentation broken
https://bugs.dragonflybsd.org/issues/2141 | 2011-10-09T14:15:29Z | sjg
<p>For example,</p>
<pre><code>The ehci driver is automatically loaded upon boot. To disable this<br /> behavior temporarily, the ehci_load variable can be unset at the loader<br /> prompt (see loader(8)). To disable it permanently, the<br /> hint.ehci.0.disabled tunable can be set to 1 in /boot/loader.conf.</code></pre>
<p>But when operating from the loader prompt the ehci_load variable has no effect at all; it seems to only be checked from the menu, which is useless if you are operating from the prompt.</p>
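<p>For contrast, the permanent method from the quoted man page text; a minimal sketch, assuming the hint name quoted above is accurate for this loader version:</p>

```shell
# /boot/loader.conf -- disable ehci(4) permanently
hint.ehci.0.disabled="1"

# At the interactive loader prompt the equivalent would be expected to be:
#   set hint.ehci.0.disabled=1
#   boot
# whereas "unset ehci_load" is only honored by the menu, per this report.
```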
<p>This is confusing at best, but I am leaning more towards "steaming pile". The loader or the documentation needs to be reworked.</p>

DragonFlyBSD - Bug #2140 (New): hammer_io_delallocate panic with 'duplicate entry' message
https://bugs.dragonflybsd.org/issues/2140 | 2011-10-06T20:04:53Z | ttw
<p>Getting regular panics on HAMMER FSs when transferring large quantities of data over the network. Initially saw the issue with NFS but eliminated that, and now see it with 'tar' piped across 'ssh'. It was most prevalent when using USB or FW storage, but the most recent test against an internal IDE drive threw up the same error.</p>
<p>NB: transfers are generally across unreliable links (i.e. wireless) and oftentimes long stalls (i.e. minutes) occur. The most recent crash was after I had paused a 'tar/ssh' transfer using CTRL-Z.</p>
<p>NB: turning on crash dumps and re-executing the tests, but it may take hours to fail; any pointers in the meantime would be excellent.</p>

DragonFlyBSD - Bug #2117 (New): ACPI and/or bce(4) problem with 2.11.0.673.g0d557 on HP DL380 G6
https://bugs.dragonflybsd.org/issues/2117 | 2011-08-18T16:40:24Z | pauska
<p>I have a standard HP ProLiant DL380 G6 server with a built-in quad-port Broadcom NIC.</p>
<p>2.10 didn't have the updated bce drivers, so I installed the 2.11.0.673 snapshot to get connectivity.</p>
<p>First, the ACPI error (also present in 2.10):<br />[ACPI Debug] String [0xB] "_TMP Method"</p>
<p>This message repeats 60 times every 10 minutes. I have no idea what it means, <br />googling for it only points me at a NetBSD discussion from 2009.</p>
<p>Secondly, the bce driver (or perhaps atapci?):</p>
<pre><code>interrupt                      total       rate
sio2                               0          0
sio0                               0          0
acpi0                          12125          0
bce0                         1547359         26
bce1/atapci0              2293301893      39875  <-- ouch?
bce2                               0          0
bce3                               0          0
uhci0/ehci0                        1          0
uhci2/uhci4                       34          0
uhci1/uhci3                       44          0
ciss0                         267683          4
swi_siopoll                        0          0
swi_cambio                    267762          4
swi_vm                             0          0
swi_taskq/swi_mp_taskq            25          0
Total                     2295396926      39911</code></pre>
<p>The weird part is that I don't have any ATA devices in use; there's only a CD-ROM. bce1 isn't configured or marked up, only bce0 is in use.</p>
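<p>Not part of the report: a small sketch for narrowing down which device on the shared interrupt line is actually storming, by sampling the counters twice and diffing (the interval is arbitrary):</p>

```shell
vmstat -i > /tmp/intr.before
sleep 10
vmstat -i > /tmp/intr.after
diff /tmp/intr.before /tmp/intr.after   # the storming line shows a huge delta
```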
<p>The deal breaker here is that I can't do anything disk-intensive without getting a crash. I tried updating pkgsrc yesterday; here are two examples:</p>
<pre><code>[snip]
 * [new branch]      dragonfly-2010Q3 -> origin/dragonfly-2010Q3
*** Signal 10
Stop in /usr.
[snip]</code></pre>
<pre><code>[snip]
 * [new branch]      master -> origin/master
Bus error (core dumped)
*** Error code 1
Stop in /usr.
[snip]</code></pre>
<p>While getting these errors, messages like this flooded dmesg:<br />intr 16 at 40001/40000 hz, livelocked limit engaged!<br />[ACPI Debug] String [0xB] "_TMP Method" <br />intr 16 at 882/20000 hz, livelock removed<br />intr 16 at 40001/40000 hz, livelocked limit engaged!<br />pid 34805 (git), uid 0: exited on signal 10 (core dumped)<br />intr 16 at 3225/20000 hz, livelock removed<br />intr 16 at 40001/40000 hz, livelocked limit engaged!<br />intr 16 at 751/20000 hz, livelock removed<br />intr 16 at 40001/40000 hz, livelocked limit engaged!<br />intr 16 at 765/20000 hz, livelock removed<br />intr 16 at 40001/40000 hz, livelocked limit engaged!<br />intr 16 at 795/20000 hz, livelock removed<br />[ACPI Debug] String [0xB] "_TMP Method"</p>
<p>I'm not familiar with debugging this, so please let me know if you need more info. I can also put the server in the DMZ and give a developer SSH access if needed.</p>

DragonFlyBSD - Bug #2071 (New): Panic on assertion: (int)(flg->seq - seq) > 0 in hammer_flusher_f...
https://bugs.dragonflybsd.org/issues/2071 | 2011-05-14T19:20:34Z | vsrinivas (vsrinivas@ops101.org)
<p>ad10: TIMEOUT - WRITE_DMA48 retrying (1 retry left) LBA=1500424384<br />ad10: FAILURE - WRITE_DMA48 status=51<READY,DSC,ERROR> error=10<NID_NOT_FOUND> LBA=1500424384<br />HAMMER(NEWTANK): Critical error inode=-1 error=5 while flushing meta-data<br />HAMMER(NEWTANK): Forcing read-only mode<br />hammer: debug: forcing async flush ip 000000032433ccec<br />hammer: debug: forcing async flush ip 000000032433ccec<br />panic: assertion: (int)(flg->seq - seq) > 0 in hammer_flusher_flush<br />Trace beginning at frame 0xcbfadd10<br />panic(ffffffff,c07ace20,c064b39b,cbfadd40,cbfa00fc) at panic+0x101<br />panic(c064b39b,c067760e,c06353b6,0,cbfa0108) at panic+0x101<br />hammer_flusher_master_thread(cbfa0000,0,0,0,0) at hammer_flusher_master_thread+0x14e<br />lwkt_exit() at lwkt_exit<br />Uptime: 5d16h20m44s<br />Physical memory: 502 MB<br />Dumping 205 MB: 190 174 158 142 126 110 94 78 62 46 30 14</p>
<p>After an inode failure, HAMMER panicked on an assertion in hammer_flusher_flush.<br />ad10 was a WD EARS 1.5TB disk attached to a Silicon Image 3114 controller, running via nata. The only I/O to the disk was a mirror-stream.</p>
<p>Core and kernel are on leaf.dragonflybsd.org, /home/vsrinivas/hammerpanic.</p>

DragonFlyBSD - Bug #1920 (New): system hangs
https://bugs.dragonflybsd.org/issues/1920 | 2010-11-22T16:59:00Z | zhtw (root@zta.lk)
<p>System hangs once in a while leaving on the console these messages:</p>
<p>====<br />pfr_unroute_kentry: delete failed<br />pfr_unroute_kentry: delete failed<br />pfr_unroute_kentry: delete failed<br />pfr_unroute_kentry: delete failed<br />pfr_unroute_kentry: delete failed<br />pfr_unroute_kentry: delete failed<br />pfr_unroute_kentry: delete failed<br />pfr_unroute_kentry: delete failed</p>
<p>panic: IP header no in one mbuf<br />mp_lock = ffffffff; cpuid=1<br />Trace beginning at frame 0xffffffe03d9b0a60<br />panic() at panic+0x239<br />panic() at panic+0x239<br />ip_input() at ip_input+0x153<br />ip_input_handler() at ip_input_handler+0xd<br />netmsg_service_lookp() at netmsg_service_loop_0x6f<br />Debugger("panic")</p>
<p>CPU1 stopping CPU: 0x00000001<br />stopped<br />Stopped at Debugger+0x39 movh $0,0x387060(%rid)
====</p>
<p>I couldn't grab this from the screen and copied it by hand, so there may be typos.</p>
<p># uname -a<br />DragonFly chinua.zzz.umc8.ru 2.8-RELEASE DragonFly v2.8.2.38.gb1139-RELEASE #2: Mon Nov 22 10:06:11 MSK 2010 root@chinua.zzz.umc8.ru:/usr/obj/usr/src/sys/X86_64_GENERIC_SMP x86_64</p>
<p>The dmesg.boot is attached.</p>

DragonFlyBSD - Bug #1185 (New): need a tool to merge changes into /etc
https://bugs.dragonflybsd.org/issues/1185 | 2008-12-20T07:47:08Z | wa1ter
<p>I run mergemaster occasionally because it catches files that make upgrade misses. Today it missed about half a dozen or so, including aliases, ftpusers, /usr/Makefile, and maybe a few others I can't remember.</p>
<p>Thanks.</p>

DragonFlyBSD - Bug #884 (In Progress): Performance/memory problems under filesystem IO load
https://bugs.dragonflybsd.org/issues/884 | 2007-12-14T18:34:28Z | hasso
<p>While testing a drive with dd I noticed that there are serious performance problems. Programs which need disk access block for 10 or more seconds. Sometimes they don't continue their work until dd is finished. Raw disk access (i.e. not writing to a file, but directly to the disk) is reported to be OK (I can't test it myself).</p>
<p>All tests are done with this command:<br />dd if=/dev/zero of=./file bs=4096k count=1000</p>
<p>Syncing after each dd helps to reproduce it more reliably (cache?).</p>
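<p>Putting the reporter's recipe together, the reproduction loop would look roughly like this (the file name and iteration count are arbitrary):</p>

```shell
# Repeat write + sync a few times; watch other processes stall meanwhile.
for i in 1 2 3; do
    dd if=/dev/zero of=./file bs=4096k count=1000
    sync
done
```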
<p>There is one more strange thing in running these tests. I looked at memory <br />stats in top before and after running dd.</p>
<p>Before:<br />Mem: 42M Active, 40M Inact, 95M Wired, 304K Cache, 53M Buf, 795M Free<br />After:<br />Mem: 70M Active, 679M Inact, 175M Wired, 47M Cache, 109M Buf, 1752K Free</p>
<p>And as a side effect, I can't get my network interfaces up any more after running dd: "em0: Could not setup receive structures".</p>

DragonFlyBSD - Bug #599 (New): 1.9.0 reproducible panic
https://bugs.dragonflybsd.org/issues/599 | 2007-04-11T10:24:26Z | pavalos
<p>Here's a panic I'm getting with some pretty serious network (www) load, then <br />doing a netstat -an:</p>
<p>Unread portion of the kernel message buffer:<br />panic: m_copydata, negative off -1<br />mp_lock = 00000000; cpuid = 0; lapic.id = 00000000<br />boot() called on cpu#0</p>
<p>syncing disks... 5<br />done<br />Uptime: 12d22h0m32s</p>
<pre><code>(kgdb) bt
#0  dumpsys () at thread.h:83
#1  0xc01954bb in boot (howto=256) at /usr/src/sys/kern/kern_shutdown.c:370
#2  0xc01957c0 in panic (fmt=Variable "fmt" is not available.
) at /usr/src/sys/kern/kern_shutdown.c:767
#3  0xc01c3a32 in m_copydata (m=0x0, off=0, len=0, cp=0xee9534b0 "\001\001\b\n\006¦*$\035\bͬ") at /usr/src/sys/kern/uipc_mbuf.c:1014
#4  0xc020fc25 in tcp_output (tp=0xdae0c720) at /usr/src/sys/netinet/tcp_output.c:690
#5  0xc02152bf in tcp_timer_persist (xtp=0xdae0c720) at /usr/src/sys/netinet/tcp_timer.c:363
#6  0xc01a6423 in softclock_handler (arg=0xc0386a80) at /usr/src/sys/kern/kern_timeout.c:307
#7  0xc019d037 in lwkt_deschedule_self (td=Variable "td" is not available.
) at /usr/src/sys/kern/lwkt_thread.c:207
Previous frame inner to this frame (corrupt stack?)</code></pre>
<p>The kernel and vmcore is being uploaded to leaf. The source is from March 28.</p>
<p>--Peter</p>