Bug #2284

another sysctl panic

Added by pavalos about 2 years ago. Updated over 1 year ago.

Status:         Closed
Priority:       Normal
Assignee:       -
Category:       -
Target version: -
Start date:
Due date:
% Done:         0%

Description

2 issues here...

Last night I noticed that the network was kind of hosed on ylem. When I
took a look at netstat -m I noticed this:

448371/460416 mbufs in use (current/max):
320/100000 mbuf clusters in use (current/max)
448687 mbufs and mbuf clusters allocated to data
4 mbufs and mbuf clusters allocated to packet headers
224825 Kbytes allocated to network (52% of mb_map in use)
8949083 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

It seems awfully strange to have 450000 mbufs in use, but only 320
clusters.

I then took a look at some sysctls with sysctl -a, and the box panicked:

Fatal trap 9: general protection fault while in kernel mode
cpuid = 1; lapic->id = 02000000
instruction pointer = 0x8:0xffffffff802d79ab
stack pointer = 0x10:0xffffffe12c307970
frame pointer = 0x10:0xffffffe12c3079c8
code segment = base 0x0, limit 0xfffff, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 39450
current thread = pri 6
trap number = 9
panic: general protection fault
cpuid = 1
Trace beginning at frame 0xffffffe12c307768
panic() at panic+0x1fb 0xffffffff802bdfad
panic() at panic+0x1fb 0xffffffff802bdfad
trap_fatal() at trap_fatal+0x3d5 0xffffffff8046ffb4
trap() at trap+0x608 0xffffffff804708be
calltrap() at calltrap+0x9 0xffffffff8045a5ff
--- trap 0000000000000009, rip = ffffffff802d79ab, rsp = ffffffe12c307970, rbp = ffffffe12c3079c8 ---
sysctl_root() at sysctl_root+0x114 0xffffffff802d79ab
userland_sysctl() at userland_sysctl+0x167 0xffffffff802d7b34
sys___sysctl() at sys___sysctl+0x7d 0xffffffff802d7ea9
syscall2() at syscall2+0x370 0xffffffff80470f61
Xfast_syscall() at Xfast_syscall+0xcb 0xffffffff8045a84b
(null)() at 0x10000000001 0x10000000001

Fatal trap 12: page fault while in kernel mode
cpuid = 1; lapic->id = 02000000
fault virtual address = 0x100000008
fault code = supervisor read data, page not present
instruction pointer = 0x8:0xffffffff8046a719
stack pointer = 0x10:0xffffffe12c307600
frame pointer = 0x10:0xffffffe12c307618
code segment = base 0x0, limit 0xfffff, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 39450
current thread = pri 6
trap number = 12
panic: page fault
cpuid = 1
boot() called on cpu#1
SECONDARY PANIC ON CPU 3 THREAD 0xffffffe0610810f0
Uptime: 16d6h21m36s
Physical memory: 8135 MB
Dumping 4031 MB: 4016
SECONDARY PANIC ON CPU 0 THREAD 0xffffffe060e894f0
...

(kgdb) bt
#0 _get_mycpu () at ./machine/thread.h:69
#1 md_dumpsys (di=<optimized out>) at /usr/src/sys/platform/pc64/x86_64/dump_machdep.c:263
#2 0xffffffff802bd6c5 in dumpsys () at /usr/src/sys/kern/kern_shutdown.c:925
#3 0xffffffff802bdd2b in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:387
#4 0xffffffff802bdfe2 in panic (fmt=0xffffffff804b7acb "%s") at /usr/src/sys/kern/kern_shutdown.c:831
#5 0xffffffff8046ffb4 in trap_fatal (frame=0xffffffe12c307548, eva=<optimized out>) at /usr/src/sys/platform/pc64/x86_64/trap.c:1035
#6 0xffffffff804701c1 in trap_pfault (frame=0xffffffe12c307548, usermode=0) at /usr/src/sys/platform/pc64/x86_64/trap.c:929
#7 0xffffffff80470782 in trap (frame=0xffffffe12c307548) at /usr/src/sys/platform/pc64/x86_64/trap.c:631
#8 0xffffffff8045a5ff in calltrap () at /usr/src/sys/platform/pc64/x86_64/exception.S:188
#9 0xffffffff8046a719 in db_read_bytes (addr=4294967304, size=8, data=0xffffffe12c307628 "") at /usr/src/sys/platform/pc64/x86_64/db_interface.c:244
#10 0xffffffff8025ef55 in db_get_value (addr=4294967304, size=8, is_signed=0) at /usr/src/sys/ddb/db_access.c:58
#11 0xffffffff8046b3b5 in db_nextframe (ip=<optimized out>, fp=<optimized out>) at /usr/src/sys/platform/pc64/x86_64/db_trace.c:234
#12 db_stack_trace_cmd (addr=<optimized out>, have_addr=<optimized out>, count=<optimized out>, modif=<optimized out>)
at /usr/src/sys/platform/pc64/x86_64/db_trace.c:440
#13 0xffffffff8046b577 in print_backtrace (count=741373480) at /usr/src/sys/platform/pc64/x86_64/db_trace.c:452
#14 0xffffffff802bdfad in panic (fmt=0xffffffff804b7acb "%s") at /usr/src/sys/kern/kern_shutdown.c:820
#15 0xffffffff8046ffb4 in trap_fatal (frame=0xffffffe12c3078b8, eva=<optimized out>) at /usr/src/sys/platform/pc64/x86_64/trap.c:1035
#16 0xffffffff804708be in trap (frame=0xffffffe12c3078b8) at /usr/src/sys/platform/pc64/x86_64/trap.c:768
#17 0xffffffff8045a5ff in calltrap () at /usr/src/sys/platform/pc64/x86_64/exception.S:188
#18 0xffffffff802d79ab in sysctl_root (oidp=<optimized out>, arg1=0xffffffe12c307ae8, arg2=5, req=0xffffffe12c3079e8) at /usr/src/sys/kern/kern_sysctl.c:1201
#19 0xffffffff802d7b34 in userland_sysctl (name=0xffffffe12c307ae8, namelen=5, old=<optimized out>, oldlenp=0x0, inkernel=<optimized out>, new=0x0, newlen=0,
retval=0xffffffe12c307ae0) at /usr/src/sys/kern/kern_sysctl.c:1283
#20 0xffffffff802d7ea9 in sys___sysctl (uap=0xffffffe12c307b68) at /usr/src/sys/kern/kern_sysctl.c:1223
#21 0xffffffff80470f61 in syscall2 (frame=0xffffffe12c307c18) at /usr/src/sys/platform/pc64/x86_64/trap.c:1248
#22 0xffffffff8045a84b in Xfast_syscall () at /usr/src/sys/platform/pc64/x86_64/exception.S:323
#23 0x000000000000002b in ?? ()
Backtrace stopped: previous frame inner to this frame (corrupt stack?)

I did get a good dump, so it's available on ylem in /var/crash. Let me
know if you need access to view it.

--Peter

noname (199 Bytes) pavalos, 01/21/2012 11:14 AM


Related issues

Related to Bug #2402: Showstopper panics for Release 3.2 New 08/15/2012
Blocks Bug #2286: 3.0 release catchall ticket Closed 01/22/2012

History

#1 Updated by vsrinivas about 2 years ago

If you get a chance, can you upload the core/kernel to leaf so we can take a look at it?

Thanks!

#2 Updated by alexh about 2 years ago

I have access to ylem, but not to its /var/crash.

Uploading it to leaf would certainly be nice, but I'd settle for read access on /var/crash on ylem.

Cheers,
Alex

#3 Updated by pavalos over 1 year ago

  • Status changed from New to Closed

Well I don't know what happened to that vmcore, but I can't find it any more. Closing.
