Bug #3096

VM bug in pmap code.

Added by arcade@b1t.name about 7 years ago. Updated over 3 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Category:
VM subsystem
Target version:
Start date:
11/02/2017
Due date:
% Done:

0%

Estimated time:

Description

Just got a core dump. The code is from git, just before the compatibility removal.


Files

core.txt.48 (199 KB) arcade@b1t.name, 11/02/2017 02:59 AM
core.txt.49 (197 KB) arcade@b1t.name, 11/09/2017 05:11 AM
core.txt.50 (276 KB) arcade@b1t.name, 11/09/2017 05:11 AM
Actions #1

Updated by swildner about 7 years ago

Is that master or release? Can you try with latest code?

Actions #2

Updated by swildner about 7 years ago

This should be fixed by c5030460c6 in master.

Actions #3

Updated by dillon about 7 years ago

Oops, I gave swildner bad info. It isn't fixed in c503.

Make sure that machdep.pmap_mmu_optimize is turned off (it should be off by default). We've gotten sporadic reports of this particular panic and have not yet tracked down the reason for it. Nobody can reproduce it reliably, which makes tracking the problem down difficult. So if you can find a reproduction case for it, we definitely want to know!

(Also, per swildner's first comment, tell us whether you are running master or release.)

-Matt
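For reference, the setting mentioned above can be checked and pinned down with the usual sysctl tooling. This is only a sketch of the generic sysctl workflow on DragonFly/BSD systems, not something prescribed in this ticket; treat the /etc/sysctl.conf persistence step as a configuration fragment.

```shell
# Check the current value (should print 0, i.e. disabled by default):
sysctl machdep.pmap_mmu_optimize

# Disable it for the running system if it turns out to be on:
sysctl machdep.pmap_mmu_optimize=0

# Persist the setting across reboots via /etc/sysctl.conf:
echo 'machdep.pmap_mmu_optimize=0' >> /etc/sysctl.conf
```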

Actions #4

Updated by arcade@b1t.name about 7 years ago

I'm on master. I haven't hit that one again since.

machdep.pmap_mmu_optimize: 0

I just remember the system was extremely slow (probably due to hammer2 and a low-disk-space situation) and that I was using swap heavily. Will try to reproduce.

Actions #5

Updated by arcade@b1t.name about 7 years ago

Whoa, something changed. Now I'm getting it all the time. The first one was just after a reboot, the second while removing a few gigabytes from tmpfs.

Actions #6

Updated by arcade@b1t.name almost 7 years ago

No more cores so far. I think I found a possible trigger for it, but that's a sad story. I'm an Enlightenment user and I'm still using it (even though it had been removed from dports). The last time everything started failing, I traced it down to /usr/local/lib/ecore/system/upower/v-1.18/module.so. When that file was present, Terminology crashed on start almost every time (except when started before E17; it still crashed, just not as quickly), and a core dump happened when I switched to another window. Since E17 is modular, all I had to do was remove the file, and everything was magically fixed!

Actions #7

Updated by arcade@b1t.name almost 7 years ago

Happened again a few times when trying to build ports, probably during package compression. The dump was too large to fit into the swap partition. I'll try a few more times; this time it's definitely not tied to hammer2.

Actions #8

Updated by arcade@b1t.name about 6 years ago

I tried removing all tmpfs usage from my host, and it looks like that helped. The system is not crashing or hanging any more; a week without a reboot is normal now. Bringing tmpfs back makes the system unstable under heavy load, when I try to push something large there.

Actions #9

Updated by liweitianux over 5 years ago

  • Status changed from New to Feedback

Hi, the VM subsystem has been significantly improved recently (the improvements were also released in 5.6). Are you running the latest DragonFly BSD, and do you still see the same issues? Thank you.

Actions #10

Updated by arcade@b1t.name over 3 years ago

  • Status changed from Feedback to Closed

I guess I'll close this for now; the VM subsystem has drastically improved over time.
