Bug #2756

Hit kernel panic while running hammer show cmd

Added by tkusumi about 9 years ago. Updated about 9 years ago.

Status: Closed
Priority: High
Assignee:
Category: Kernel
Target version:
Start date: 01/02/2015
Due date:
% Done: 0%
Estimated time:

Description

Hit a kernel panic while running the hammer show command. All I did was:

  1. uname -r
    4.0-RELEASE
  2. hammer -f /dev/serno/xxxxxxxx.s1d show > show.out

where /dev/serno/xxxxxxxx.s1d is the volume of the root HAMMER filesystem, with enough space left. It runs as a VirtualBox guest on x86_64. The panic happens whenever show.out grows to around 250MB.

  1. df -h
    Filesystem Size Used Avail Capacity Mounted on
    ROOT 74G 8.2G 66G 11% /
    ...

x/i says it died at the mov at dscheck+0x8b (ffffffff80618025):
ffffffff80618025: 44 8b 7b 0c mov 0xc(%rbx),%r15d
ffffffff80618029: 44 3b 7d b8 cmp -0x48(%rbp),%r15d
ffffffff8061802d: 77 28 ja ffffffff80618057 <dscheck+0xbd>

dscheck() was called during the B-tree lookups done by hammer show: hammer_vop_strategy_read() -> hammer_ip_first() -> hammer_btree_lookup() -> btree_search() -> hammer_cursor_down() -> hammer_get_node() -> hammer_load_node() -> hammer_get_buffer() -> hammer_load_buffer() -> hammer_io_read() -> hammer_cluster_read() -> ... (failed to catch any further)

Looking at the disassembly of /boot/kernel/kernel, this mov appears to be a NULL pointer dereference of ssp at
if (slice >= ssp->dss_nslices)
in the following.

struct bio *
dscheck(cdev_t dev, struct bio *bio, struct diskslices *ssp)
{
        struct buf *bp = bio->bio_buf;
        struct bio *nbio;
        disklabel_t lp;
        disklabel_ops_t ops;
        long nsec;
        u_int64_t secno;
        u_int64_t endsecno;
        u_int64_t slicerel_secno;
        struct diskslice *sp;
        u_int32_t part;
        u_int32_t slice;
        int shift;
        int mask;

        slice = dkslice(dev);
        part = dkpart(dev);

        if (bio->bio_offset < 0) {
                kprintf("dscheck(%s): negative bio_offset %lld\n",
                        devtoname(dev), (long long)bio->bio_offset);
                goto bad;
        }
        if (slice >= ssp->dss_nslices) {
                kprintf("dscheck(%s): slice too large %d/%d\n",
                        devtoname(dev), slice, ssp->dss_nslices);
                goto bad;
        }
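
Just to illustrate where it blows up (this is only a sketch built from the symbols visible in the excerpt above, not the actual fix, and not a fix for whatever left ssp NULL in the first place): a defensive check ahead of the dss_nslices test would turn the fault into a diagnosable error path.

        /*
         * Hypothetical guard, sketch only: bail out with a message instead
         * of faulting when no slice information is attached to the device.
         */
        if (ssp == NULL) {
                kprintf("dscheck(%s): no slice information\n", devtoname(dev));
                goto bad;
        }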

Actions #1

Updated by tuxillo about 9 years ago

Hi,

I can't reproduce it:

1) In a vkernel (though it runs DragonFly 4.1) with a normal and a full filesystem:

vk01# df -h
Filesystem Size Used Avail Capacity Mounted on
VKROOT 19G 3.6G 16G 19% /
devfs 1.0K 1.0K 0B 100% /dev
TEST 3.2G 3.2G -7.8M 100% /mnt
vk01# hammer -f /dev/vkd0s1a show > /dev/null
vk01# hammer -f /dev/vkd1s1d show > /dev/null
vk01#

2) In a VMWare VM (DragonFly 4.0) with a bigger filesystem than yours:

nas# df -h /
Filesystem Size Used Avail Capacity Mounted on
ROOT 545G 417G 127G 77% /
nas# hammer -f /dev/da0s1d show > /dev/null
nas#

Do you have any messages in dmesg? Which disk driver are you using in VBox?

Cheers,
Antonio Huete

Actions #2

Updated by tkusumi about 9 years ago

Hi, thanks for the reply.

The XML config file says the following. I haven't really changed the VirtualBox hardware settings from what was already there, but it's IDE, not SATA.

<StorageControllers>
<StorageController name="IDE" type="PIIX4" PortCount="2" useHostIOCache="true" Bootable="true">
<AttachedDevice type="HardDisk" hotpluggable="true" port="0" device="0">

ATA probe from dmesg.

atapci0: <Intel PIIX4 UDMA33 controller> port 0xd000-0xd00f,0x376,0x170-0x177,0x3f6,0x1f0-0x1f7 at device 1.1 on pci0
ata0: <ATA channel 0> on atapci0
disk scheduler: set policy of ad0 to noop
ad0: 81920MB <VBOX HARDDISK 1.0> at ata0-master UDMA33

Actions #3

Updated by tkusumi about 9 years ago

Tried again and was able to reproduce it. For some reason it hits the panic when show.out reaches around 240-250MB.

dmesg output from the console right before it entered the db> prompt:

ad0 FAILURE - device detached
subdisk0: detached
ad0: detached

It was at a different function this time though. x/i says it died at
getmicrouptime+0x15

where the asm at +0x15 (= ffffffff805c3c86) looks like this:
ffffffff805c3c71 <getmicrouptime>:
ffffffff805c3c71: 55 push %rbp
ffffffff805c3c72: 48 89 e5 mov %rsp,%rbp
ffffffff805c3c75: 65 48 8b 0c 25 00 00 mov %gs:0x0,%rcx
ffffffff805c3c7c: 00 00
ffffffff805c3c7e: 8b b1 28 24 00 00 mov 0x2428(%rcx),%esi
ffffffff805c3c84: 89 f6 mov %esi,%esi
ffffffff805c3c86: 48 89 37 mov %rsi,(%rdi)

with the following bt:
getmicrouptime()
devstat_end_transaction()
devstat_end_transaction_buf()
ad_done()
ata_completed()
taskqueue_run()
taskqueue_swi_mp_run()
ithread_handler()

timeval *tvp (%rdi) was NULL, and it died at
mov %rsi,(%rdi)
which corresponds to
tvp->tv_sec = gd->gd_time_seconds;
in the following.

void
getmicrouptime(struct timeval *tvp)
{
        struct globaldata *gd = mycpu;
        sysclock_t delta;

        do {
                tvp->tv_sec = gd->gd_time_seconds;
                delta = gd->gd_hardclock.time - gd->gd_cpuclock_base;
        } while (tvp->tv_sec != gd->gd_time_seconds);
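
Just as an illustration (nothing from the report or the eventual fix): the first argument arrives in %rdi on x86_64, so a NULL tvp handed in by a caller faults exactly on that first store. An assertion at the top of the function would panic with an explicit message instead of trapping, assuming the usual KASSERT macro:

        /*
         * Hypothetical assertion, sketch only: catch a NULL timeval pointer
         * passed by a caller (here the devstat completion path) before the
         * tvp->tv_sec store faults.
         */
        KASSERT(tvp != NULL, ("getmicrouptime: NULL tvp"));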

Not sure if this is really a HAMMER issue. Worth trying with UFS?

Actions #4

Updated by tuxillo about 9 years ago

  • Assignee set to tuxillo

Hi,

There was some work in master that didn't go into the release, because the callout API changed and the NATA locks were adapted to it as well.

commit 15bd3c7353c5ce02776849ca16be00d5088d8734
Author: Matthew Dillon <>
Date: Tue Nov 25 14:11:42 2014 -0800

kernel - Fix boot-time panic in NATA revealed by new callout mechanics

Can you please try to reproduce it with latest master?

- Download latest ISO here: http://avalon.dragonflybsd.org/snapshots/x86_64/DragonFly-x86_64-LATEST-ISO.iso.bz2
- Boot your VM with it
- Mount your HAMMER fs from the LiveCD
- Run your show command redirecting the output to a file like you did before.

Let me know the results.

Cheers,
Antonio Huete

Actions #5

Updated by tkusumi about 9 years ago

Thanks for the latest ISO. I guess 15bd3c7353c5ce02776849ca16be00d5088d8734 is the relevant commit. I couldn't reproduce it with this binary.

This is what I did. I brought in some BSD and Linux kernel repositories/archives, which are good for filling the B-tree with a bunch of inodes, dirents, etc. Without doing this, the output of hammer show (i.e. the size of the B-tree) isn't large enough to reproduce it.

  1. uname -r
    4.1-DEVELOPMENT
  1. git clone git://git.dragonflybsd.org/dragonfly.git
  2. git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
  3. git clone https://github.com/freebsd/freebsd.git
  4. wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.18.tar.gz
  5. wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.17.tar.gz
  6. wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.16.tar.gz
  7. wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.15.tar.gz
  8. tar xzf linux-3.18.tar.gz
  9. tar xzf linux-3.17.tar.gz
  10. tar xzf linux-3.16.tar.gz
  11. tar xzf linux-3.15.tar.gz
  1. df -Th | head -2
    Filesystem Type Size Used Avail Capacity Mounted on
    ROOT hammer 74G 9.7G 65G 13% /

After filling the B-tree with the above files, I tried it again. It didn't reproduce, while it had reproduced 100% of the time in my 4.0 environment.

  1. hammer -f `hammer volume-list /` show > show.out
  2. ls -lh show.out
    -rw-r--r--  1 root wheel 389M Jan 5 10:54 show.out
  3. grep ELM ./show.out | wc -l
    1412014
Actions #6

Updated by tkusumi about 9 years ago

By the way, even in the 4.0 environment it doesn't reproduce with dd, which easily writes well past the size where hammer show got stuck. I guess it's hammer show's full B-tree iteration that happened to hit the ATA driver problem fixed in 15bd3c7353c5ce02776849ca16be00d5088d8734.

  1. dd if=/dev/urandom of=./dd.out
    ^C2289278+0 records in
    2289277+0 records out
    1172109824 bytes transferred in 19.354388 secs (60560418 bytes/sec)
  2. ls -lh ./dd.out
    -rw-r--r--  1 root wheel 1.1G Jan 5 02:19 ./dd.out
Actions #7

Updated by tkusumi about 9 years ago

Actually it did panic with dd as well on 4.0. I found it sitting in a kernel panic a while after I had successfully finished the dd and a sync. I don't know whose I/O this was, but it died at

dsclose+0x3b (ffffffff806183ec)
where the asm at +0x3b is
ffffffff806183ec: 45 39 65 0c cmp %r12d,0xc(%r13)

Again it's a NULL dereference of ssp (= %r13) in dsclose().

void
dsclose(cdev_t dev, int mode, struct diskslices *ssp)
{
        u_int32_t part;
        u_int32_t slice;
        struct diskslice *sp;

        slice = dkslice(dev);
        part = dkpart(dev);
        if (slice < ssp->dss_nslices) {         /* here */

It was in HAMMER's io_complete callback: hammer_io_complete() -> hammer_critical_error() -> hammer_vol_rb_tree_RB_SCAN() -> hammer_adjust_volume_mode() -> vop_close() -> ...

I assume it's the same issue that was fixed by 15bd3c7353c5ce02776849ca16be00d5088d8734.

Actions #8

Updated by tuxillo about 9 years ago

  • Status changed from New to Closed
Actions #9

Updated by tuxillo about 9 years ago

  • Target version changed from 4.0 to 4.2
Actions #10

Updated by tkusumi about 9 years ago

If anyone hits this kernel panic using the latest version of VirtualBox on Linux, try using KVM. I don't hit this issue when running 4.0.3 (which hasn't merged 15bd3c73 yet) as a KVM guest, while I do hit it every day on VirtualBox.

The easiest way to reproduce this kernel panic on VirtualBox is to run hammer show. It usually requires the filesystem to contain a few GB, and cloning some kernel repositories is a good way to fill a freshly newfsed B-tree with a bunch of nodes. If the hammer show completes, that environment probably isn't likely to hit the issue. This is an ATA bug, so you can see it on UFS as well.

If you keep using that environment across reboots, it will eventually bring the filesystem to the point where you can no longer run hammer cleanup or even write files of a few MB, which is a dead end.

hammer show on a KVM guest (see the above posts for VirtualBox):
  1. git clone git://git.dragonflybsd.org/dragonfly.git
  2. git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
  3. hammer volume-list /
    /dev/serno/QM00001.s1d
  4. hammer -f /dev/serno/QM00001.s1d show > out
  5. ls -lh out
    -rw-r--r--  1 root wheel 319M Feb 8 13:57 out
