Bug #227

vinum panic on -devel

Added by corecode almost 18 years ago. Updated over 17 years ago.

Status: Closed
Priority: Normal

Description

hey,

as rumko already mentioned on the irc channel (but failed to post here),
there is a panic in vinum:

reproduce: create new vinum volume, then try to newfs -v it.

gives this backtrace:

#12 0xc016496e in dev_dstrategy (dev=0xcbecdec0, bio=0xd2280368)
    at /usr/build/src/sys/kern/kern_device.c:214
#13 0xd61ce07d in launch_requests (rq=0x1, reviveok=0)
    at /usr/build/src/sys/dev/raid/vinum/vinumrequest.c:437
#14 0xd61cdeed in vinumstart (dev=0xcbed04f0, bio=0xc249e868, reviveok=0)
    at /usr/build/src/sys/dev/raid/vinum/vinumrequest.c:305
#15 0xd61cdc6f in vinumstrategy (dev=0xcbed04f0, bio=0xc249e830)
    at /usr/build/src/sys/dev/raid/vinum/vinumrequest.c:174
#16 0xc016475b in cdevsw_putport (port=0xc0310940, lmsg=0xd61e2a20)
    at /usr/build/src/sys/kern/kern_device.c:98
#17 0xc017f97e in lwkt_domsg (port=0xcbecdec0, msg=0xd61e2a20)
    at msgport2.h:92
#18 0xc0164a0c in dev_dstrategy (dev=0xcbed04f0, bio=0xc249e830)
    at /usr/build/src/sys/kern/kern_device.c:225
#19 0xc0171b62 in physio (dev=0xcbed04f0, uio=0xd61e2c98, ioflag=1)
    at /usr/build/src/sys/kern/kern_physio.c:109
#20 0xc01647f8 in cdevsw_putport (port=0xc0310940, lmsg=0xd61e2af4)
    at /usr/build/src/sys/kern/kern_device.c:127
#21 0xc017f97e in lwkt_domsg (port=0xcbecdec0, msg=0xd61e2af4)
    at msgport2.h:92
#22 0xc0164c8f in dev_dwrite (dev=0xd2280338, uio=0x0, ioflag=0)
    at /usr/build/src/sys/kern/kern_device.c:322
#23 0xc01cd967 in spec_write (ap=0xd61e2b98)
    at /usr/build/src/sys/vfs/specfs/spec_vnops.c:356
#24 0xc0237f06 in ufsspec_write (ap=0xd61e2b98)
    at /usr/build/src/sys/vfs/ufs/ufs_vnops.c:1920
#25 0xc023865e in ufs_vnoperatespec (ap=0xcbecdec0)
    at /usr/build/src/sys/vfs/ufs/ufs_vnops.c:2480
#26 0xc01cb0f9 in vop_write (ops=0xcbecdec0, vp=0xd5cc4800, uio=0x0, ioflag=0, cred=0x0)
    at /usr/build/src/sys/kern/vfs_vopops.c:367
#27 0xc01ca789 in vn_write (fp=0xc23893c0, uio=0xd61e2c98, cred=0x0, flags=0)
    at /usr/build/src/sys/kern/vfs_vnops.c:644
#28 0xc01937ac in dofilewrite (fd=0, fp=0xc23893c0, auio=0xd61e2c98, flags=0, res=0x0)
    at file2.h:72
#29 0xc01936c3 in kern_pwritev (fd=3, auio=0x0, flags=0, res=0x0)
    at /usr/build/src/sys/kern/sys_generic.c:449
#30 0xc01933dc in sys_write (uap=0xd61e2cf4)
    at /usr/build/src/sys/kern/sys_generic.c:325

(kgdb) fra 12
#12 0xc016496e in dev_dstrategy (dev=0xcbecdec0, bio=0xd2280368)
    at /usr/build/src/sys/kern/kern_device.c:214
214         KKASSERT(bio->bio_buf->b_cmd != BUF_CMD_DONE);
(kgdb) p *bio
$1 = {bio_act = {tqe_next = 0x0, tqe_prev = 0x0}, bio_track = 0x0,
  bio_prev = 0x0, bio_next = 0x0, bio_buf = 0x0,
  bio_done = 0xd61c8bc0 <complete_rqe>, bio_offset = 44974435328,
  bio_driver_info = 0xcbecdec0,
  bio_caller_info1 = {ptr = 0x0, offset = 0, index = 0,
    cluster_head = 0x0, cluster_parent = 0x0},
  bio_caller_info2 = {ptr = 0x0, offset = 0, index = 0, cluster_tail = 0x0}}
(kgdb) p *bio->bio_buf
Cannot access memory at address 0x0

cheers
simon

#1

Updated by dillon almost 18 years ago

:hey,
:
:as rumko already mentioned on the irc channel (but failed to post here),
:there is a panic in vinum:
:
:reproduce: create new vinum volume, then try to newfs -v it.
:
:gives this backtrace:
:
:[backtrace snipped]

I would be amazed if vinum still worked considering all the manual
buffer cache manipulation it does.
It should be possible to get it working again, but I couldn't create 
a vinum configuration to save my life so if you provide one based on
a few simple IDE disk partitions I will try to reproduce the problem.
-Matt
#2

Updated by corecode almost 18 years ago

Matthew Dillon wrote:

It should be possible to get it working again, but I couldn't create
a vinum configuration to save my life so if you provide one based on
a few simple IDE disk partitions I will try to reproduce the problem.

sure thing:

1. use disklabel to create a partition with type "vinum"; I created
/dev/ad4s1l
2. cat > vinum.conf <<EOF
drive honk device /dev/ad4s1l
volume test
plex org concat
subdisk length 0 drive honk
EOF
3. # vinum
4. vinum -> config vinum.conf
5. vinum -> makedev
6. # newfs -v /dev/vinum/test # will crash

cheers
simon

#3

Updated by dillon almost 18 years ago

Ok, I think I've fixed it.

Note that vinum might still have issues, in particular because of the
recent buffer cache work that converted all the block numbers to
64-bit byte offsets. I dutifully went through all the drivers, but
vinum is basically untested.
It would be great if some vinum testing could be done.
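For reference, the change in question replaces driver-visible block
numbers with 64-bit byte offsets carried on the bio. The conversion
pattern (a sketch; DEV_BSHIFT is the 512-byte sector shift, the same
one used in the patches further down in this thread) is:

    /* old style: the driver was handed a sector number */
    bp->b_blkno = blkno;

    /* new style: the bio carries a 64-bit byte offset */
    bp->b_bio1.bio_offset = (off_t)blkno << DEV_BSHIFT;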
-Matt
#4

Updated by joerg almost 18 years ago

On Wed, Jul 05, 2006 at 03:48:55PM -0700, Matthew Dillon wrote:

It would be great if some vinum testing could be done.

If you have some spare drives, try the following setup:

--- cut ---

drive v0 device /dev/ad0s1e
drive v1 device /dev/ad1s1e
drive v2 device /dev/ad2s1e
drive v3 device /dev/ad3s1e

volume test1
plex org striped 256s
sd drive v0 len 5g
sd drive v1 len 5g
plex org striped 256s
sd drive v2 len 5g
sd drive v3 len 5g

volume test2
plex org concat
sd drive v0 len 5g
sd drive v1 len 5g

volume test3
plex org raid5 256s
sd drive v0 len 5g
sd drive v1 len 5g
sd drive v2 len 5g
sd drive v3 len 5g

--- cut ---

That should cover the more important code paths.
A good test pattern would be writing the block number into each disk
block and reading it back. Sorry, I don't have the resources right now
:-)
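
An (untested) userland sketch of that pattern, assuming the volume
shows up as /dev/vinum/test1 and is 10 GB (adjust dev and nblocks to
taste):

--- cut ---

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BSIZE 512

int
main(void)
{
    const char *dev = "/dev/vinum/test1";       /* assumed device path */
    unsigned char block[BSIZE];
    uint64_t blkno, stored;
    uint64_t nblocks = 10ULL * 1024 * 1024 * 1024 / BSIZE;  /* assumed size */
    int fd;

    if ((fd = open(dev, O_RDWR)) < 0) {
        perror(dev);
        return 1;
    }
    for (blkno = 0; blkno < nblocks; blkno++) { /* stamp every block */
        memset(block, 0, BSIZE);
        memcpy(block, &blkno, sizeof(blkno));
        if (pwrite(fd, block, BSIZE, (off_t)blkno * BSIZE) != BSIZE) {
            perror("pwrite");
            return 1;
        }
    }
    for (blkno = 0; blkno < nblocks; blkno++) { /* read back and verify */
        if (pread(fd, block, BSIZE, (off_t)blkno * BSIZE) != BSIZE) {
            perror("pread");
            return 1;
        }
        memcpy(&stored, block, sizeof(stored));
        if (stored != blkno)
            printf("mismatch at block %llu: got %llu\n",
                (unsigned long long)blkno, (unsigned long long)stored);
    }
    close(fd);
    return 0;
}

--- cut ---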

Joerg

#5

Updated by corecode almost 18 years ago

[snip]

Using this config, here's the next panic:
Unread portion of the kernel message buffer:
<6>vinum: test3.p0.s1 is initializing by force
<6>vinum: test3.p0 is initializing
<6>vinum: test3.p0.s0 is initializing by force
panic: assertion: bio->bio_buf->b_cmd != BUF_CMD_DONE in dev_dstrategy

#3 0xc0164993 in dev_dstrategy (dev=0xcbecdec0, bio=0xd229ac88)
    at /usr/build/src/sys/kern/kern_device.c:214
#4 0xd5bffd4f in sdio (bio=0xc24f9430)
    at /usr/build/src/sys/dev/raid/vinum/vinumrequest.c:991
#5 0xd5c00e99 in initsd (sdno=6, verify=0)
    at /usr/build/src/sys/dev/raid/vinum/vinumrevive.c:560
#6 0xd5c01fa3 in start_object (data=0xd5bc4400)
    at /usr/build/src/sys/dev/raid/vinum/vinumstate.c:885
#7 0xd5c02410 in setstate (msg=0xd5bc4400)
    at /usr/build/src/sys/dev/raid/vinum/vinumstate.c:1057
#8 0xd5bfcbb8 in vinumioctl (dev=0xcbed03b8, cmd=3238020684, data=0xd5bc4400 "\006", flag=3, td=0xd5627900)
    at /usr/build/src/sys/dev/raid/vinum/vinumioctl.c:215

I think the fix is (presumably the bzero() leaves sbp->b.b_cmd zeroed,
i.e. BUF_CMD_DONE, which is what trips the KKASSERT):

diff -r 8d0206a990f4 sys/dev/raid/vinum/vinumrequest.c
--- a/sys/dev/raid/vinum/vinumrequest.c Thu Jul 06 13:16:25 2006 +0200
+++ b/sys/dev/raid/vinum/vinumrequest.c Thu Jul 06 15:22:58 2006 +0200
@@ -947,6 +947,7 @@ sdio(struct bio *bio)
     sddev = DRIVE[sd->driveno].dev;            /* device */
     bzero(sbp, sizeof(struct sdbuf));          /* start with nothing */
     sbp->b.b_flags = bp->b_flags | B_PAGING;
+    sbp->b.b_cmd = bp->b_cmd;
     sbp->b.b_bcount = bp->b_bcount;            /* number of bytes to transfer */
     sbp->b.b_resid = bp->b_resid;              /* and amount waiting */
     sbp->b.b_data = bp->b_data;                /* data buffer */

will test this in a few minutes.

cheers
simon

#6

Updated by dillon almost 18 years ago

:#4 0xd5bffd4f in sdio (bio=0xc24f9430) at
:/usr/build/src/sys/dev/raid/vinum/vinumrequest.c:991
:
:i think the fix is
:
: sbp->b.b_flags = bp->b_flags | B_PAGING;
:+ sbp->b.b_cmd = bp->b_cmd;
: sbp->b.b_bcount = bp->b_bcount; /* number of bytes to transfer */
:
:cheers
: simon
:

That looks like the right fix to me too, go ahead and commit it as
soon as you test it.
-Matt
Matthew Dillon
#7

Updated by corecode almost 18 years ago

Matthew Dillon wrote:

That looks like the right fix to me too, go ahead and commit it as
soon as you test it.

alright. panic()ing my way through vinum, I now reached:

panic: brelse: inappropriate B_PAGING or B_CLUSTER bp 0xc2622a00

#3 0xc01afec5 in brelse (bp=0xc2622a00) at /usr/build/src/sys/kern/vfs_bio.c:971
#4 0xd5731594 in revive_block (sdno=2) at /usr/build/src/sys/dev/raid/vinum/vinumrevive.c:225

well, of course B_PAGING is set, because revive_block() sets it. What's the correct way to release this buf? Should I simply unset B_PAGING?

cheers
simon

#8

Updated by dillon almost 18 years ago

:
:Matthew Dillon wrote:
:> That looks like the right fix to me too, go ahead and commit it as
:> soon as you test it.
:
:alright. panic()ing my way through vinum, I now reached:
:
:panic: brelse: inappropriate B_PAGING or B_CLUSTER bp 0xc2622a00
:
:#3 0xc01afec5 in brelse (bp=0xc2622a00) at /usr/build/src/sys/kern/vfs_bio.c:971
:#4 0xd5731594 in revive_block (sdno=2) at /usr/build/src/sys/dev/raid/vinum/vinumrevive.c:225
:
:well, of course B_PAGING is set, because revive_block() sets it. What's the correct way to release this buf? Should I simply unset B_PAGING?
:
:cheers
: simon

B_PAGING should not be set on getblk()'d buffers, so simply do not
set B_PAGING. In fact, vinum shouldn't even be setting the flags
unconditionally like it does right there.

Remove this entirely:

    bp->b_flags = B_PAGING;

Change:

    bp->b_flags = B_ORDERED | B_PAGING;

To:

    bp->b_flags |= B_ORDERED;

This happens in a couple of places in that file.

That should work.  Really the correct way is probably to use getpbuf()
instead of geteblk(), but I'd like to make as few functional changes
as possible to get vinum working again.
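
In outline, the pbuf-based variant would look something like this (a
sketch only; trypbuf()/relpbuf() are the counted pbuf allocator calls,
and vinum_conf.physbufs, Malloc() and Free() are the vinum-side pieces
that the patch further down in this thread introduces):

    struct buf *bp;

    bp = trypbuf(&vinum_conf.physbufs);     /* grab a pbuf; may fail */
    if (bp == NULL)
        return ENOMEM;
    bp->b_data = Malloc(size);              /* private data area */
    bp->b_bcount = size;                    /* pbufs default these to MAXBSIZE */
    bp->b_resid = bp->b_bcount;
    /* ... set b_cmd and the bio fields, start the I/O, biowait(bp) ... */
    Free(bp->b_data);                       /* instead of brelse() */
    relpbuf(bp, &vinum_conf.physbufs);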
-Matt
Matthew Dillon
#9

Updated by corecode almost 18 years ago

Matthew Dillon wrote:

That should work. Really the correct way is probably to use getpbuf()
instead of geteblk(), but I'd like to make as few functional changes
as possible to get vinum working again.

Of course I tried to find out what the "really correct way" should look like. Now I don't get any panics and it seems that it might work, but a completely different, yet seemingly related panic occurred (it happened several times, not necessarily directly coupled):

dev = #ad/0x20003, block = 11896, fs = /var
panic: ffs_blkfree: freeing free block

I can't get a backtrace, because:

(kgdb) bt
#0 0x00000000 in ?? ()

39 Thread 0xce2ec500 (PID=-2: syncer) 0x00000000 in ?? ()

[something is wrong with kgdb here, I will look into this later]

/var was fsck -f'ed directly before. This time it happened when I start'ed vinum (running the following patch).

What I wonder is: is it my patch which is fucking up stuff, is it vinum doing wrong offset calculations (and thus overwriting something in /var) or is it "just" a bug in ffs?

thanks
simon

diff -r b16b8dd8a0d1 sys/dev/raid/vinum/vinum.c
--- a/sys/dev/raid/vinum/vinum.c        Fri Jul 07 15:03:06 2006 +0200
+++ b/sys/dev/raid/vinum/vinum.c        Fri Jul 07 15:04:51 2006 +0200
@@ -97,6 +97,8 @@ vinumattach(void *dummy)
     dqend = NULL;
 
     cdevsw_add(&vinum_cdevsw, 0, 0);            /* add the cdevsw entry */
+
+    vinum_conf.physbufs = nswbuf / 2 + 1;       /* maximum amount of physical bufs */
 
     /* allocate space: drives... */
     DRIVE = (struct drive *) Malloc(sizeof(struct drive) * INITIAL_DRIVES);
diff -r b16b8dd8a0d1 sys/dev/raid/vinum/vinumrevive.c
--- a/sys/dev/raid/vinum/vinumrevive.c  Fri Jul 07 15:03:06 2006 +0200
+++ b/sys/dev/raid/vinum/vinumrevive.c  Fri Jul 07 15:46:07 2006 +0200
@@ -140,11 +140,10 @@ revive_block(int sdno)
        if (bp == NULL)                         /* no buffer space */
            return ENOMEM;                      /* chicken out */
     } else {                                   /* data block */
-       crit_enter();
-       bp = geteblk(size);                     /* Get a buffer */
-       crit_exit();
+       bp = trypbuf(&vinum_conf.physbufs);     /* Get a buffer */
        if (bp == NULL)
            return ENOMEM;
+       bp->b_data = Malloc(size);
     }
     /*
      * Amount to transfer: block size, unless it
@@ -164,7 +163,6 @@ revive_block(int sdno)
     else                                       /* it's an unattached plex */
        dev = VINUM_PLEX(sd->plexno);           /* create the device number */
 
-    bp->b_flags = B_PAGING;                    /* either way, read it */
     bp->b_cmd = BUF_CMD_READ;
     vinumstart(dev, &bp->b_bio1, 1);
     biowait(bp);
@@ -176,7 +174,7 @@ revive_block(int sdno)
     /* Now write to the subdisk */
     {
        dev = VINUM_SD(sdno);                   /* create the device number */
-       bp->b_flags = B_ORDERED | B_PAGING;     /* and make this an ordered write */
+       bp->b_flags |= B_ORDERED;               /* and make this an ordered write */
        bp->b_cmd = BUF_CMD_WRITE;
        bp->b_resid = bp->b_bcount;
        bp->b_bio1.bio_offset = (off_t)sd->revived << DEV_BSHIFT;       /* write it to here */
@@ -219,11 +217,8 @@ revive_block(int sdno)
            sd->waitlist = sd->waitlist->next;  /* and move on to the next */
        }
     }
-    if (bp->b_qindex == 0) {                   /* not on a queue, */
-       bp->b_flags |= B_INVAL;
-       bp->b_flags &= ~B_ERROR;
-       brelse(bp);                             /* is this kosher? */
-    }
+    Free(bp->b_data);
+    relpbuf(bp, &vinum_conf.physbufs);
     return error;
 }
@@ -321,9 +316,8 @@ parityops(struct vinum_ioctl_msg *data)
        }
        if (pbp->b_flags & B_ERROR)
            reply->error = pbp->b_error;
-       pbp->b_flags |= B_INVAL;
-       pbp->b_flags &= ~B_ERROR;
-       brelse(pbp);
+       Free(pbp->b_data);
+       relpbuf(pbp, &vinum_conf.physbufs);
        unlockrange(plexno, lock);
     }
@@ -397,18 +391,16 @@ parityrebuild(struct plex *plex,
     for (sdno = 0; sdno < bufcount; sdno++) {  /* for each subdisk */
        if ((sdno != psd) || (op != rebuildparity)) {
            /* Get a buffer header and initialize it. */
-           crit_enter();
-           bpp[sdno] = geteblk(mysize);        /* Get a buffer */
+           bpp[sdno] = trypbuf(&vinum_conf.physbufs);  /* Get a buffer */
            if (bpp[sdno] == NULL) {
                while (sdno-- > 0) {            /* release the ones we got */
-                   bpp[sdno]->b_flags |= B_INVAL;
-                   brelse(bpp[sdno]);          /* give back our resources */
+                   Free(bpp[sdno]->b_data);
+                   relpbuf(bpp[sdno], &vinum_conf.physbufs);   /* give back our resources */
                }
-               crit_exit();
                printf("vinum: can't allocate buffer space for parity op.\n");
                return NULL;                    /* no bpps */
            }
-           crit_exit();
+           bpp[sdno]->b_data = Malloc(mysize);
            if (sdno == psd)
                parity_buf = (int *) bpp[sdno]->b_data;
            if (sdno == newpsd)                 /* the new one? */
@@ -416,7 +408,6 @@ parityrebuild(struct plex *plex,
            else
                bpp[sdno]->b_bio1.bio_driver_info = VINUM_SD(plex->sdnos[sdno]);        /* device number */
            bpp[sdno]->b_cmd = BUF_CMD_READ;    /* either way, read it */
-           bpp[sdno]->b_flags = B_PAGING;
            bpp[sdno]->b_bcount = mysize;
            bpp[sdno]->b_resid = bpp[sdno]->b_bcount;
            bpp[sdno]->b_bio1.bio_offset = (off_t)pstripe << DEV_BSHIFT;        /* transfer from here */
@@ -468,8 +459,8 @@ parityrebuild(struct plex *plex,
            }
        }
        if (sdno != psd) {                      /* release all bps except parity */
-           bpp[sdno]->b_flags |= B_INVAL;
-           brelse(bpp[sdno]);                  /* give back our resources */
+           Free(bpp[sdno]->b_data);
+           relpbuf(bpp[sdno], &vinum_conf.physbufs);   /* give back our resources */
        }
     }
@@ -489,8 +480,8 @@ parityrebuild(struct plex *plex,
                break;
            }
        }
-       bpp[psd]->b_flags |= B_INVAL;
-       brelse(bpp[psd]);                       /* give back our resources */
+       Free(bpp[psd]->b_data);
+       relpbuf(bpp[psd], &vinum_conf.physbufs);        /* give back our resources */
     }
     /* release our resources */
     Free(bpp);
@@ -543,14 +534,13 @@ initsd(int sdno, int verify)
 
     size = min(sd->init_blocksize >> DEV_BSHIFT, sd->sectors - sd->initialized) << DEV_BSHIFT;
 
+    bp = trypbuf(&vinum_conf.physbufs);         /* Get a buffer */
+    if (bp == NULL)
+       return ENOMEM;
+    bp->b_data = Malloc(size);
 
     verified = 0;
     while (!verified) {                         /* until we're happy with it, */
-       crit_enter();
-       bp = geteblk(size);                      /* Get a buffer */
-       crit_exit();
-       if (bp == NULL)
-           return ENOMEM;
        bp->b_bcount = size;
        bp->b_resid = bp->b_bcount;
        bp->b_bio1.bio_offset = (off_t)sd->initialized << DEV_BSHIFT;   /* write it to here */
@@ -561,49 +551,33 @@ initsd(int sdno, int verify)
        biowait(bp);
        if (bp->b_flags & B_ERROR)
            error = bp->b_error;
-       if (bp->b_qindex == 0) {                /* not on a queue, */
-           bp->b_flags |= B_INVAL;
-           bp->b_flags &= ~B_ERROR;
-           brelse(bp);                         /* is this kosher? */
-       }
        if ((error == 0) && verify) {           /* check that it got there */
-           crit_enter();
-           bp = geteblk(size);                 /* get a buffer */
-           if (bp == NULL) {
-               crit_exit();
-               error = ENOMEM;
-           } else {
-               bp->b_bcount = size;
-               bp->b_resid = bp->b_bcount;
-               bp->b_bio1.bio_offset = (off_t)sd->initialized << DEV_BSHIFT;   /* read from here */
-               bp->b_bio1.bio_driver_info = VINUM_SD(sdno);
-               bp->b_cmd = BUF_CMD_READ;       /* read it back */
-               crit_exit();
-               sdio(&bp->b_bio1);
-               biowait(bp);
-               /*
-                * XXX Bug fix code.  This is hopefully no
-                * longer needed (21 February 2000).
-                */
-               if (bp->b_flags & B_ERROR)
-                   error = bp->b_error;
-               else if ((*bp->b_data != 0)     /* first word spammed */
-                   ||(bcmp(bp->b_data, &bp->b_data[1], bp->b_bcount - 1))) {   /* or one of the others */
-                   printf("vinum: init error on %s, offset 0x%llx sectors\n",
-                       sd->name,
-                       (long long) sd->initialized);
-                   verified = 0;
-               } else
-                   verified = 1;
-               if (bp->b_qindex == 0) {        /* not on a queue, */
-                   bp->b_flags |= B_INVAL;
-                   bp->b_flags &= ~B_ERROR;
-                   brelse(bp);                 /* is this kosher? */
-               }
-           }
+           bp->b_bcount = size;
+           bp->b_resid = bp->b_bcount;
+           bp->b_bio1.bio_offset = (off_t)sd->initialized << DEV_BSHIFT;       /* read from here */
+           bp->b_bio1.bio_driver_info = VINUM_SD(sdno);
+           bp->b_cmd = BUF_CMD_READ;           /* read it back */
+           sdio(&bp->b_bio1);
+           biowait(bp);
+           /*
+            * XXX Bug fix code.  This is hopefully no
+            * longer needed (21 February 2000).
+            */
+           if (bp->b_flags & B_ERROR)
+               error = bp->b_error;
+           else if ((*bp->b_data != 0)         /* first word spammed */
+               ||(bcmp(bp->b_data, &bp->b_data[1], bp->b_bcount - 1))) {       /* or one of the others */
+               printf("vinum: init error on %s, offset 0x%llx sectors\n",
+                   sd->name,
+                   (long long) sd->initialized);
+               verified = 0;
+           } else
+               verified = 1;
        } else
            verified = 1;
     }
+    Free(bp->b_data);
+    relpbuf(bp, &vinum_conf.physbufs);
     if (error == 0) {                          /* did it, */
        sd->initialized = size >> DEV_BSHIFT;   /* moved this much further down */
        if (sd->initialized >= sd->sectors) {   /* finished */
diff -r b16b8dd8a0d1 sys/dev/raid/vinum/vinumvar.h
--- a/sys/dev/raid/vinum/vinumvar.h     Fri Jul 07 15:03:06 2006 +0200
+++ b/sys/dev/raid/vinum/vinumvar.h     Fri Jul 07 15:04:51 2006 +0200
@@ -313,6 +313,7 @@ struct _vinum_conf {
     struct request *lastrq;
     struct bio *lastbio;
 #endif
+    int physbufs;
 };
 
 /* Use these defines to simplify code */
#10

Updated by dillon almost 18 years ago

:Of course I tried to find out what the "really correct way" should look like. Now I don't get any panics and it seems that it might work, but a completely different, yet seemingly related panic occurred (it happened several times, not necessarily directly coupled):

Well, if you are going to use getpbuf() you have to be absolutely sure
that b_bcount and b_resid are set properly before any I/O. geteblk()
sets those fields to the passed size; getpbuf() sets them to the size
of the pbuffer, which is MAXBSIZE.

I recommend that we stick with geteblk() for now.
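
Concretely (a sketch of the difference being described, not verbatim
kernel code):

    /* geteblk() path: the sizing fields already match the request */
    bp = geteblk(size);
    /* here bp->b_bcount == size and bp->b_resid == size */

    /* pbuf path: the fields start out as MAXBSIZE and must be fixed up */
    bp = trypbuf(&vinum_conf.physbufs);
    bp->b_bcount = size;            /* otherwise the driver sees MAXBSIZE */
    bp->b_resid = bp->b_bcount;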

:dev = #ad/0x20003, block = 11896, fs = /var
:panic: ffs_blkfree: freeing free block
:
:backtrace I can't because:

It's possible that you are overwriting something, but I will note that
ffs_blkfree panics have been reported by others. I am guessing that it
is a softupdates bug of some sort. I've put tons of assertion code in
UFS to try to catch the blkfree panic earlier, with no success, so my
guess is that it is not corruption per se but instead softupdates
reusing a block which has a pending free associated with it, then later
writing out the free state while the block is still in use.
-Matt