Bug #1586

HAMMER: you can mount_hammer a UFS that was a hammer fs before

Added by lentferj about 15 years ago. Updated over 10 years ago.

Status: Closed
Priority: Normal
Assignee: -
Category: VM subsystem
Target version:
Start date:
Due date:
% Done: 100%
Estimated time:

Description

If a partition contains a hammer fs and you newfs it to UFS, you can
afterwards still mount it as a hammer fs.
You can even still run hammer info and write data on the partition
(tried with dd).

To reproduce, do the following:

atom# newfs_hammer -L pgsql /dev/ad11s2d
Volume 0 DEVICE /dev/ad11s2d size 146.48GB
initialize freemap volume 0
--------------------------------------------

1 volume total size 146.48GB version 2
boot-area-size: 64.00MB
memory-log-size: 0.50GB
undo-buffer-size: 152.00MB
total-pre-allocated: 168.00MB
fsid: 98d1b558-c09e-11de-81f0-0122685cfb53

NOTE: Please remember that you may have to manually set up a
cron(8) job to prune and reblock the filesystem regularly.
By default, the system automatically runs 'hammer cleanup'
on a nightly basis. The periodic.conf(5) variable
'daily_clean_hammer_enable' can be unset to disable this.
Also see 'man hammer' and 'man HAMMER' for more information.

atom# newfs /dev/ad11s2d
/dev/ad11s2d: media size 149993.15MB
Warning: Block size and bytes per inode restrict cylinders per group to 89.
Warning: 1748 sector(s) in last cylinder unallocated
/dev/ad11s2d: 307185964 sectors in 74997 cylinders of 1 tracks, 4096 sectors
149993.1MB in 843 cyl groups (89 c/g, 178.00MB/g, 22528 i/g)
super-block backups (for fsck -b #) at:
32, 364576, 729120, 1093664, 1458208, 1822752, 2187296, 2551840,
2916384, 3280928, 3645472, 4010016, 4374560, 4739104, 5103648, 5468192,
5832736, 6197280, [...]

atom# mount_hammer /dev/ad11s2d /mnt

atom# mount
ROOT on / (hammer, local)
devfs on /dev (devfs, local)
/dev/serno/9SF12T4Y.s2a on /boot (ufs, local)
/pfs/@@-1:00001 on /var (null, local)
/pfs/@@-1:00002 on /tmp (null, local)
/pfs/@@-1:00003 on /usr (null, local)
/pfs/@@-1:00004 on /home (null, local)
/pfs/@@-1:00005 on /usr/obj (null, local)
/pfs/@@-1:00006 on /var/crash (null, local)
/pfs/@@-1:00007 on /var/tmp (null, local)
procfs on /proc (procfs, local)
pgsql on /mnt (hammer, local)

Actions #1

Updated by wbh about 15 years ago

Jan Lentfer wrote:

If a partition contains a hammer fs and you newfs it to a UFS you can
afterwards still mount it as hammer fs.
You can even still run hammer info and write data on the partition
(tried with dd).

? 'dd' does not know or care anything about a fs.

What happens if you not only 'newfs' to UFS, but actually write to it AS a r/w
UFS mount? (e.g. - not with 'dd').

If hammer fs can 100% recover from that, there is witchcraft afoot....

;-)

Bill

Actions #2

Updated by lentferj about 15 years ago

Bill Hacker wrote:

Jan Lentfer wrote:

If a partition contains a hammer fs and you newfs it to a UFS you can
afterwards still mount it as hammer fs.
You can even still run hammer info and write data on the partition
(tried with dd).

? 'dd' does not know or care anything about a fs.

What happens if you not only 'newfs' to UFS, but actually write to it
AS a r/w UFS mount? (e.g. - not with 'dd').

If hammer fs can 100% recover from that, there is witchcraft afoot....

Actually I tried it the other way around and it worked: you can write data
onto the mount_hammer'd partition with dd if=/dev/zero of=/mnt/ZEROS (which
does go through the fs), umount it and mount it as UFS. Then you will not see
the created file with ls. Unmount again, mount_hammer, ls and ... surprise ...
the data is there again. That is nice, isn't it?

Cheers

Jan
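
For reference, here is a minimal sketch of the sequence Jan describes (device
name taken from the original report; the dd block count and the mount -t ufs
invocation are assumptions, not from the thread):

atom# mount_hammer /dev/ad11s2d /mnt
atom# dd if=/dev/zero of=/mnt/ZEROS bs=1m count=64   # write a file through the HAMMER mount
atom# umount /mnt
atom# mount -t ufs /dev/ad11s2d /mnt                 # mounted as UFS, ZEROS is not visible
atom# ls /mnt
atom# umount /mnt
atom# mount_hammer /dev/ad11s2d /mnt                 # mounted as HAMMER again, ZEROS reappears
atom# ls /mnt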

Actions #3

Updated by wbh about 15 years ago

Jan Lentfer wrote:

Bill Hacker wrote:

Jan Lentfer wrote:

If a partition contains a hammer fs and you newfs it to a UFS you can
afterwards still mount it as hammer fs.
You can even still run hammer info and write data on the partition
(tried with dd).

? 'dd' does not know or care anything about a fs.

What happens if you not only 'newfs' to UFS, but actually write to
it AS a r/w UFS mount? (e.g. - not with 'dd').

If hammer fs can 100% recover from that, there is witchcraft afoot....

Actually I tried it the other way around and it worked: you can write data
onto the mount_hammer'd partition with dd if=/dev/zero of=/mnt/ZEROS (which
does go through the fs), umount it and mount it as UFS. Then you will not see
the created file with ls. Unmount again, mount_hammer, ls and ... surprise ...
the data is there again. That is nice, isn't it?

Cheers

Jan

hammer fs (and others before it) has a great deal of ability to prevent or
detect unwanted alterations when running 'normally'.

Many fs have 'some' ability to detect malicious/experimental/accidental
offline alterations. IOW - they tend to 'trust' a dirty-bit flag, and take no
action until asked - THEN they may be able to find the damage.

'Some' fs - hammer among them - have at least limited ability to correct,
restore, or at least sequester (lost+found) alterations when detected.

But... unless the 'dirty bit' flag calls for a chkdsk, scandisk, fsck or
equivalent, 'offline'-originated damage will ordinarily go undetected until the
next maintenance run, OR until access is attempted to the altered/damaged
area or file.

IOW - there is nothing that forces a surreptitious / offline alteration to post a
'Kilroy was here' message at the front door.

That has been a fact of life from the time documents were stored on vellum or
papyrus. Applying ECC or checksums is all well and good - but something has to
cause them to be queried. On large enough media, that is usually too costly to
do gratuitously at mount time. IBM chkdsk on hpfs-386 had astonishingly good
recovery capability for its day. But even the least-extensive of its progressive
levels could add 15 wall-clock minutes to boot time with a mere 2GB to scan.

End result? A fast and robust fs, but totally impractical for large media.

UFS is better: For my (n)atacontrol RAID1, I usually set 'fsck -y' and stand the
pain of a veeeeery looong fsck on large HDD - knowing I'll not get recovery -
only damage-limitation and awareness of a problem.

hammer fs can offer more.

But it has no 'angel' watching from outside the window to tell it you are
messing with it offline.

So - whatever else, you are not onto a 'bug' here - just a fact of life.

Now - if hammer were asked to see if integrity had been compromised - either
by trying to read from that area, or by invoking any of the several
maintenance/admin runs, AND THEN failed to notice what had been done - that
would be another situation entirely.

Bill

Actions #4

Updated by lentferj about 15 years ago

Hi Bill,

but the fact is that you can mount the device as hammer fs and use it
as hammer fs and as UFS AT THE SAME TIME, although you have
"formatted" it as UFS. That should not be.

Cheers,

Jan


Actions #5

Updated by wbh about 15 years ago

Jan Lentfer wrote:

Hi Bill,

but the fact is that you can mount the device as hammer fs and use it as
hammer fs and as UFS AT THE SAME TIME, although you have "formatted" it
as UFS. That should not be.

Cheers,

Jan


What you call 'fact' is simply limited information as to what is going on.

'dd' a 'before' and 'after' copy of the first MB or so of it to some other media.

Go onto that media and diff those two files.

Take a hex editor to the area(s) of difference and see what is actually
happening on-disk.

Likewise look into your logs and see what ~/all.log, ~/messages, ~/dmesg and
~/console.log have to tell you.

DragonFlyBSD just might be a tad smarter than first meets the eye...

;-)
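
A rough sketch of the before/after comparison Bill suggests (device name
reused from the report; the 1 MB size, the temporary file names and the use of
cmp/hexdump are assumptions):

atom# dd if=/dev/ad11s2d of=/tmp/header.before bs=1m count=1   # copy the first MB before newfs
atom# newfs /dev/ad11s2d
atom# dd if=/dev/ad11s2d of=/tmp/header.after bs=1m count=1    # copy the first MB after newfs
atom# cmp -l /tmp/header.before /tmp/header.after | head       # list differing byte offsets
atom# hexdump -C /tmp/header.after | less                      # inspect the changed areas on-disk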

Actions #6

Updated by corecode about 15 years ago

Bill Hacker wrote:

Jan Lentfer wrote:

Hi Bill,

but the fact is that you can mount the device as hammer fs and use it
as hammer fs and as UFS AT THE SAME TIME, although you have
"formatted" it as UFS. That should not be.

Cheers,

Jan


What you call 'fact' is simply limited information as to what is going on.

'dd' a 'before' and 'after' copy of the first MB or so of it to some
other media.

Go onto that media and diff those two files.

Take a hex editor to the area(s) of difference and see what is actually
happening on-disk.

Likewise look into your logs and see what ~/all.log, ~/messages, ~/dmesg
and ~/console.log have to tell you.

DragonFlyBSD just might be a tad smarter than first meets the eye...

;-)

Bill, what on earth are you talking about? It is entirely clear what is
happening. That's why it is also clear that this is a problem. Why
should Jan look at his logs? We know what is happening!

Actions #7

Updated by wbh about 15 years ago

Simon 'corecode' Schubert wrote:
snip

Bill, what on earth are you talking about? It is entirely clear what is
happening. That's why it is also clear that this is a problem. Why
should Jan look at his logs? We know what is happening!

Mounting and unmounting a block device with one fs does not necessarily leave a
tell-tale that some other fs even looks for.

Is mount_ufs even hammer_fs aware on DFLY, let alone an(y) other *BSD?

Can mount_hammer distinguish between a UFS and hammer layout?

And if not, why on Earth would either fs NOT see the device as what it was told
to expect?

...and does NOTHING throw even a remark into one log or another?

Sounds to me like the same fs type-code is in use, no?

And what about registering hammer fs GPT/GUID codes?

Until both of those are hammer fs specific, even if on-disk info and/or DFLY
mount_<whatever> is recoded to determine the difference, any OTHER fs is likely
to remain oblivious.

That is what 'on Earth' I am on about...

Bill

Actions #8

Updated by dillon about 15 years ago

Well, this is basically simply due to the fact that the volume headers
are in different places and HAMMER and UFS's initial data layout winds
up being non-conflicting. But clearly it isn't going to stay that way
for long.

It's a fun exercise and we should probably adjust newfs for UFS and HAMMER
to clean out a few extra megabytes at the beginning of the partition to
ensure that any prior volume header is overwritten, but it isn't really
a bug per se. The filesystem type in the partition editor is the more
definitive information source.
-Matt
Matthew Dillon
<>
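
As an illustration of the adjustment Matt describes, wiping the first few
megabytes of the partition before re-running newfs should leave no stale
HAMMER volume header behind (device name taken from the report; the 16 MB
figure is an arbitrary assumption, not what newfs actually clears today):

atom# dd if=/dev/zero of=/dev/ad11s2d bs=1m count=16   # overwrite any prior volume header
atom# newfs /dev/ad11s2d
atom# mount_hammer /dev/ad11s2d /mnt                   # should now fail, no valid HAMMER header remains
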
Actions #9

Updated by wbh about 15 years ago

Matthew Dillon wrote:

Well, this is basically simply due to the fact that the volume headers
are in different places and HAMMER and UFS's initial data layout winds
up being non-conflicting. But clearly it isn't going to stay that way
for long.

:-)

.. perusing with the hex editor IS a mite tedious, if highly educational..

It's a fun exercise and we should probably adjust newfs for UFS and HAMMER
to clean out a few extra megabytes at the beginning of the partition to
ensure that any prior volume header is overwritten, but it isn't really
a bug per se. The filesystem type in the partition editor is the more
definitive information source.

-Matt
Matthew Dillon
<>

Reality is that with the commonality of movable media, it has become more
obvious that every OS on the planet makes 'complacent' - hence potentially
dangerous - assumptions based on a partial look at labels and such - legacy MBR
and newer GPT included. Sometimes on the basis of less than a full byte.

DFLY cannot boil THAT ocean. Not on its own, anyway.

Also clearly impractical as well as vanishingly irrelevant to worry about what
OS/2 / eCS, MorphOS, ForthOS, BeOS/Haiku, HelenOS, Minix, Plan9, VisopSys,
TuDOS, AoS Bluebottle, Syllable, Menuet, QNX - just to name those on media
within arm's length of my own keyboard - might see or do.

Folk who trifle with those expect to be odd-man out.

But DFLY could at least provide clues that reduce the risk of the most commonly
attached hosts assuming the 'wrong thing', to wit:

- Win / DOS

- Linux

- The *BSD's (Mac especially - as it is near-as-dammit blind by choice..)

And perhaps even:

- Solaris, AIX, HP-UX

None of these have to be made to ID hammer fs, let alone mount it.

Just be 'tricked' into leaving it the f*** alone. E.G. - look like 'unavailable'
or 'weird' - not 'empty space'.

See:

http://www.win.tue.nl/%7Eaeb/partitions/partition_types-1.html

.. but one of many often conflicting resources. And that is another problem.
Dearth of true 'standards'.

If the carefully-crafted longevity-focused hammer fs is really to someday be
trusted with the 'crown jewels', it may help to reduce its exposure to being
totally trashed by the wrong bit of cable accidentally plugged in.....

'BT,DT,GTTS - WBH'

Bill

Actions #10

Updated by tuxillo almost 11 years ago

  • Description updated (diff)
  • Category set to Userland
  • Status changed from New to Feedback
  • Assignee deleted (0)
  • Target version set to 3.8
  • % Done changed from 0 to 100

Hi,

Tried the following:

  1. newfs_hammer -L T1 -f /dev/vkd1s1a

Volume 0 DEVICE /dev/vkd1s1a size 4.00GB
initialize freemap volume 0
initializing the undo map (504 MB)
---------------------------------------------
1 volume total size 4.00GB version 6
boot-area-size: 8.00MB
memory-log-size: 8.00MB
undo-buffer-size: 504.00MB
total-pre-allocated: 0.51GB
fsid: e23fc6e1-989e-11e3-a03f-010162fc1594

NOTE: Please remember that you may have to manually set up a
cron(8) job to prune and reblock the filesystem regularly.
By default, the system automatically runs 'hammer cleanup'
on a nightly basis. The periodic.conf(5) variable
'daily_clean_hammer_enable' can be unset to disable this.
Also see 'man hammer' and 'man HAMMER' for more information.

WARNING: The minimum UNDO/REDO FIFO is 500MB, you really should not
try to format a HAMMER filesystem this small.

WARNING: HAMMER filesystems less than 50GB are not recommended!
You may have to run 'hammer prune-everything' and 'hammer reblock'
quite often, even if using a nohistory mount.
  2. mount /dev/vkd1s1a /mnt
    HAMMER recovery check seqno=000fbfff
    HAMMER recovery range 3000000000000000-3000000000000000
    HAMMER recovery nexto 3000000000000000 endseqno=000fc000
    HAMMER mounted clean, no recovery needed
  3. df -h /mnt
    Filesystem Size Used Avail Capacity Mounted on
    T1 3.5G 176M 3.3G 5% /mnt
  4. newfs vkd1s1a
    vkd1s1a: media size 4093.65MB
    Warning: Block size and bytes per inode restrict cylinders per group to 89.
    Warning: 712 sector(s) in last cylinder unallocated
    /dev/vkd1s1a: 8383800 sectors in 2047 cylinders of 1 tracks, 4096 sectors
    4093.7MB in 23 cyl groups (89 c/g, 178.00MB/g, 22528 i/g)
    super-block backups (for fsck -b #) at:
    32, 364576, 729120, 1093664, 1458208, 1822752, 2187296, 2551840, 2916384, 3280928, 3645472, 4010016, 4374560, 4739104, 5103648, 5468192, 5832736, 6197280,
    6561824, 6926368, 7290912, 7655456, 8020000
  5. newfs vkd1s1a
    vkd1s1a: media size 4093.65MB
    Warning: Block size and bytes per inode restrict cylinders per group to 89.
    Warning: 712 sector(s) in last cylinder unallocated
    /dev/vkd1s1a: 8383800 sectors in 2047 cylinders of 1 tracks, 4096 sectors
    4093.7MB in 23 cyl groups (89 c/g, 178.00MB/g, 22528 i/g)
    super-block backups (for fsck -b #) at:
    32, 364576, 729120, 1093664, 1458208, 1822752, 2187296, 2551840, 2916384, 3280928, 3645472, 4010016, 4374560, 4739104, 5103648, 5468192, 5832736, 6197280,
    6561824, 6926368, 7290912, 7655456, 8020000
  6. mount_hammer /dev/vkd1s1a /mnt
    HAMMER Illegal UNDO TAIL signature at 300000001f7ffff8
    HAMMER recovery failure during seqno backscan
    HAMMER recovery complete
    Failed to recover HAMMER filesystem on mount
    kthread 0x800fc02700 syncer11 has exited
    mount_hammer: mount on /mnt: Input/output error

Note it can't be mounted. This is latest master (HAMMER version 6).

Maybe I am missing something in the test?

Cheers,
Antonio Huete

Actions #11

Updated by tuxillo over 10 years ago

  • Category changed from Userland to VM subsystem
  • Status changed from Feedback to Closed

Could not be reproduced and no feedback was provided upon testing.
