Bug #2915 (open): Hammer mirror-copy problem

Added by Anonymous almost 8 years ago. Updated over 7 years ago.

Status: New
Priority: High
Assignee: -
Category: -
Target version:
Start date: 05/16/2016
Due date:
% Done: 0%
Estimated time:

Description

DragonFly v4.5.0.843.gfe3b7-DEVELOPMENT

When I mirror-copy a master to a slave and then upgrade the slave, the new master PFS can't be mirror-copied. This is reproducible, but only between two distinct HAMMER filesystems; if everything is done on the same filesystem, the problem does not appear to occur.
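
For reference, the commands below condense the transcript that follows into a minimal reproduction sketch. The mount points (/pfs and /volumes/BACKUP3) are the ones used in this report and would need to be adjusted for other setups.

hammer pfs-master /pfs/master
cp /COPYRIGHT /pfs/master/
hammer -y mirror-copy /pfs/master /volumes/BACKUP3/pfs/slave    # first mirror-copy works
hammer pfs-upgrade /volumes/BACKUP3/pfs/slave                   # slave becomes a master
hammer -y mirror-copy /volumes/BACKUP3/pfs/slave /volumes/BACKUP3/pfs/slave2
ls -l /volumes/BACKUP3/pfs/slave2/                              # fails: No such file or directory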

boojum# hammer pfs-master /pfs/master
Creating PFS #13 succeeded!
/pfs/master
sync-beg-tid=0x0000000000000001
sync-end-tid=0x00000001b44c0b20
shared-uuid=1191b9a4-1bc4-11e6-8e1d-418d5cb760e2
unique-uuid=1191b9aa-1bc4-11e6-8e1d-418d5cb760e2
label=""
prune-min=00:00:00
operating as a MASTER
snapshots directory defaults to /var/hammer/<pfs>
boojum# cp /COPYRIGHT /pfs/master/
boojum# ls -l /pfs/master/
total 13
-r--r--r--  1 root  wheel  6686 16-May-2016 17:12 COPYRIGHT
boojum# hammer -y mirror-copy /pfs/master /volumes/BACKUP3/pfs/slave
PFS slave /volumes/BACKUP3/pfs/slave does not exist. Auto create new slave PFS!
Creating PFS #31 succeeded!
/volumes/BACKUP3/pfs/slave
sync-beg-tid=0x0000000000000001
sync-end-tid=0x0000000000000001
shared-uuid=1191b9a4-1bc4-11e6-8e1d-418d5cb760e2
unique-uuid=2e551d04-1bc4-11e6-8e1d-418d5cb760e2
label=""
prune-min=00:00:00
operating as a SLAVE
snapshots directory defaults to /var/hammer/<pfs>
Prescan to break up bulk transfer
Prescan 1 chunks, total 0 MBytes (7296)
Mirror-read /pfs/master succeeded
boojum# ls -l /volumes/BACKUP3/pfs/slave/
total 13
-r--r--r--  1 root  wheel  6686 16-May-2016 17:12 COPYRIGHT
boojum# hammer pfs-upgrade /volumes/BACKUP3/pfs/slave
pfs-upgrade of PFS#31 () succeeded
boojum# hammer -y mirror-copy /volumes/BACKUP3/pfs/slave /volumes/BACKUP3/pfs/slave2
PFS slave /volumes/BACKUP3/pfs/slave2 does not exist. Auto create new slave PFS!
Creating PFS #32 succeeded!
/volumes/BACKUP3/pfs/slave2
sync-beg-tid=0x0000000000000001
sync-end-tid=0x0000000000000001
shared-uuid=1191b9a4-1bc4-11e6-8e1d-418d5cb760e2
unique-uuid=5ef43f68-1bc4-11e6-8e1d-418d5cb760e2
label=""
prune-min=00:00:00
operating as a SLAVE
snapshots directory defaults to /var/hammer/<pfs>
Prescan to break up bulk transfer
Prescan 1 chunks, total 0 MBytes (0)
Mirror-read /volumes/BACKUP3/pfs/slave succeeded
boojum# ls -l /volumes/BACKUP3/pfs/slave2
lrwxr-xr-x 1 root wheel 10 16-May-2016 17:14 /volumes/BACKUP3/pfs/slave2 -> @@0x00000001000420d0:00032
boojum# ls -l /volumes/BACKUP3/pfs/slave2/
ls: /volumes/BACKUP3/pfs/slave2/: No such file or directory
boojum#

#1 - Updated by Anonymous almost 8 years ago

Would someone comment on the status of this issue please? I'm not being pushy; I just want to know if it's a low priority.
