Bug #2768

Slave HAMMER PFSes cannot be exported via NFS

Added by shamaz about 9 years ago. Updated almost 9 years ago.

Status: Closed
Priority: Normal
Assignee:
Category: Userland
Target version:
Start date: 01/14/2015
Due date:
% Done: 0%
Estimated time:
Description

This is the situation I already described on the users mailing list: suppose you have a slave HAMMER PFS mounted at /nbackup and want to export it, so you add a line like this to /etc/exports:

/nbackup -ro -network 192.168.10/24

When you restart the mountd daemon you will get these errors in /var/log/messages:

Jan 14 18:37:24 ressurected mountd[2279]: can't export /nbackup
Jan 14 18:37:24 ressurected mountd[2279]: bad exports list line /nbackup -ro -network 192.168.10/24

Once the PFS is upgraded to master, it can be exported fine.


Files

test.c (398 Bytes) shamaz, 01/15/2015 03:37 AM
hammer_inode.c.patch (685 Bytes) shamaz, 01/18/2015 01:56 AM
hammer43.patch (810 Bytes) dillon, 04/17/2015 02:05 PM
Actions #1

Updated by tuxillo about 9 years ago

  • Category set to Userland
  • Assignee set to tuxillo
  • Target version set to 4.2
Actions #2

Updated by shamaz about 9 years ago

  • File test.c added

As tuxillo wrote on the users mailing list, mountd fails to export the PFS because its call to mountctl(2) fails. Playing with a vkernel I learned that presumably all mountctl calls will fail if the first argument is the mountpoint of a slave PFS. This is because kern_mountctl() in sys/kern/vfs_syscalls.c returns EINVAL in that case (the VPFSROOT flag is not set):

/*
 * Must be the root of the filesystem
 */
if ((vp->v_flag & (VROOT|VPFSROOT)) == 0) {
        vrele(vp);
        return (EINVAL);
}

I am attaching a simple test which returns 0 if invoked with the mountpoint of a master PFS and -1 if invoked with the mountpoint of a slave PFS (or any other directory which is not a mountpoint).
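
For reference, here is a minimal sketch of what such a test might look like (the attached test.c is the authoritative version). It assumes the DragonFly mountctl(2) interface and uses the MOUNTCTL_MOUNTFLAGS query op purely as an example; the headers, op and buffer usage here are assumptions, but any op should hit the VROOT|VPFSROOT check in kern_mountctl() before being dispatched:

#include <sys/types.h>
#include <sys/mount.h>
#include <sys/mountctl.h>

#include <err.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Ask mountctl(2) for the mount flags of the given path.  On the
 * mountpoint of a slave PFS kern_mountctl() finds neither VROOT nor
 * VPFSROOT set on the vnode and fails with EINVAL before the op is
 * even dispatched.
 */
int
main(int argc, char **argv)
{
        int flags;

        if (argc != 2) {
                fprintf(stderr, "usage: %s mountpoint\n", argv[0]);
                exit(2);
        }
        if (mountctl(argv[1], MOUNTCTL_MOUNTFLAGS, -1,
                     NULL, 0, &flags, sizeof(flags)) < 0) {
                warn("mountctl(%s)", argv[1]);
                return (-1);
        }
        printf("mountctl(%s): ok, flags=0x%08x\n", argv[1], flags);
        return (0);
}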

Actions #3

Updated by shamaz about 9 years ago

Sorry, wrong attachment

Actions #4

Updated by shamaz about 9 years ago

  • File deleted (test.c)
Actions #5

Updated by tuxillo about 9 years ago

  • Status changed from New to In Progress

Hi,

Yes, exactly, but mountctl(2) is doing the right thing. It needs to make sure it's setting flags on exportable mounts.
I'm not sure yet what the solution to this might be.

Cheers,
Antonio Huete

Actions #6

Updated by shamaz about 9 years ago

Hello.

> I'm not sure yet what the solution to this might be

Look at this code in src/sys/vfs/hammer/hammer_inode.c:

/*
 * Only mark as the root vnode if the ip is not
 * historical, otherwise the VFS cache will get
 * confused.  The other half of the special handling
 * is in hammer_vop_nlookupdotdot().
 *
 * Pseudo-filesystem roots can be accessed via
 * non-root filesystem paths and setting VROOT may
 * confuse the namecache.  Set VPFSROOT instead.
 */
if (ip->obj_id == HAMMER_OBJID_ROOT &&
    ip->obj_asof == hmp->asof) {
        if (ip->obj_localization == 0)
                vsetflags(vp, VROOT);
        else
                vsetflags(vp, VPFSROOT);
}

Slave PFSes are considered "historical" and therefore the VPFSROOT flag is not set. If neither VPFSROOT nor VROOT is set, the call to mountctl fails. I attach a patch which can solve this situation, but I am not sure if this is the right thing to do, because I know nothing about this VFS cache and how it can get "confused".

Actions #7

Updated by shamaz about 9 years ago

Hello.

I took a second look at the code and found that VPFSROOT is only used in the mount and mountctl system calls (see src/sys/kern/vfs_syscalls.c), so I believe setting this flag on the mountpoint of a slave PFS or any other "historical" mount will not hurt anything (like that VFS cache). NFS also works fine with my patch applied.

I understand the comment in hammer_inode.c (namely: "Only mark as the root vnode if the ip is not historical, otherwise the VFS cache will get confused.") in the following way:

If we do a mount like this: # mount -t null /@0x%016lx:00000 /historical-pfs0-mountpoint, do NOT set the VROOT flag or the VFS cache will get confused.
If we do mounts like these: # mount -t null /@-1:00000 /current-pfs0 or # mount -t hammer /dev/serno/XXXXX /hammer-mountpoint, DO set the VROOT flag.

So it is irrelevant to VPFSROOT, and we can safely set it if ip->obj_id == HAMMER_OBJID_ROOT even if ip->obj_asof != hmp->asof.

Is my guess right? Can you run any tests to be sure that I did not break anything? I think I need some help here.

Actions #8

Updated by dillon almost 9 years ago

Well, I am still uncertain as to how this will affect the VFS cache. I've included a new patch that basically does what is suggested... it sets VPFSROOT as long as obj_id is HAMMER_OBJID_ROOT. It appears to work in a quick test of a SLAVE. The slave does not appear to be frozen on the export side. Note that the client-side mount will (should) be a snapshot of the SLAVE as of when the mount is made, so the client will not see any updates the slave gets until it remounts it. Also, if the history related to the snapshot it uses is deleted through normal hammer cleanups, the existing mount will become unstable.

-Matt
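
For context, here is a minimal sketch of the kind of change being discussed; the attached hammer_inode.c.patch and hammer43.patch are the authoritative versions, so treat this only as an assumption of what the final logic around the code quoted above in hammer_inode.c might look like: VROOT stays limited to the non-historical root, while any other PFS root vnode gets VPFSROOT regardless of ip->obj_asof, so mountctl(2) accepts slave PFS mountpoints.

if (ip->obj_id == HAMMER_OBJID_ROOT) {
        /*
         * VROOT only for the non-historical root of the root PFS;
         * any other PFS root (slaves, historical as-of views) gets
         * VPFSROOT so kern_mountctl() no longer rejects it.
         */
        if (ip->obj_localization == 0 && ip->obj_asof == hmp->asof)
                vsetflags(vp, VROOT);
        else
                vsetflags(vp, VPFSROOT);
}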

Actions #9

Updated by shamaz almost 9 years ago

It seems to be working for me. Thank you. If there is anything I can do, just tell me.

Actions #10

Updated by shamaz almost 9 years ago

  • Status changed from In Progress to Closed