Bug #2768
Slave HAMMER PFSes cannot be exported via NFS (Status: Closed)
Description
I already described the situation on the users mailing list: suppose you have a
slave HAMMER PFS mounted at /nbackup and want to export it, so you add a line
like this to /etc/exports:
/nbackup -ro -network 192.168.10/24
When you restart the mountd daemon you will get these errors in
/var/log/messages:
Jan 14 18:37:24 ressurected mountd[2279]: can't export /nbackup
Jan 14 18:37:24 ressurected mountd[2279]: bad exports list line /nbackup -ro -network 192.168.10/24
Once the PFS is upgraded to master, it can be exported fine.
Files
Updated by tuxillo almost 10 years ago
- Category set to Userland
- Assignee set to tuxillo
- Target version set to 4.2
Updated by shamaz almost 10 years ago
- File test.c added
As tuxillo wrote on the users mailing list, mountd fails to export the PFS because its call to mountctl(2) fails. Playing with a vkernel I learned that presumably all mountctl calls will fail if the first argument is the mountpoint of a slave PFS. This is because kern_mountctl() in sys/kern/vfs_syscalls.c returns EINVAL in this case (the VPFSROOT flag is not set):
/*
* Must be the root of the filesystem
*/
if ((vp->v_flag & (VROOT|VPFSROOT)) == 0) {
	vrele(vp);
	return (EINVAL);
}
I am attaching a simple test which returns 0 if invoked with the mountpoint of a master PFS and -1 if invoked with the mountpoint of a slave PFS (or any other directory which is not a mountpoint).
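For reference, a test along these lines might look roughly like the sketch below. This is not the attached test.c; it assumes MOUNTCTL_SET_EXPORT with MNT_DELEXPORT (the same operation mountd performs) and has to be run as root:
/*
 * Rough sketch, not the attached test.c: exercise the same mountctl(2)
 * path that mountd uses.  MNT_DELEXPORT merely clears the export list,
 * but the call still has to pass the VROOT|VPFSROOT check in
 * kern_mountctl(), so it fails with EINVAL on a slave PFS mountpoint.
 */
#include <sys/param.h>
#include <sys/mount.h>
#include <sys/mountctl.h>

#include <err.h>
#include <stdio.h>
#include <string.h>

int
main(int argc, char **argv)
{
	struct export_args ea;
	int r;

	if (argc != 2)
		errx(1, "usage: %s <mountpoint>", argv[0]);

	memset(&ea, 0, sizeof(ea));
	ea.ex_flags = MNT_DELEXPORT;	/* clear export list only */

	r = mountctl(argv[1], MOUNTCTL_SET_EXPORT, -1,
		     &ea, sizeof(ea), NULL, 0);
	if (r < 0)
		warn("mountctl(%s)", argv[1]);

	/* 0 for a master PFS mountpoint, -1 otherwise. */
	return (r < 0 ? -1 : 0);
}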
Updated by tuxillo almost 10 years ago
- Status changed from New to In Progress
Hi,
Yes exactly, but mountctl(2) is doing the right thing. It needs to make sure it's setting flags on exportable mounts.
I'm not sure yet what the solution to this might be.
Cheers,
Antonio Huete
Updated by shamaz almost 10 years ago
- File hammer_inode.c.patch added
Hello.
> I'm not sure yet what the solution to this might be
Look at this code in src/sys/vfs/hammer/hammer_inode.c:
/*
 * Only mark as the root vnode if the ip is not
 * historical, otherwise the VFS cache will get
 * confused.  The other half of the special handling
 * is in hammer_vop_nlookupdotdot().
 *
 * Pseudo-filesystem roots can be accessed via
 * non-root filesystem paths and setting VROOT may
 * confuse the namecache.  Set VPFSROOT instead.
 */
if (ip->obj_id == HAMMER_OBJID_ROOT &&
    ip->obj_asof == hmp->asof) {
	if (ip->obj_localization == 0)
		vsetflags(vp, VROOT);
	else
		vsetflags(vp, VPFSROOT);
}
Slave PFSes are considered "historical" and therefore the VPFSROOT flag is not set. If neither VPFSROOT nor VROOT is set, the call to mountctl fails. I attach a patch which can solve this situation, but I am not sure if this is the right thing to do, because I know nothing about this VFS cache and how it can be "confused".
Updated by shamaz almost 10 years ago
Hello.
I took a second look at the code and found that VPFSROOT is only used in the mount and mountctl system calls (see src/sys/kern/vfs_syscalls.c), so I believe setting this flag on the mountpoint of a slave PFS or any other "historical" mount will not hurt anything (like that VFS cache). NFS is also working fine with my patch applied.
I understand the comment in hammer_inode.c (namely: "Only mark as the root vnode if the ip is not historical, otherwise the VFS cache will get confused.") in the following way:
If we do a mount like this: # mount -t null /@0x%016lx:00000 /historical-pfs0-mountpoint, do NOT set the VROOT flag or the VFS cache will get confused.
If we do mounts like these: # mount -t null /@-1:00000 /current-pfs0 or # mount -t hammer /dev/serno/XXXXX /hammer-mountpoint, SET the VROOT flag.
So it is irrelevant to VPFSROOT and we can safely set it if ip->obj_id == HAMMER_OBJID_ROOT, even if ip->obj_asof != hmp->asof.
Is my guess right? Can you run some tests to make sure that I did not break anything? I think I need some help here.
Updated by dillon over 9 years ago
- File hammer43.patch hammer43.patch added
- Assignee changed from tuxillo to dillon
Well, I am still uncertain as to how this will affect the VFS cache. I've included a new patch that basically does what is suggested... sets VPFSROOT as long as obj_id is HAMMER_OBJID_ROOT. It appears to work in a quick test of a SLAVE. The slave does not appear to be frozen on the export side. Note that the client-side mount will (should) be a snapshot of the SLAVE as of when the mount is made, so the client will not see any updates the slave gets until it remounts it. Also, if the history backing that snapshot is deleted through normal hammer cleanups, the existing mount will become unstable.
-Matt
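For reference, the change described above presumably amounts to something like the following in hammer_inode.c (a sketch of the idea, not necessarily the exact hammer43.patch):
/*
 * Sketch: set VPFSROOT whenever obj_id is HAMMER_OBJID_ROOT, even for
 * historical (slave / as-of) inodes, so kern_mountctl() accepts the
 * PFS mountpoint.  VROOT stays reserved for the non-historical root
 * of the base filesystem.
 */
if (ip->obj_id == HAMMER_OBJID_ROOT) {
	if (ip->obj_localization == 0 &&
	    ip->obj_asof == hmp->asof)
		vsetflags(vp, VROOT);
	else
		vsetflags(vp, VPFSROOT);
}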
Updated by shamaz over 9 years ago
Seems to be working for me. Thank you. If there is anything I can do, just tell me.