Summary:
I recently created a Compute Engine instance (boot image: Ubuntu 24.04) and successfully created and mounted a Filestore share (NFSv4) via an NFS mount.
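For reference, the share was mounted in the standard way, roughly like this (the server IP, share name, and mount point below are placeholders, not my actual values):
$ sudo apt-get install -y nfs-common
$ sudo mkdir -p /mnt/filestore
$ sudo mount -t nfs -o vers=4.1 10.0.0.2:/share1 /mnt/filestore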
When restoring my data from a 500 GB+ .tar file, I found that some directories (just a few) could be neither listed nor deleted.
Details:
For a problematic directory “foo” inside the restored ./bar/ tree, I can
$ cd bar
and
$ ls -1 .
which lists “foo” as expected. Likewise,
$ cd ./bar/foo
works, but then
$ ls
fails with
reading directory '.': Remote I/O error
Trying to remove foo with
$ cd ./bar; sudo rm -rf foo
(or a variety of similar commands) fails with
rm: cannot remove 'foo': Remote I/O error
I CAN mv/rename “foo” to something else (e.g. “baz”), but then I cannot delete the renamed directory either.
A bit of exploration with creating directories, files, and links shows the trigger: after moving foo to a new name, I can recreate a new subdirectory called foo and then create regular files and links in the recreated foo without problems. However, as soon as I create a symbolic link in foo whose target file name is longer than 127 characters (!), the problem manifests: any preexisting contents of foo become unreadable and foo itself becomes undeletable (as above).
This is not intermittent; I have over a dozen examples of this behavior.
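A distilled reproduction, assuming the share is mounted at /mnt/filestore (a placeholder path), looks like this:
$ cd /mnt/filestore && mkdir repro && cd repro
$ touch somefile                               # regular files: fine
$ ln -s /short/target link1                    # short symlink target: fine
$ ln -s "$(printf 'a%.0s' {1..128})" link2     # target longer than 127 characters
$ ls
reading directory '.': Remote I/O error
$ cd .. && sudo rm -rf repro
rm: cannot remove 'repro': Remote I/O error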
Request (Help!):
- How can I manage symbolic links whose targets are longer than 127 characters on Filestore with NFS 4.1, so that I can usefully restore my tar files? The extraction command (tar -xvf) appears to complete without error, and the archive contains a few links with long targets, generally long absolute paths. (A sketch for scanning the archive for such links follows this list.)
- How can I delete these old directories and release their contents (short of backing up and restoring to a new file share)? Any attempt to remove them (or rename and then remove them) results in “rm: cannot remove ‘foo’: Remote I/O error”. The alternative I have considered is a backup and restore to a new Filestore share, but a backup may simply fail to capture the problematic links.
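For the first question, a partial workaround I am considering (a sketch only; archive.tar is a placeholder name) is to list the archive before extraction and flag symlinks whose targets exceed 127 characters:
$ tar -tvf archive.tar | awk -F' -> ' 'NF == 2 && length($2) > 127'
The matching entries could then be skipped at extraction time with tar's --exclude option and recreated by hand afterwards (e.g. with shorter or relative targets), but that still leaves the second question open.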
Note that this problem does not arise on AWS EC2/EFS/NFSv4, nor on local directories of my local Ubuntu 20.04 VM.
Many thanks in advance,