
I would like a certain network directory to be automounted via sshfs on resume from suspend. Using information from another Ask Ubuntu post, I have it (partially) working through a script in /lib/systemd/system-sleep/. At the end of that script I log the contents of the local mount point to a file (with a timestamp), and I can see that the remote directory has been mounted correctly. However, when I then check manually in a terminal, the mount point is empty. If I rerun the script with sudo /lib/systemd/system-sleep/20-sshfs post, it mounts correctly again.

It appears that the sshfs command does work from the resume script, yet the directory gets unmounted immediately afterwards. How can I find out where and why? I didn't find anything useful under /var/log. This is on Ubuntu 20.04.
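
For context, here is a sketch of the kind of checks I mean, assuming the systemd journal captures output from the system-sleep hooks (the unit name and grep patterns are my guesses, not something I've verified):

# Journal entries around the last suspend/resume cycle
journalctl -b -u systemd-suspend.service --since "10 minutes ago"

# Watch live for anything mentioning sshfs/fuse/the mount point across a cycle
journalctl -f | grep -iE 'sshfs|fuse|sysroot'

# Check whether the mount is actually present after resume
findmnt /opt/sbc/sysroot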

Edit: adding script.

#!/bin/bash
# Run sshfs command after resume
case "$1" in
    post)
        counter=0
        initdircontents=$(ls -A /opt/sbc/sysroot)
        # Retry until the mount point is non-empty (spaces are required inside the [[ ]] test)
        while [[ -z "$(ls -A /opt/sbc/sysroot)" ]]; do
            # Make sure user_allow_other is uncommented in /etc/fuse.conf!
            # https://superuser.com/a/262800/
            # Then, the -o allow_other below is required! This lets users
            # access the mount even if root has mounted it.
            sshfs -o transform_symlinks -o allow_other -o ssh_command="sshpass -p passwd ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" remoteuser@192.168.xxx.xxx:/ /opt/sbc/sysroot
            dircontents=$(ls -A /opt/sbc/sysroot)
            ((counter++))
            if [[ "$counter" -eq 4 ]]; then
                break
            fi
            sleep 0.5
        done
    echo -e "sshfs resumed at $(date).\nInitial dir contents: $initdircontents\nAfter ssfhs: $dircontents\ncounter = $counter\n" > /home/dqbydt/sshfs.log
    echo -e "Dir contents at script exit: $(ls -A /opt/sbc/sysroot)" >> /home/dqbydt/sshfs.log
    ;;

esac

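For completeness, the user_allow_other setting referred to in the script's comments lives in /etc/fuse.conf; the relevant line just needs to be uncommented, roughly like this:

# /etc/fuse.conf
# Allows non-root users to access a mount created with -o allow_other
user_allow_other
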
The while loop was added to work around another, potentially unrelated issue; it is not relevant to this problem, as can be seen from the log file:

$ cat sshfs.log 
sshfs resumed at Wed 14 Dec 2022 08:32:17 PM.
Initial dir contents: 
After sshfs: bin
boot
dev
etc
home
lib
lost+found
media
mnt
opt
proc
resize.log
root
run
sbin
snap
srv
sys
tmp
usr
var
counter = 1

Dir contents at script exit: bin boot dev etc home lib lost+found media mnt opt proc resize.log root run sbin snap srv sys tmp usr var

You can see that counter is 1, which means it only went through the loop once. The initial dir contents are empty. After running sshfs, the remote dir has been mapped, and it is still in place when the script exits. Yet after the resume, if I do ls -la /opt/sbc/sysroot, it is empty.

dqbydt
  • Since the linked question isn't about sshfs specifically, it might be helpful to include your actual script in the question. – steeldriver Dec 14 '22 at 23:44
  • The original issue is still not resolved, but I found that sshfs itself has an option to reconnect to server if connection is lost: https://askubuntu.com/a/716618/. With that, it does automatically reconnect after resume from suspend. No scripting necessary. – dqbydt Dec 16 '22 at 05:28
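
For reference, a sketch of that reconnect-based mount, adapted from the linked answer (the ServerAlive numbers are just illustrative values; the ssh_command/sshpass options from the script above are omitted here):

sshfs -o reconnect -o ServerAliveInterval=15 -o ServerAliveCountMax=3 \
      -o transform_symlinks -o allow_other \
      remoteuser@192.168.xxx.xxx:/ /opt/sbc/sysroot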
