
How can I mount a remote directory over SSH so that it is available just as if it were a local directory?

Maythux

7 Answers


First install the module:

sudo apt-get install sshfs

Load the fuse module into the kernel:

sudo modprobe fuse

Setting permissions (Ubuntu versions < 16.04):

sudo adduser $USER fuse
sudo chown root:fuse /dev/fuse
sudo chmod +x /bin/fusermount

Now we'll create a directory to mount the remote folder in.

I chose to create it in my home directory and call it remoteDir.

mkdir ~/remoteDir

Now run the command to mount it (here, mounting in my home directory):

sshfs maythux@192.168.xx.xx:/home/maythuxServ/Mounted ~/remoteDir

Now it should be mounted:

cd ~/remoteDir
ls -l
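If the connection is flaky, or you need a non-default key or port, sshfs accepts extra options. A sketch, reusing the host and paths from the example above (adjust them for your setup):

```shell
# Reconnect automatically after dropped connections, and pass an
# explicit private key and SSH port (22 is the default anyway)
sshfs -o reconnect,ServerAliveInterval=15 \
      -o IdentityFile=~/.ssh/id_rsa -p 22 \
      maythux@192.168.xx.xx:/home/maythuxServ/Mounted ~/remoteDir

# When finished, unmount with:
fusermount -u ~/remoteDir
```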
Maythux
  • I'm a little confused... in the sshfs command, I think the local mount-point directory is named remoteDir, and on the SSH server there is a dir /home/maythuxServ/Mounted that is not mounted locally, and I cannot tell, or care, whether it's mounted elsewhere? – Volker Siegel Nov 23 '14 at 00:56
  • I skipped some of these steps under 14.04 when I used the following guide: https://help.ubuntu.com/community/SSHFS – Hemm Mar 09 '16 at 19:43
  • No fuse group needed (Ubuntu 16.04, Nov 2017): https://stackoverflow.com/questions/35635631/ubuntu-15-10-no-fuse-group – Matt Kleinsmith Nov 30 '17 at 00:34
  • Are these commands run on the client or the server? – Jeff Dec 12 '17 at 19:35
  • On 18.04, I skipped the full 2nd block (setting permissions) and it works fine. – optimist Nov 29 '18 at 06:30
  • Half of this answer either does not work or is outdated. Please consider updating. – Luís de Sousa Feb 28 '19 at 10:27

Configure ssh key-based authentication

Generate a key pair on the local host.

$ ssh-keygen -t rsa

Accept all suggestions with the Enter key.

Copy public key to the remote host:

$ ssh-copy-id -i .ssh/id_rsa.pub user@host

Install sshfs

$ sudo apt install sshfs

Mount remote directory

$ sshfs user@host:/remote_directory /local_directory

Don't try to add the remote filesystem to /etc/fstab

And don't try to mount shares via /etc/rc.local.

In both cases it won't work, because the network is not yet available when init reads /etc/fstab.

Install AutoFS

$ sudo apt install autofs

Edit /etc/auto.master

Comment out the following lines

#+/etc/auto.master.d
#+/etc/auto.master

Add a new line

/- /etc/auto.sshfs --timeout=30

Save and quit

Edit /etc/auto.sshfs

Add a new line

/local_directory -fstype=fuse,allow_other,IdentityFile=/local_private_key :sshfs\#user@remote_host\:/remote_directory

The remote user name is obligatory.

Save and quit

Start autofs in debug mode

$ sudo service autofs stop
$ sudo automount -vf

Observe logs of the remote ssh server

$ ssh user@remote_server
$ sudo tailf /var/log/secure

Check content of the local directory

You should see contents of the remote directory

Start autofs in normal mode

Stop AutoFS running in debug mode with CTRL-C .

Start AutoFS in normal mode

$ sudo service autofs start

Enjoy

(Tested on Ubuntu 14.04)

  • Still good on Ubuntu 18.04. Used it to mount a directory from a Raspberry Pi 2 on a PC. When issuing sudo automount -vf, if you get 1 remaining in /- it is likely because you have already manually mounted the directory at step Mount remote directory. Rebooting should fix the issue. – Fanta Jan 06 '20 at 10:09
  • The part about not adding entries to /etc/fstab is wrong; the option _netdev is made for this case. – Kuhlambo Mar 17 '20 at 09:49

Based on my experiments, explicitly creating the fuse group and adding your user to it is NOT required to mount ssh file system.

To summarize, here are the steps copied from this page:

  1. Install sshfs

$ sudo apt-get install sshfs

  2. Create a local mount point

$ mkdir /home/johndoe/sshfs-path/

  3. Mount the remote folder /remote/path on /home/johndoe/sshfs-path/

$ sshfs remoteuser@111.222.333.444:/remote/path /home/johndoe/sshfs-path/

  4. And finally, to unmount:

$ fusermount -u /home/johndoe/sshfs-path/


Install sshfs

sudo apt-get install sshfs

Add to fstab:

<USER>@<SERVER_NAME>:<server_path> <local_path> fuse.sshfs delay_connect,_netdev,user,idmap=user,transform_symlinks,identityfile=/home/<YOUR_USER_NAME>/.ssh/id_rsa,allow_other,default_permissions,rw,nosuid,nodev,uid=1000,gid=1000,nonempty 0 0
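To check the new entry without rebooting, you can ask mount to read it back from /etc/fstab; a sketch using the placeholders from the line above:

```shell
# Mount everything listed in /etc/fstab that is not yet mounted
sudo mount -a

# Or mount just this one entry by its mount point
sudo mount <local_path>

# Verify it is mounted
mount | grep sshfs
```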
Serg Kryvonos

Although it does not answer your question exactly, I just wanted to mention that you can achieve the same goal using "sftp" as well. Just type this address into your file manager's address bar:

sftp://remoteuser@111.222.333.444/remote/path
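The same URL also works from a terminal via GVfs if you prefer the command line; a sketch (gio ships with GNOME, and the address is the placeholder from above):

```shell
# Mount the sftp location with GVfs (prompts for the password if needed)
gio mount sftp://remoteuser@111.222.333.444/

# The files then appear under the per-user GVfs directory:
ls /run/user/$(id -u)/gvfs/
```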

An easy way to run sshfs mounts at startup is to add them to the root (or another user's) crontab, like this:

@reboot sshfs remoteuser@111.222.333.444:/remote/path /home/johndoe/sshfs-path/

And if you need to add a delay, you can use:

@reboot sleep 60 && sshfs remoteuser@111.222.333.444:/remote/path /home/johndoe/sshfs-path/
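If the job might run when the mount already exists (for example after a manual mount), you can guard it with mountpoint from util-linux; a crontab sketch using the same placeholder paths:

```shell
# Only run sshfs if the directory is not already a mount point
@reboot mountpoint -q /home/johndoe/sshfs-path/ || sshfs remoteuser@111.222.333.444:/remote/path /home/johndoe/sshfs-path/
```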
Artur Meinild

I would like to warn that, by default, it seems only the user who set up the mount can access the remote directory.

I set up a remote directory and created a crontab entry with sudo crontab -e. Later I found that the backup file was not written to the remote directory at all. Then I found that I could not even cd into the remote mount as root! So eventually I created the same task with crontab -e, and everything worked as expected.

Rick