You say server@address, but really it would be clearer to say user@server or user@address. When using ssh and the like, the part before the @ is the user you are authenticating as. For example, you could set up the following host entry in your ~/.ssh/config:
Host servername
    Hostname 192.168.1.102
    User banana
By default, if you ssh servername, you will connect as your current local username. With this ssh configuration present, ssh servername will connect to 192.168.1.102 as banana. Even with this setup, though, ssh apple@servername would connect as apple instead.
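With that entry in place (banana and the address are just the example values above), the short and long forms behave the same, and an explicit user still overrides the config:

    # Both connect to 192.168.1.102 as banana
    ssh servername
    ssh banana@192.168.1.102
    # An explicit user wins over the config entry
    ssh apple@servername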
Your problems are compounded by the fact that mounts in fstab are run by the root user on the local system, so all the setup in the world as your normal user will not help. For example, the first time you ssh servername, you accept the remote server's host key. That acceptance only happens for that user: if you sudo su - to root and run ssh servername as root, you will get the same prompt again. This is another common problem when setting up sshfs in fstab for the first time.
The identifier (comment) in the public key has no bearing on authentication. It is simply text to help you pick out a specific key on the remote server in case it is compromised. When you generate a key, make this identifier something related to the machine using the key; that way, if the machine is compromised, you can remove that one key from all of your servers without revoking every other key, so you keep your own access.
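For example, with a hypothetical comment naming the machine, you could generate and later revoke a single key like this:

    # Generate a key whose comment identifies the machine it lives on
    ssh-keygen -t ed25519 -C "user@work-laptop"
    # Later, on each server, delete only that machine's key
    sed -i '/user@work-laptop/d' ~/.ssh/authorized_keys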
More details: SSH Client Configuration
After you reboot, if you run mount, you should see all your mounted drives. My guess is that you will not see the sshfs mount listed and that the directory is simply empty, which is worth distinguishing from successfully mounting an empty share.
It is possible your drive is not mounting because you have noauto in your fstab configuration and the x-systemd.automount isn't working properly. Be sure to run sudo systemctl daemon-reload after making changes to fstab.
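systemd derives unit names from the mount path (so /mnt/sshfsshare becomes mnt-sshfsshare), which lets you inspect what it generated from that fstab line:

    sudo systemctl daemon-reload
    # The automount unit sits on the path and triggers the mount on first access
    systemctl status mnt-sshfsshare.automount
    systemctl status mnt-sshfsshare.mount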
You can try mount /home/user/Documents/folder and see if you get a specific error. If you get no error, it mounts, and you see the files, then this is very likely the cause.
Also, you have allow_other, but you don't have the user option. The drive could be mounted while you have no rx permission to list the files; this is where user, uid and gid come into play.
user@server /mnt/sshfsshare fuse.sshfs user,_netdev,noauto,defaults,rw,x-systemd.automount,x-systemd.device-timeout=2s,Compression=no,cache=yes,kernel_cache,reconnect,uid=1000,gid=1000,idmap=user,allow_other,IdentityFile=/home/user/.ssh/id_rsa 0 2
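As a sanity check on the uid and gid values in that line (1000 is typical for the first regular user, but verify yours):

    # Print your numeric uid and gid to match against the fstab options
    id -u
    id -g
    # Once mounted, confirm you can actually list the mount point
    ls -ld /mnt/sshfsshare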
You are setting identityfile, which I always type as IdentityFile, but I don't think the case matters. This means that even if root runs the mount, it knows which key to use. You are even correct to use /home/user rather than ~/ in the IdentityFile path. The only way this could be a problem is if your key file is encrypted/password protected; in that case, the mount would fail because nothing enters the passphrase.
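A quick test, using the key path from your fstab line: ask ssh-keygen to derive the public key, which prompts for a passphrase only if the key is encrypted:

    # Prompts "Enter passphrase:" if the private key is protected;
    # otherwise it prints the public key immediately
    ssh-keygen -y -f /home/user/.ssh/id_rsa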