20

Is there an easy way to transfer files between two SSH/SFTP servers? The perfect solution would be FileZilla, but it only lets you create a connection between local and remote, not between remote and remote.

Theoretically I could open two Nautilus windows, connect them to ssh://server1/path/to/folder and ssh://server2/path/to/folder, and just pull the files from one side to the other. My experience is that this is very unstable: transferring files totalling, say, 10 MB is no problem, but transferring something like 10 GB often resulted in Nautilus hanging and needing a ps -e | grep nautilus -> kill -9 <pid>. I also tested the same thing with Nemo and Caja. While Nemo tends to be more stable than the other two, it is still not perfect and also breaks from time to time. FileZilla is extremely stable (I never really got it to break), but it is not very flexible because of the mentioned fact that it can only connect to a single SSH server.

Of course I could also mount a folder with sshfs, but that is a rather inconvenient solution: too much setup just to get a simple transfer running.
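
For illustration, the sshfs route looks roughly like this (the mount points, user names and paths are just placeholders):

mkdir -p ~/mnt/server1 ~/mnt/server2
sshfs user1@server1:/path/to/folder ~/mnt/server1
sshfs user2@server2:/path/to/folder ~/mnt/server2
cp -r ~/mnt/server1/. ~/mnt/server2/   # data still flows through the local machine
fusermount -u ~/mnt/server1
fusermount -u ~/mnt/server2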

Is there any app that can handle transfers between two SSH servers without breaking? Ideal would be something like FileZilla that picks the job up again if the connection gets interrupted.

Solomon Ucko
Socrates
  • Not an answer because this is not a software recommendation site, but I've been using the (commercial) Beyond Compare (https://www.scootersoftware.com/) for years, and it's just great for this kind of task. It offers two windows, both of which can show a local path or an sftp:// URL, will show the differences between folders, and its ability to copy just the differences makes an excellent resume mechanism if it breaks, which happens very rarely in my experience. (Not affiliated with them except being a satisfied customer). – Guntram Blohm Feb 07 '19 at 17:13

4 Answers

36

If you are on an Ubuntu version that is still supported, your scp command provides the -3 switch, which enables copying files from remote1 to remote2 via localhost:

me@local:~> scp -3 user1@remote1:/path/to/file1 user2@remote2:/path/to/file2

You can also omit the -3 switch, but then you will need the public key (id_rsa.pub) of user1@remote1 in the file authorized_keys of user2@remote2:

me@local:~> scp user1@remote1:/path/to/file1 user2@remote2:/path/to/file2

Under the hood, scp then first does an ssh user1@remote1 and from there runs scp /path/to/file1 user2@remote2:/path/to/file2. That's why the credentials must be distributed differently than in the -3 solution.
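
For example, one way to get that key in place (assuming user1@remote1 already has a key pair; otherwise create one with ssh-keygen first) is:

me@local:~> ssh user1@remote1
user1@remote1:~> ssh-copy-id user2@remote2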

In other words:

  • scp -3 remote1:file1 remote2:file2 transfers the file from remote1 to localhost and then on to remote2. The data travels remote1 → localhost → remote2. The localhost is the 3rd party in this scenario, hence -3. For this to work, you will need the credentials of localhost on both remote1 and remote2 because localhost connects to both of them.

  • scp remote1:file1 remote2:file2 copies the file directly from remote1 to remote2 at the speed at which they are connected to each other. localhost is not involved here (besides issuing the command). The data travels remote1 → remote2. For this to work, you will need the credentials of localhost only on remote1, but additionally you need the credentials of remote1 on remote2, because localhost connects to remote1 only and then remote1 connects to remote2.

If possible, I would choose the second approach. As some comments already say, the network pipe between remote1 and remote2 is usually far thicker than the one between either of them and localhost.

PerlDuck
  • That's just beautiful. ssh is the Swiss army knife of software. Thanks, I learned something. – Organic Marble Feb 06 '19 at 18:57
  • Note that this approach, like the nautilus approach described in the question, will transfer the file first to the local machine, then up to the second server. This will cause significant slowdown when the two remote servers have a faster link between them than the local machine does to either. (For example, when the remote servers are in datacentres and the local machine has a DSL connection.) – Stobor Feb 07 '19 at 00:53
  • @Stobor Good point, thank you. I updated my answer to clarify a bit how the data travels with and without the -3. – PerlDuck Feb 07 '19 at 11:56
  • Would the second method work with agent forwarding, without having any key or password on remote1? – Eric Duminil Feb 07 '19 at 12:11
  • @EricDuminil I'm afraid I cannot tell. I have no real experience with agent forwarding. But I doubt it because remote1 is supposed to deny access when neither key nor password are supplied, isn't it? – PerlDuck Feb 07 '19 at 12:15
  • @PerlDuck: If I understand correctly, it goes like this: key on localhost, agent on localhost, agent forwarding on remote1, no key or agent on remote1 or remote2. It should allow connecting to remote2 from remote1 without needing to trust remote1. – Eric Duminil Feb 07 '19 at 12:32
  • @EricDuminil yes, this is one of the use cases for which agent forwarding is supposed to work. – Stobor Feb 07 '19 at 23:53
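
For the record, a minimal sketch of that agent-forwarding variant (assuming the key lives only on localhost and is loaded into an agent there; -o ForwardAgent=yes is a standard OpenSSH option that scp hands on to ssh, but treat this as an untested sketch):

me@local:~> eval "$(ssh-agent)" && ssh-add
me@local:~> scp -o ForwardAgent=yes user1@remote1:/path/to/file1 user2@remote2:/path/to/file2
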
10

In most cases the two SSH servers can reach each other (or at least one can reach the other), and again in most cases the workstation's internet connection is far worse than either server's.

If so, ordering one server to transfer to the other one is the way to go.

ssh server1 nohup scp somefile server2:somefile

Check nohup.out on server1 for errors.
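
If you want the command to return immediately and leave the copy running on server1, a variant along these lines should work (untested sketch, assuming key-based login is already set up):

ssh -f server1 'nohup scp somefile server2:somefile > nohup.out 2>&1 &'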

If server reachability is the other way around, you can reverse which machine is the master:

ssh server2 nohup scp server1:somefile somefile
Joshua
7

Perhaps you could use one of several GUI front-ends to rsync:

Is there any GUI application for command rsync?

Or perhaps you could use rsync directly from the command line to connect to both remote servers:

"How to rsync files between two remotes"

I often log in to one server with ssh and then, from that server's command line, use rsync to push or pull files to the other remote server -- that's generally much quicker than trying to transfer the files through some third computer.

rsync is smart enough that, if anything goes wrong and interrupts the process, a later run can resume right where it left off.
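
For example, a push from server1 to server2 might look roughly like this (hostnames and paths are placeholders; -a preserves attributes, -P keeps partially transferred files so an interrupted run can be resumed):

ssh server1
rsync -avP /path/to/folder/ server2:/path/to/folder/   # run this on server1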

0

You need to use the SCP protocol:

scp file_you_want_to_transfer login@address_of_second_server:/path_where_you_want_to_save

MrSnowMan