257

I have an Ubuntu server to which I am connecting using SSH.

I need to upload files from my machine into /var/www/ on the server; the files in /var/www/ are owned by root.

Using PuTTY, after I log in, I have to type sudo su and my password first in order to be able to modify files in /var/www/.

But when I am copying files using WinSCP, I can't create/modify files in /var/www/, because the user I'm connecting with does not have permissions on files in /var/www/, and I can't run sudo su as I do in an SSH session.

Do you know how I could deal with this?

If I were working on my local machine, I would call gksudo nautilus, but in this case I only have terminal access to the machine.

Jorge Castro
  • 71,754
Dimitris Sapikas
  • 2,705
  • 2
  • 15
  • 9

14 Answers

161

You're right, there is no sudo when working with scp. A workaround is to use scp to upload the files to a directory where your user has permission to create files, then log in via ssh and use sudo to move/copy the files to their final destination.

scp -r folder/ user@server.tld:/some/folder/you/dont/need/sudo
ssh user@server.tld
sudo mv /some/folder/you/dont/need/sudo/folder /some/folder/requiring/perms
# you may need to change the owner afterwards, e.g.:
# sudo chown -R user:user /some/folder/requiring/perms/folder

Another solution would be to change the permissions/ownership of the directories you are uploading the files to, so that your non-privileged user is able to write to those directories.
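As a sketch of that approach (assuming your login is dimitris and the web server runs as the www-data group, which is typical on Ubuntu but worth verifying on your system):

```shell
# Give the www-data group ownership of the web root and allow group writes
sudo chown -R root:www-data /var/www
sudo chmod -R g+w /var/www

# Add your user to that group (takes effect at your next login)
sudo usermod -aG www-data dimitris
```

After logging out and back in, uploads into /var/www via WinSCP should work without root.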

Generally, working in the root account should be an exception, not a rule - the way you're phrasing your question makes me think you may be abusing it a bit, which in turn leads to problems with permissions - under normal circumstances you don't need super-admin privileges to access your own files.

Technically, you can configure Ubuntu to allow remote login directly as root, but this feature is disabled for a reason, so I would strongly advise against doing that.

Sergey
  • 43,665
  • I didn't get the first solution, could you please be a little more specific? – Dimitris Sapikas Oct 30 '12 at 09:42
  • When I say my own files I mean /var/www; I am using my VPS as a web server... in my own folder I have full access – Dimitris Sapikas Oct 30 '12 at 09:44
  • 7
    Re. the first solution. 1. scp -r mysite dimitris@myserver.com:/home/dimitris/ 2. ssh dimitris@myserver.com 3. sudo mv ~/mysite /var/www - it's a 2-step process: first you scp the files to your home dir, then you log in via ssh and copy/move the files to where they should be – Sergey Oct 30 '12 at 21:58
  • It's ludicrous you can't setup a root account to use scp for root owned files. sudo simply does not work. Who owns my computer me or Canonical? It's just as bad as Android and Google from that respect. IMHO. I just want to copy files from /usr/local/bin from one machine I OWN to another machine I OWN. – WinEunuuchs2Unix Jun 23 '20 at 02:21
  • @WinEunuuchs2Unix: indeed you can set up your machines so you can SSH in as root; all it takes is a small change in a config. Once you do that you can scp files as root as much as you want. – Sergey Jun 25 '20 at 02:46
  • "there is no sudo when working with scp" - this is actually not completely true. You can use sudo scp.. to gain access to local root files. You can therefore ssh into the source machine and use sudo scp from there to get root files to or from a remote machine. – Elliptical view Aug 08 '20 at 01:50
  • @Ellipticalview: That's a clever idea but it won't work in many scenarios (the source files are on a local machine behind NAT, no SSH server on the source machine, the files on the source machine are owned by root too :) ) – Sergey Aug 09 '20 at 18:30
138

Quick way

From server to local machine:

ssh user@server "sudo cat /etc/dir/file" > /home/user/file

From local machine to server:

cat /home/user/file | ssh user@server "sudo tee -a /etc/dir/file"
  • 23
    This answer is underrated. It's simple, clean, reads or writes a root file with a single atomic operation, and requires nothing that's not already guaranteed to be there if you're using scp. Main drawback is that it does not copy permissions. If you want that, the tar solution is better.

    This is a powerful technique, particularly if combined with xargs/bash magic to traverse paths..

    – markgo2k May 16 '18 at 18:12
  • I think the question was about uploading a file from local to remote and not vice versa – Korayem Dec 19 '18 at 06:53
  • 3
    Beautifully done. This is exactly what I was looking for both up and down. Thank you – Jeremy Mar 28 '19 at 16:32
  • 3
    Deserves to be the top answer, for sure. – Andrew Watson Jun 19 '19 at 14:07
  • 2
    Can this be done for a directory instead of a file as well? – lucidbrot Aug 11 '19 at 15:06
  • 3
    ...but you will need to pass -t to ssh, unless {a number of prerequisite assumptions}. Otherwise it will error with sudo: no tty present and no askpass program specified. And tee -a? The question is about copying, not about appending and at the same time echoing back to stdout. – conny Aug 21 '19 at 07:33
  • 1
    I will add that another way around sudo: no tty present and no askpass program specified is to specify "sudo -S cat /etc/dir/file" rather than just "sudo cat /etc/dir/file" – rh16 Jul 21 '20 at 02:05
  • The sudo -S cat ... trick is great, but be aware, it WILL DISPLAY YOUR PASSWORD on the screen. – bitinerant Dec 04 '20 at 00:45
  • 1
  • Another thing to keep in mind is that if the server has a login banner, it will be added to the file, as well as the message Connection to xxx.xxx.xxx.xxx closed. To avoid that, add -o LogLevel=QUIET to the ssh command. The command would then be ssh -o LogLevel=QUIET -t user@server "sudo cat /etc/dir/file" > /home/user/file. – Ernesto Allely Jan 27 '21 at 16:02
  • any way to do this for entire folders? – Fed Sep 21 '21 at 22:21
  • how to do this for an entire directory: ssh user@server "tar -cf - dir" | tar -x – German Garcia Jan 11 '22 at 22:17
  • 1
    Thanks! Does the job perfectly. Be aware that the tee -a flag is for appending content . You need to get rid of it to overwrite content. And to be compliant with ShellCheck SC2002 is more efficient to replace cat with redirection mechanism: ssh user@server "sudo tee -a /etc/dir/file" < /home/user/file. – Martin Tovmassian Sep 21 '22 at 13:51
47

Another method is to copy using tar + ssh instead of scp:

tar -c -C ./my/local/dir \
  | ssh dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www"
IBBoard
  • 237
  • 2
    This is the best way to do it. – mttdbrd Mar 30 '15 at 20:31
  • 3
    I can't get this method to work successfully. As written I get sudo: sorry, you must have a tty to run sudo. If I add "-t" to allocate a TTY then I get Pseudo-terminal will not be allocated because stdin is not a terminal.. I can't see this working without passwordless sudo. – IBBoard Oct 26 '15 at 16:35
  • 1
    @IBBoard : try the solution here using ssh -t: ssh -t dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www" – Alexander Bird Aug 18 '16 at 15:21
  • 2
    @AlexanderBird While that works in many cases, I'm not sure it works here because we're trying to pipe a tarball over the SSH connection. See https://serverfault.com/questions/14389/when-would-ssh-t-not-be-appropriate-instead-of-ssh – IBBoard Aug 19 '16 at 19:36
  • This is what finally worked for me. You don't have permissions to a remote file that you want to copy to local, do a sudo tar, archive it, change permissions using chmod and chown, and then copy it to local. Especially if it's a directory. – forumulator Aug 09 '18 at 09:07
  • tar: Cowardly refusing to create an empty archive. I think I'm selecting folders incorrectly. Is 'dir' the folder to be compressed? Is '/var/www/dir' the folder that will be created by uncompressing? – przemo_li Nov 21 '18 at 13:34
  • If you're already in the directory you want to tar up, use tar -c * | ssh dimitris@myserver.com "sudo tar -x --no-same-owner -C /var/www" – jaygooby Jan 15 '19 at 15:18
  • Building on the above, what I tend to do is: tar -cf - * | ssh user@myserver.com sudo /bin/bash -c '( cd /var/www && tar -xf - --no-same-owner )', this is typically because /bin/bash is listed in sudoers but tar might not be at $CORP, or sometimes it is su - that is listed. It can get uglier :) – Ed Neville Mar 14 '19 at 10:54
35

You can also use ansible to accomplish this.

Copy to remote host using ansible's copy module:

ansible -i HOST, -b -m copy -a "src=SRC_FILEPATH dest=DEST_FILEPATH" all

Fetch from remote host using ansible's fetch module:

ansible -i HOST, -b -m fetch -a "src=SRC_FILEPATH dest=DEST_FILEPATH flat=yes" all

NOTE:

  • The comma in the -i HOST, syntax is not a typo. It is the way to use ansible without needing an inventory file.
  • -b causes the actions on the server to be done as root. -b expands to --become, and the default --become-user is root, with the default --become-method being sudo.
  • flat=yes copies just the file, without recreating the whole remote path leading to it
  • Using wildcards in the file paths isn't supported by these ansible modules.
  • Copying a directory is supported by the copy module, but not by the fetch module.

Specific Invocation for this Question

Here's an example that is specific and fully specified, assuming the directory on your local host containing the files to be distributed is sourcedir, and that the remote target's hostname is hostname:

cd sourcedir && \
ansible \
   --inventory-file hostname, \
   --become \
   --become-method sudo \
   --become-user root \
   --module-name copy \
   --args "src=. dest=/var/www/" \
   all

With the concise invocation being:

cd sourcedir && \
ansible -i hostname, -b -m copy -a "src=. dest=/var/www/" all

P.S., I realize that saying "just install this fabulous tool" is kind of a tone-deaf answer. But I've found ansible to be super useful for administering remote servers, so installing it will surely bring you other benefits beyond deploying files.

  • I like this answer but I recommend you direct it at the asked question versus more generalized commentary before upvote. something like ansible -i "hostname," all -u user --become -m copy -a ... – Mike D Feb 16 '16 at 19:30
  • @MikeD: how do the above changes look? – erik.weathers Feb 19 '16 at 03:08
  • better but it fails, --module-name is the correct switch name. and since hostname is the only host in your inventory, you can just say all – Mike D Feb 19 '16 at 20:40
  • ah, definitely a great point with all, I've updated all 3 cmds with that. And good catch with --module-name instead of --module. – erik.weathers Feb 19 '16 at 21:01
  • 1
    Would something like -i 'host,' be valid syntax? I think it's easy to lose punctuation like that when reading a command. (For the reader I mean, if not the shell.) – mwfearnley Jun 10 '16 at 15:37
  • hi @mwfearnley: I'm not sure I follow the question. The syntax is valid, in that it is the only way supported by ansible to directly specify the hostnames without using an inventory file. I agree that it's not obvious nor intuitive, but "it is what it is". ¯\_(ツ)_/¯ – erik.weathers Jun 11 '16 at 18:21
  • I'm just wondering if the host, could be validly wrapped in quotes, just to show someone reading a script that: a) yes, there's supposed to be a comma there; and: b) don't worry, the extra punctuation won't, and can't, get misinterpreted by the shell. – mwfearnley Jun 11 '16 at 19:43
  • 1
    @mwfearnley: sure, the shell will treat -i 'host,' and same as -i host, or -i "host,". In general I prefer to keep these invocations as short as possible to keep them from being daunting, but you should feel free to make it as verbose and explicit as you think is needed for clarity. – erik.weathers Jun 15 '16 at 03:00
  • 2
    Way to go thinking outside the box! Great use of Ansible – jonatan Sep 16 '18 at 09:19
  • If you got something like "msg": "Missing sudo password" you can supply the sudo password by appending --extra-vars "ansible_become_pass=yourPassword" after -b – Mohamed Sep 26 '20 at 23:42
  • @Mohamed : interesting workaround. My instinctual reaction is that you should be careful specifying your password in clear text like this. i.e., Ensure it's not getting written to your shell history & that you are not running on a shared server, since the parameter would be visible in the process table. Also make sure ansible isn't logging / storing this anywhere locally or on the remote server. – erik.weathers Oct 05 '20 at 07:31
  • This should be the best answer. I just added -K which asks for sudo password. – Md. Minhazul Haque May 01 '21 at 20:17
25

Maybe the best way is to use rsync (Cygwin/cwRsync on Windows) over SSH?

For example, to upload files with owner www-data:

rsync -a --rsync-path="sudo -u www-data rsync" path_to_local_data/ login@srv01.example.com:/var/www

In your case, if you need root privileges, the command will look like this:

rsync -a --rsync-path="sudo rsync" path_to_local_data/ login@srv01.example.com:/var/www

See: scp to remote server with sudo.

kenorb
  • 10,347
12

When you run sudo su, any files you create will be owned by root, but by default it is not possible to log in directly as root with ssh or scp. It is also not possible to use sudo with scp, so the files are not usable. Fix this by claiming ownership of your files:

Assuming your user name is dimitri, you could use this command:

sudo chown -R dimitri:dimitri /home/dimitri

From then on, as mentioned in other answers, the "Ubuntu" way is to use sudo, and not root logins. It is a useful paradigm, with great security advantages.

  • I am using this solution anyway, but what if I could get full access to my own file system? I don't want to type sudo chown ... for every single directory :S – Dimitris Sapikas Oct 30 '12 at 09:47
  • 4
    Changing ownership of all system files to your user for the sake of convenience is highly discouraged. It allows any userspace bug you might encounter to severely compromise the security of your system. It is much better to change the ownership of only the files that you need to change or update by SCP, and to leave everything else owned by root (as it is supposed to be). That said, the -R in chown tells it to change the ownership of that directory, and all children files and directories recursively... so you can do anything you like. – le3th4x0rbot Oct 30 '12 at 17:48
  • hmm... that seems to be working fine, thank you!

    sorry I can't upvote (the system does not allow me to)

    – Dimitris Sapikas Oct 30 '12 at 19:05
7

If you use the OpenSSH tools instead of PuTTY, you can accomplish this by initiating the scp file transfer on the server with sudo. Make sure you have an sshd daemon running on your local machine. With ssh -R you can give the server a way to contact your machine.

On your machine:

ssh -R 11111:localhost:22 REMOTE_USERNAME@SERVERNAME

In addition to logging you in on the server, this will forward every connection made on the server's port 11111 to your machine's port 22: the port your sshd is listening on.

On the server, start the file transfer like this:

cd /var/www/
sudo scp -P 11111 -r LOCAL_USERNAME@localhost:FOLDERNAME .
bergoid
  • 71
4

Here's a modified version of Willie Wheeler's answer that transfers the file(s) via tar but also supports passing a password to sudo on the remote host.

(stty -echo; read passwd; stty echo; echo $passwd; tar -cz foo.*) \
  | ssh remote_host "sudo -S bash -c \"tar -C /var/www/ -xz; echo\""

The little bit of extra magic here is the -S option to sudo. From the sudo man page:

-S, --stdin Write the prompt to the standard error and read the password from the standard input instead of using the terminal device. The password must be followed by a newline character.

Now, we actually want the output of tar to be piped into ssh, and that redirects ssh's stdin to tar's stdout, removing any way to pass the password to sudo from the interactive terminal. (We could use sudo's ASKPASS feature on the remote end, but that is another story.) We can still get the password into sudo, though, by capturing it in advance and prepending it to the tar output: we perform those operations in a subshell and pipe the output of the subshell into ssh. This also has the added advantage of not leaving an environment variable containing our password dangling in our interactive shell.

You'll notice I didn't execute 'read' with the -p option to print a prompt. This is because the password prompt from sudo is conveniently passed back to the stderr of our interactive shell via ssh. You might wonder "how is sudo executing given it is running inside ssh to the right of our pipe?" When we execute multiple commands and pipe the output of one into another, the parent shell (the interactive shell in this case) executes each command in the sequence immediately after executing the previous. As each command behind a pipe is executed the parent shell attaches (redirects) the stdout of the left-hand side to the stdin of the right-hand side. Output then becomes input as it passes through processes. We can see this in action by executing the entire command and backgrounding the process group (Ctrl-z) before typing our password, and then viewing the process tree.

$ (stty -echo; read passwd; stty echo; echo $passwd; tar -cz foo.*) | ssh 
remote_host "sudo -S bash -c \"tar -C /var/www/ -xz; echo\""
[sudo] password for bruce: 
[1]+  Stopped                 ( stty -echo; read passwd; stty echo; echo 
$passwd; tar -cz foo.* ) | ssh remote_host "sudo -S bash -c \"tar -C 
/var/www/ -xz; echo\""

$ pstree -lap $$
bash,7168
  ├─bash,7969
  ├─pstree,7972 -lap 7168
  └─ssh,7970 remote_host sudo -S bash -c "tar -C /var/www/ -xz; echo"

Our interactive shell is PID 7168, our subshell is PID 7969 and our ssh process is PID 7970.

The only drawback is that read will accept input before sudo has had time to send back its prompt. On a fast connection and fast remote host you won't notice this, but you might if either is slow. Any delay will not affect the ability to enter the password; the prompt just might appear after you have started typing.

Note: I simply added a hosts file entry for "remote_host" to my local machine for the demo.

Bruce
  • 191
  • 1
  • 4
1

The scp command can't do what you are asking.

The approach I take is to use sudo on the local or remote system (whenever it's needed to read or write a file).

Copy remote root access only file to local root access only file

For example, let's say there is a privileged file /etc/pki/private/identity.pem on a remote system and you want to copy it to your local system. The file requires root/sudo access to read or write the file.

To do that, I use this command pattern:

ROOT_ONLY_FILE=/etc/pki/private/identity.pem
ssh host sudo ls -l $ROOT_ONLY_FILE
ssh host sudo cat $ROOT_ONLY_FILE | sudo tee $ROOT_ONLY_FILE > /dev/null
sudo chmod 600 $ROOT_ONLY_FILE
# sudo chown root:grp $ROOT_ONLY_FILE

It's very important to keep the owner, group and read-write modes set properly when copying these privileged files. That is why I start with the ls -l of the remote file.

The chmod and chown commands are intended to make sure the copy of the file has the same permissions as the original file.

Copy remote root access only file to local file

ssh host sudo cat /var/log/messages | cat > /tmp/host.messages

Using sudo cat $file lets you read the file. You can't use scp because scp would not allow access to that privileged file, but sudo cat runs on the remote system as root, so it works.

Copy local file to remote file that requires special access only

There are times when you want to update a remote file that is owned by tomcat or another user. In this case, you can still use sudo because root is allowed to write to the remote file, and writing to a file that already exists doesn't change its ownership or permission bits.

The command is:

cat logging.properties | ssh host sudo tee /home/tomcat/conf/logging.properties > /dev/null

Copying directories using tar

While the question only asked about files, it's important to realize that this approach can also work with directories.

See https://unix.stackexchange.com/a/10028/119816 for more details.
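A minimal sketch of the directory case (hostnames and paths here are placeholders; the target directory must already exist, e.g. created via sudo mkdir -p):

```shell
# Upload: stream a local directory into a root-owned remote directory
tar -cf - -C ./mysite . | ssh host "sudo tar -xf - --no-same-owner -C /var/www/mysite"

# Download: fetch a root-only remote directory to the local machine
ssh host "sudo tar -cf - -C /etc/pki/private ." | tar -xf - -C ./private-backup
```

Since tar preserves modes and timestamps inside the stream, this keeps permissions in a way the plain cat/tee approach does not.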

PatS
  • 115
1

You may use a script I've written, inspired by this topic:

touch /tmp/justtest && scpassudo /tmp/justtest remoteuser@ssh.superserver.com:/tmp/

but this requires some crazy stuff (which is, btw, automatically done by the script):

  1. the server the file is being sent to will no longer ask for a password when establishing an ssh connection to the source computer
  2. because a sudo prompt cannot be answered over the pipe, sudo will no longer ask the user for a password on the remote machine

Here goes the script:

interface=wlan0
if [[ $# -ge 3 ]]; then interface=$3; fi
thisIP=$(ifconfig | grep $interface -b1 | tail -n1 | egrep -o '[0-9.]{4,}' -m1 | head -n 1)
thisUser=$(whoami)
localFilePath=/tmp/justfortest
destIP=192.168.0.2
destUser=silesia
#dest 
#destFolderOnRemoteMachine=/opt/glassfish/glassfish/
#destFolderOnRemoteMachine=/tmp/

if [[ $# -eq 0 ]]; then 
echo -e "Send file to remote server to location where root permission is needed.\n\tusage: $0 local_filename [username@](ip|host):(remote_folder/|remote_filename) [optionalInterface=wlan0]"
echo -e "Example: \n\ttouch /tmp/justtest &&\n\t $0 /tmp/justtest remoteuser@ssh.superserver.com:/tmp/ "
exit 1
fi

localFilePath=$1

test -e "$localFilePath" || { echo "File $localFilePath does not exist"; exit 1; }

destString=$2
usernameAndHost=$(echo $destString | cut -f1 -d':')

if [[ "$usernameAndHost" == *"@"* ]]; then
destUser=$(echo $usernameAndHost | cut -f1 -d'@')
destIP=$(echo $usernameAndHost | cut -f2 -d'@')
else
destIP=$usernameAndHost
destUser=$thisUser
fi

destFolderOnRemoteMachine=$(echo $destString | cut -f2 -d':')

set -e #stop script if there is even single error

echo 'First step: we need to be able to execute scp without any user interaction'
echo 'generating public key on machine, which will receive file'
ssh $destUser@$destIP 'test -e ~/.ssh/id_rsa.pub -a -e ~/.ssh/id_rsa || ssh-keygen -t rsa'
echo 'Done'

echo 'Second step: download public key from remote machine to this machine so this machine allows the remote machine (the one receiving the file) to log in without asking for a password'

key=$(ssh $destUser@$destIP 'cat ~/.ssh/id_rsa.pub')
if ! grep "$key" ~/.ssh/authorized_keys; then
echo $key >> ~/.ssh/authorized_keys
echo 'Added key to authorized hosts'
else
echo "Key already exists in authorized keys"
fi

echo "We will want to execute sudo command remotely, which means turning off asking for password"
echo 'This can be done by this tutorial http://stackoverflow.com/a/10310407/781312'
echo 'This you have to do manually: '
echo -e "execute in new terminal: \n\tssh $destUser@$destIP\nPress enter when ready"
read 
echo 'run there sudo visudo'
read
echo 'change '
echo '    %sudo   ALL=(ALL:ALL) ALL'
echo 'to'
echo '    %sudo   ALL=(ALL:ALL) NOPASSWD: ALL'
echo "After this step you will be done."
read

listOfFiles=$(ssh $destUser@$destIP "sudo ls -a")

if [[ "$listOfFiles" != "" ]]; then 
echo "Sending by executing a command (in fact, receiving the file) on the remote machine"
echo 'Note that this command (due to " instead of '', see man bash | less -p''quotes'') is filled with values from local machine'
echo -e "Executing \n\t""identy=~/.ssh/id_rsa; sudo scp -i \$identy $(whoami)@$thisIP:$(readlink -f $localFilePath) $destFolderOnRemoteMachine"" \non remote machine"
ssh $destUser@$destIP "identy=~/.ssh/id_rsa; sudo scp -i \$identy $(whoami)@$thisIP:$(readlink -f $localFilePath) $destFolderOnRemoteMachine"
ssh $destUser@$destIP "ls ${destFolderOnRemoteMachine%\\\\n}/$(basename $localFilePath)"
if [[ ! "$?" -eq 0 ]]; then echo "error in validating"; else echo -e "SUCCESS! Successfully sent\n\t$localFilePath \nto \n\t$destString\nFind more at http://arzoxadi.tk"; fi
else
echo "something went wrong with executing sudo on remote host, failure"

fi
test30
  • 517
1

You can combine ssh, sudo and e.g. tar to transfer files between servers without being able to log in as root and without having permission to access the files with your user. This is slightly fiddly, so I've written a script to help. You can find the script here: https://github.com/sigmunau/sudoscp

or here:

#! /bin/bash
res=0
from=$1
to=$2
shift
shift
files="$@"
if test -z "$from" -o -z "$to" -o -z "$files"
then
    echo "Usage: $0 <from-host> <to-host> (file)*"
    echo "example: $0 server1 server2 /usr/bin/myapp"
    exit 1
fi

read -s -p "Enter Password: " sudopassword
echo ""
temp1=$(mktemp)
temp2=$(mktemp)
(echo "$sudopassword";echo "$sudopassword"|ssh $from sudo -S tar c -P -C / $files 2>$temp1)|ssh $to sudo -S tar x -v -P -C / 2>$temp2
statuses=("${PIPESTATUS[@]}")
sourceres=${statuses[0]}
if [ ${statuses[1]} -ne 0 -o $sourceres -ne 0 ]
then
    echo "Failure!" >&2
    echo "$from output:" >&2
    cat $temp1 >&2
    echo "" >&2
    echo "$to output:" >&2
    cat $temp2 >&2
    res=1
fi

rm $temp1 $temp2
exit $res
  • Welcome to Ask Ubuntu. Could you please include the script in your answer? I know it is unlikely but if the github repo was ever removed or the url changed then the answer would be void. It is better to include the script directly and leave the github repo as a source. – Michael Lindman Jul 01 '15 at 15:25
0

An older question, I know, but times change and so do some techniques. Just in case someone is still looking for a streamlined way to accomplish this.

Assumptions

  1. Your user on the server is a sudoers.
  2. You are running Windows 10.
  3. The files in /var/www should belong to user:group www-data:www-data

Concept

The concept is to combine remote commands over ssh and scp file transfers without relying on GUIs such as PuTTY or WinSCP. These commands can be run either from a Command Prompt or PowerShell. There are five main tasks to perform:

  1. Environment setup
  2. Transfer files to server
  3. Set remote file permissions
  4. Transfer between remote folders
  5. Cleanup

Tasks 3-5 can be performed in a single step. If you plan to do this often, leaving the environment set up will allow you to omit tasks 1 and 5.

Environment Setup

You may or may not already have a folder you can use as a temporary repository for the transfer. If not, you can run:

ssh user@server.tld "mkdir ~/wwwtemp"

Depending on your server's settings, you may or may not be prompted for the user's password/passphrase to authenticate the ssh session.

Once the session is authenticated, the mkdir ~/wwwtemp command will execute, then the ssh session will terminate, and you will be back at your prompt (Command Prompt or PowerShell).

Transfer Files to Server

The next thing to do is to transfer the files from the local Windows machine to the Ubuntu server using scp like so:

scp -r local\path user@server.tld:~/wwwtemp/

Depending on your server's authentication method, you may or may not need to enter a password/passphrase.

Permissions and Final Destination of Files

Once the file transfer has completed, you can run a series of commands over ssh like so:

ssh -t user@server.tld "sudo chown -R www-data:www-data ~/wwwtemp && sudo mv ~/wwwtemp/* /var/www/ && sudo rmdir ~/wwwtemp"

Again, depending on the authentication method of your server, you may or may not be prompted for a password/passphrase. Regardless of your authentication method, sudo will prompt you for the user's password. Unless, of course, you have disabled the password requirement when the user runs chown, mv and rmdir. See this question for guidance on how to do that.

This step covers tasks 3-5:

  1. sudo chown -R www-data:www-data ~/wwwtemp recursively sets the desired ownership on the files you just uploaded.
  2. sudo mv ~/wwwtemp/* /var/www/ moves the contents of the temporary repository to their final destination.
  3. sudo rmdir ~/wwwtemp removes the temporary repository. It is necessary to use sudo here since we changed the directory owner in task 3.

Of course, && separates each command. The commands will be performed in sequence. If you plan to keep the repository wwwtemp, you can omit the final command in the sequence.
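As a quick illustration of that sequencing (nothing here is specific to the server), && runs each command only if the one before it succeeded:

```shell
# Each command runs only if the previous one exited with status 0
mkdir /tmp/demo && echo "created" && rmdir /tmp/demo

# If a command fails, the rest of the chain is skipped
# (the trailing || true just keeps this demo going under `set -e`)
false && echo "this is never printed" || true
```

This is why a failed chown in the combined command above would prevent the mv and rmdir from running.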

Notes

You can omit && sudo rmdir ~/wwwtemp from the end of the final ssh command string if you would like to continue using the temporary repository in future. Doing so also means that you can omit the first ssh command each time you desire to transfer files to your server in this manner.

0
  1. In the remote machine, create a folder inside the desired folder, like this:

    cd /var/www
    sudo mkdir my_folder
    
  2. Change the ownership of my_folder to your own user:

    sudo chown -R replace_with_your_user:replace_with_a_user_group my_folder/ 
    

    This will recursively change the ownership inside my_folder.

  3. From your local machine, transfer the file you want to the newly created folder:

    scp /path/to/your/file replace_with_your_user@replace_with_the_remote_machine_ip:/var/www/my_folder
    
  4. When the transfer is done, you may want to move your file to the desired location on the remote machine and delete the created folder. In this case:

    sudo mv my_folder/replace_with_your_file /var/www
    sudo rm -Rf my_folder/
    
  • My answer will protect your machine from security issues, like setting the permission of /var/www to 777. Don't do this. – Hosana Gomes Mar 18 '24 at 18:05
-2
$ scp -i example.pem -r sourcefile.txt ubuntu@10.12.3.4:/example_folder

Before executing this command we need to give full permissions to example_folder:

$ sudo chmod 777 example_folder
ThunderBird
  • 1,955