
I am trying to create some sort of a sandbox for Linux systems (Ubuntu). My main goal is to find out which files are executed by a bash script of my choice, without actually letting it run them. I also want to prevent changes to the system, so the running script will think it has the ability to write to files when it actually doesn't. I don't want to run the bash script under low permissions, because then it will fail as soon as it tries to change something. Please don't suggest running it through a virtual machine; it's too slow for me.

The only thing that comes to my mind is hooking every write syscall, so that when the script tries to write to a file the system returns SUCCESS but does nothing, and hooking every execution syscall to capture all the programs the script executes and prevent them from actually running, again while returning success to the script. But I have no clue how to do this.

Any ideas? Thanks in advance.

Shtut

2 Answers


Try using an overlay, with a chroot. First, decide the path you want to chroot to and make sure it exists; do the same for the path you will overlay on / (which is where modifications will go):

mkdir -p /chroot
mkdir -p /tmp/tmproot

I chose a directory in /tmp/ as it's a tmpfs on my system (possibly inadvisable, but OK for me), so no changes should reach the disk. You could also mount a squashfs somewhere and use that as the overlay, but squashfs has the problem of being read-only, I think.
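
If /tmp is not a tmpfs on your machine, one option (assuming you have root; the size here is arbitrary) is to mount a dedicated tmpfs for the upper directory, so that nothing lands on disk:

mount -t tmpfs -o size=512M tmpfs /tmp/tmproot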

Now:

$ mount -t overlayfs -o lowerdir=/,upperdir=/tmp/tmproot overlayfs /chroot/
$ chroot /chroot/ /bin/bash -l
root:/$ touch test
root:/$ ls
...  sys  test  tmp  ...
root:/$ logout
$ ls /
...  sys  tmp  ...
$ ls /tmp/tmproot/
root  test

If you make the upperdir independent of a physical disk (perhaps by using tmpfs), this should protect the lowerdir.

Note the creation of a root folder in /tmp/tmproot - that's for my .bash_history: a copy of the original .bash_history was made there and then appended to.
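
On newer kernels (3.18 and later), the in-tree filesystem type is called overlay rather than overlayfs, and it also requires an empty workdir on the same filesystem as the upperdir. An equivalent mount would look something like this (the directory names are just an example):

mkdir -p /tmp/tmproot/upper /tmp/tmproot/work
mount -t overlay -o lowerdir=/,upperdir=/tmp/tmproot/upper,workdir=/tmp/tmproot/work overlay /chroot/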

muru
    +1 This is a really great idea! It circumvents the problem I described earlier; it is somewhat similar in spirit to the monitor I suggested, but much more elegant since the modified files are stored in /tmp/tmproot, i.e. there is an abstraction layer (overlayfs) that takes care of managing the modifications without affecting the actual root filesystem. Thus it's even possible to run a script that pseudo-modifies a file, then much later run a script which uses that file, as the modified files are not deleted. Note: you have to first create /tmp/tmproot and /chroot for the above to work. – Malte Skoruppa Nov 12 '14 at 14:22
  • @MalteSkoruppa Thanks. I had /tmp/tmproot and /chroot left over from earlier experiments, so forgot to mention that they should be created. – muru Nov 12 '14 at 17:55
    If you combine this with strace you can also trace all system calls and their parameters. – David Foerster Nov 12 '14 at 18:45
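
Building on the strace suggestion above, a rough sketch (the log path and the script path are just placeholders) that records every program launched from inside the chroot:

strace -f -e trace=execve -o /tmp/exec.log chroot /chroot/ /bin/bash /path/to/script.sh
grep execve /tmp/exec.log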

Here's an impossibility result: The behavior of a script may depend on information that it wrote to a file earlier. If you don't actually allow it to write to the file (but make the script believe it did write to the file), then you may influence the script's behavior in a way that wouldn't happen if you ran it "for real".

For instance:

#!/bin/bash

write_bit_to_file () {
    # write a random bit to a file
    echo $((RANDOM % 2)) >> file.txt
}

get_bit_from_file () {
    # read the random bit from the file
    tail -1 file.txt
}

# MAIN SCRIPT
#############

# ... do some stuff, save some info for later ...
write_bit_to_file
# ... do more stuff, retrieve info from file ...
if (( $(get_bit_from_file) )); then
    # access foo.txt and do something
    echo "I'm going to access foo.txt"
else
    # access bar.txt and do something
    echo "I'm going to access bar.txt"
fi

This is obviously a very artificial script, but I hope you get the point: if the script does not actually write to file.txt but believes it does, you will either get an error (e.g., if file.txt does not exist) or unexpected behavior (e.g., if file.txt exists but contains different information than expected, because write_bit_to_file did not actually write to it). If you ran the script for real, however, it would behave as expected (e.g., randomly access either foo.txt or bar.txt).

What may be possible is to write a monitor that executes your script, observes the files it writes to, makes a backup of these files before that happens, and at the end of the script restores those files to their original state. But that would be pretty nasty ;)
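
For illustration, here is a very rough sketch of that idea for a fixed, known list of files (the paths and the script name are hypothetical; discovering the written files automatically would need tracing, e.g. with strace, and this doesn't stop the writes from happening, it only reverses them afterwards):

#!/bin/bash
# Hypothetical monitor sketch: back up a fixed list of files, run the script,
# then restore the originals (or delete files the script created).
files_to_protect=(/etc/example.conf /home/user/data.txt)   # hypothetical paths

backup=$(mktemp -d)
for f in "${files_to_protect[@]}"; do
    [ -e "$f" ] && cp --parents -a "$f" "$backup"
done

./script.sh        # the script under test (hypothetical name)

for f in "${files_to_protect[@]}"; do
    if [ -e "$backup$f" ]; then
        cp -a "$backup$f" "$f"     # restore the original content
    else
        rm -f "$f"                 # the script created it, so remove it
    fi
done
rm -rf "$backup"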

Edit: In this answer, muru suggests a really good way to implement something akin to this monitor, and even better: it never affects your actual root filesystem, and it preserves the modifications made by scripts so they can be re-used later. That's the way to go! :-)

Malte Skoruppa