While the other answers are much better, handling almost all possible scenarios in the "approved" way (read: using `systemd` the way it was designed to be used), for the sake of completeness, here is my own approach, which I devised at a time when I wasn't even aware that `systemctl` would also work for any user, not just at boot (and as root).
Assuming the following:

- You've set up IPFS with a user specifically created for the purpose, usually `ipfs` (as described in your OP);
- `ipfs` has its home directory (which doubles as the working directory for the IPFS dæmon) set to `/data/ipfs/` (to follow your own example);
then you can just have a 'regular' system-wide configuration file at `/etc/systemd/system/ipfs.service` with the following contents (make sure all paths are absolute!):
```
[Unit]
Description=InterPlanetary File System (IPFS)
After=syslog.target
After=network.target
# Uncomment the following line if you use nginx as your reverse proxy to IPFS:
# After=nginx.service

[Service]
Type=simple
User=ipfs
Group=ipfs
WorkingDirectory=/data/ipfs/
ExecStart=/usr/local/bin/ipfs daemon
Restart=always
# IPFS usually consumes all the resources it can grab; try to get it to play
# nicer with the rest of the system:
Nice=19
Environment=USER=ipfs HOME=/data/ipfs/

[Install]
WantedBy=multi-user.target
```
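Once that file is in place, a sudoer can enable and start the service in the usual way; a minimal sketch (the unit name `ipfs` matches the file above):

```
sudo systemctl daemon-reload
sudo systemctl enable --now ipfs
```

`enable --now` is just a shortcut for `enable` followed by `start`.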
The "trick" to allow IPFS run under the ipfs:ipfs
UID/GID is just to make it explicit on the ipfs.service
configuration file. This is the same that is done for some services (if you have installed Redis, check its configuration on /etc/systemd/system/redis.service
, for instance).
Now, the above configuration presumes that there is no "real" user logging in as `ipfs`; it's purely a "maintenance" user, such as `mail`, `postfix`, `bind`, and so on. "Real" in this context means "someone who logs in on the console or via a remote SSH connection and thus requires a full working environment". Personally, and even more so with IPFS, I would totally block the `ipfs` user from existing outside `/etc/passwd` (set to a `nologin` shell), and the above configuration file should suffice to allow that.
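For reference, here is a sketch of how such a user might be created; the exact paths are assumptions (on some distributions `nologin` lives at `/sbin/nologin` instead):

```
sudo useradd --system --home-dir /data/ipfs --create-home \
     --shell /usr/sbin/nologin ipfs
```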
There is a moderately good reason for having a "functional" `ipfs` user, though: it would be the one used to get the Go source code and compile it locally, for instance, thereby guaranteeing that everything IPFS-related (not only the data, but even the code itself!) is completely isolated inside the `ipfs` user's home directory.
In that case, you likely won't have the `ipfs` binary at `/usr/local/bin/ipfs`, but rather somewhere in `~ipfs/go/bin` or wherever you install the compiled binary. Just remember to use absolute paths for it!
It's worth repeating that, under such a scenario, the other answers give much better suggestions on how to configure `systemd` properly under a specific (unprivileged) user. However, in some cases a mixed approach may be desirable, e.g. blocking the `ipfs` user from managing `systemd` directly for its own purposes, but still allowing it to remotely access files in the data directory or even to compile IPFS locally from the sources. What I do in this scenario is to have the `ipfs.service` file located somewhere inside the `ipfs` home directory (I avoid using the same location as the user-level `systemd` configuration, or else everything becomes a complete mess to manage :) ), e.g. `~ipfs/systemd/ipfs.service`.
Then, from a sudoer account, I just link the local unit file into the "main" `systemd`:

```
sudo systemctl link ~ipfs/systemd/ipfs.service
sudo systemctl enable ipfs
```
and then...

```
sudo systemctl start ipfs
sudo journalctl -u ipfs -f
```

... etc.
Because the `ipfs` user can change the local file (even though it's under the control of the 'main' `systemd` and not the user-specific one), it's worth remembering to do a `sudo systemctl daemon-reload` every time you change the `ipfs.service` file as the `ipfs` user.
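In practice, after editing the unit file as `ipfs`, a sudoer would run something like this sketch for the changes to take effect:

```
sudo systemctl daemon-reload
sudo systemctl restart ipfs
```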
Although I'm well aware that all of the above is not a good way of doing things, it does work, and I've used it extensively when working on projects contained inside a single user's home directory (as opposed to being under, say, `/usr/share/...` or `/usr/local/...`). For the sake of expedience, I edit the unit service file locally and then just upload everything to the remote server running those services, using the unprivileged user (and thus avoiding the need to have SSH allow the root account to log in remotely, even if it's just via SFTP). It also means that any collaborators working just on that project will not require access to any user other than the unprivileged one (so long as someone does the `sudo systemctl daemon-reload` on their behalf).
Finally, note that, since in this scenario the unprivileged user might not have a shell assigned to them (for security reasons), all the usual `.profile` or `.bashrc` magic might not be available to them, all for the sake of limiting the attack surface for potentially malicious hackers. In my personal case, I have some 20+ projects running under that scenario, each with its own user, none of which has a shell account, much less a password; authentication is made solely via SSH certificates to those accounts, which only have permission to use SFTP anyway.
As a side note, all these projects of mine are (not by coincidence) written in either PHP (which runs everywhere) or Go, since the latter is probably the easiest cross-compiler around for a strongly-typed programming language that fully compiles code to native binaries for any architecture. Project developers just cross-compile locally (if they're not using Ubuntu on their desktops, that is; most aren't) and upload server-native executables to that specific user. The rest is up to `systemd` to figure out.
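For illustration, cross-compiling for a typical Linux server from any other platform looks like this (the `GOOS`/`GOARCH` values and the `myproject` name are assumptions for the example):

```
GOOS=linux GOARCH=amd64 go build -o myproject .
```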
I hope that all the above might be useful for someone searching on Google for an answer to this question :)