
I have a script that is running as root, and I am trying to have it set up a user service to run an IPFS daemon. The problem is that I need to enable the service as the user rather than as root. It usually works after a restart, but I'd like to avoid that if I can.

The service unit file is located at ~/.config/systemd/user/ipfs.service

It contains:

[Unit]
Description=IPFS daemon

[Service]
# optional path to ipfs init directory if not default ($HOME/.ipfs)
Environment="IPFS_PATH=/data/ipfs"
ExecStart=/usr/local/bin/ipfs daemon
Restart=on-failure

[Install]
WantedBy=default.target

(I took this code from here: https://github.com/ipfs/go-ipfs/tree/master/misc )

If I run these commands as the user, it works correctly:

systemctl --user enable ipfs
systemctl --user start ipfs

The problem is that my script is running as root, and I can't figure out how to get this to run as the user. I have tried this so far:

# Enable linger so IPFS can run at boot
loginctl enable-linger $USER_ACCOUNT

# Enable the service to run at boot
sudo -u $USER_ACCOUNT systemctl --user enable ipfs

# Start the service now
sudo -u $USER_ACCOUNT systemctl --user start ipfs

Unfortunately, with this the service does not start and I get this error message:

Failed to connect to bus: $DBUS_SESSION_BUS_ADDRESS and $XDG_RUNTIME_DIR not defined (consider using --machine=@.host --user to connect to bus of other user)

Once the script has finished, and I reboot, the service starts fine but I would like to avoid the user having to reboot if I can. Any help would be appreciated.

  • Does it matter where the script runs? If the answer is no, try running it from another TTY. Sometimes this will work when it seems like it shouldn't. systemd should be available from anywhere. It is a login shell, so the environment requirements will be different. Honestly, I don't know if it will work or not, but I'm curious. – Nate T Nov 08 '21 at 18:47
  • In https://github.com/eriksjolund/user-systemd-service-actions-workflow/blob/main/.github/workflows/demo.yml#L18 I had to add a sleep 1 and XDG_RUNTIME_DIR=/run/user/$UID. If you are running as root, you could also try `systemd-run --quiet --machine=$USER_ACCOUNT@ --user --collect --pipe --wait systemctl --user enable ipfs` – Erik Sjölund Jan 15 '22 at 10:28

4 Answers


The quick solution

Assuming someuser uses bash as their login shell, add the following exports to ~someuser/.profile [1]:

export XDG_RUNTIME_DIR="/run/user/$UID"
export DBUS_SESSION_BUS_ADDRESS="unix:path=${XDG_RUNTIME_DIR}/bus"

Then, a user with root/sudo privileges can interact with someuser's systemd by wrapping the command with runuser:

sudo runuser someuser -l -c "systemctl --user enable ipfs"
sudo runuser someuser -l -c "systemctl --user start ipfs"

runuser someuser -l -c "printenv" can help to troubleshoot these and other exported environment variables.
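Putting it together, the root-run provisioning script from the question could be wrapped along these lines (a sketch, assuming bash, that the target user's ~/.profile contains the exports above, and that `enable_user_service` and the `sleep 1` settle delay are my own additions, not standard tooling):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical helper for a root-run provisioning script:
# $1 = target user, $2 = user service name.
enable_user_service() {
  local user="$1" service="$2"
  # Keep the user's systemd instance running without an active login
  loginctl enable-linger "$user"
  # Brief pause so the user manager can create /run/user/<uid>
  sleep 1
  # -l starts a login shell, so ~user/.profile exports XDG_RUNTIME_DIR
  # and DBUS_SESSION_BUS_ADDRESS before systemctl runs
  runuser "$user" -l -c "systemctl --user enable ${service}"
  runuser "$user" -l -c "systemctl --user start ${service}"
}

# Usage (as root): enable_user_service someuser ipfs
```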


[1]: putting these exports into ~someuser/.bashrc instead may prove ineffective, as the default .bashrc is often set up to exit early when it detects it is running non-interactively; see https://unix.stackexchange.com/a/257613/20230
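Also note that $UID is a variable set by bash and zsh but not defined by POSIX; if someuser's .profile might be read by a plain sh, a portable variant (my suggestion, not part of the original answer) derives the uid with id -u:

```shell
# Portable variant for ~someuser/.profile: $UID is a bash/zsh variable
# and may be unset under a plain POSIX sh, so derive the uid explicitly.
XDG_RUNTIME_DIR="/run/user/$(id -u)"
DBUS_SESSION_BUS_ADDRESS="unix:path=${XDG_RUNTIME_DIR}/bus"
export XDG_RUNTIME_DIR DBUS_SESSION_BUS_ADDRESS
```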

The clean solution

In general, setting XDG_RUNTIME_DIR and DBUS_SESSION_BUS_ADDRESS manually is more of a workaround; it is preferable to properly log in as user someuser. A proper login automatically sets these environment variables and avoids polluting someuser's environment with settings from the originating user.

Such a proper login can be accomplished with the machinectl command, which on Ubuntu 22.04 LTS (jammy), for example, is available via the systemd-container apt package.

With machinectl available, any suitably privileged user can get a proper login into someuser's shell, with all the environment variables correctly set up:

sudo machinectl shell someuser@
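machinectl shell also accepts an absolute command path and arguments after the target, so a root script can run one-off commands in a fully initialized login environment without opening an interactive shell. A sketch (`enable_via_machinectl` is a hypothetical helper name, not standard tooling):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical helper: run systemctl --user for another user via a
# proper machinectl login (requires the systemd-container package on
# Debian/Ubuntu).
enable_via_machinectl() {
  local user="$1" service="$2"
  # The command path after "user@" must be absolute
  machinectl shell "${user}@" /usr/bin/systemctl --user enable "$service"
  machinectl shell "${user}@" /usr/bin/systemctl --user start "$service"
}

# Usage (as root): enable_via_machinectl someuser ipfs
```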
Abdull
  • Thanks for the explanation! I didn't have machinectl installed on 22.04 — but that was quickly solved with apt install systemd-container, of course. As I used that command to login with a (purely internal; not externally accessible) user, it instantly filled in the two env variables with the correct info, as well as making sure /run/user/<uid> was properly created. systemctl --user list-units immediately started to work, too! Just a tiny detail: for consistency's sake, your last command should be shown as sudo machinectl shell someuser@ instead of the lxc@ which baffled me! – Gwyneth Llewelyn Sep 30 '23 at 11:09
  • thanks, I fixed it now. – Abdull Oct 01 '23 at 09:19
  • thanks @Abdull for this solution. question - do you know if machinectl is typically the only way these env vars are set? or are they set automatically in certain linux shell sessions (e.g. login, interactive, etc)? – william_grisaitis Oct 02 '23 at 19:35
  • @william_grisaitis, so far machinectl shell ... is the only way I found for automatically initializing these env vars. https://unix.stackexchange.com/a/477049/20230 , https://github.com/systemd/systemd/issues/825#issuecomment-127917622 , and https://www.reddit.com/r/linuxadmin/comments/rxrczr/in_interesting_tidbit_i_just_learned_about_the/ provide some insights about the rationale of why su(do) does not set these env vars. It boils down to security reasons, as su's historical purpose, scope, and intent has become too vague for the modern concepts introduced by systemd and others. – Abdull Oct 23 '23 at 11:24
  • Ah huh. Thanks for sharing! – william_grisaitis Oct 23 '23 at 17:33
  • If you can ssh into the user account (as opposed to a local su), the variables are also correctly set and the issue goes away without the need for a workaround. – sxc731 Jan 12 '24 at 09:27

These variables are user-specific. They are set by the user instance of systemd when a user logs in. If your script runs as a system service, then of course it does not have access to these variables for a specific user.

If you use systemctl --global enable, the service will be enabled for all users.
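A sketch of how that looks from a root script (paths assume systemd defaults; ipfs is the unit from the question; `global_enable` is a hypothetical helper name). Note that enabling globally only makes the unit start when each user's manager next starts, e.g. at their next login:

```shell
# Hypothetical helper: enable a unit for all users at once.
global_enable() {
  local service="$1"
  # --global operates on /etc/systemd/user/, the system-wide counterpart
  # of each user's ~/.config/systemd/user/
  systemctl --global enable "$service"
  # Show the symlink that --global enable creates:
  ls -l "/etc/systemd/user/default.target.wants/${service}.service"
}

# Usage (as root): global_enable ipfs
```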

  • systemd isn't just a binary in the bin directory. It is a literal system. There aren't user-specific instances. – Nate T Nov 08 '21 at 19:00
  • There are two systemd processes. The first one is run as /sbin/init and starts global services. The second one is launched when a user logs in and starts user services. The two variables in question here are set by the second process. – user2249675 Nov 08 '21 at 21:11

I faced this issue after installing docker-desktop following https://docs.docker.com/desktop/install/ubuntu/ on Ubuntu 22.04 LTS.

The following solved the issue. First, check the permissions on the downloaded .deb file in your 'Downloads' directory, or simply set them:

sudo chmod 755 docker-desktop-<version>-<arch>.deb

Next add your user to docker group:

sudo usermod -aG docker $USER

Activate changes to the group:

newgrp docker

Now try executing command to launch docker-desktop:

systemctl --user start docker-desktop

To fix the docker-hub login issue from docker-desktop:

https://docs.docker.com/desktop/get-started/#credentials-management-for-linux-users

Ref: https://docs.docker.com/engine/install/linux-postinstall/

Codistan

While the other answers are much better, handling almost all possible scenarios in the "approved" way (read: using systemd as it was designed to be used), for the sake of completeness here is my own approach. I devised it at a time when I wasn't even aware that systemctl would also work for any user, not just at boot (and as root).

Assuming the following:

  1. You've set up IPFS with a user specifically created for the purpose, usually ipfs (as described in your OP);
  2. ipfs has its home directory (which doubles as the working directory for the IPFS dæmon) set to /data/ipfs/ (to match your own example).

then you can just have a 'regular' system-wide configuration file at /etc/systemd/system/ipfs.service with the following contents (make sure all paths are absolute!):

[Unit]
Description=InterPlanetary File System (IPFS)
After=syslog.target
After=network.target
# Uncomment the following line if you use nginx as your reverse proxy to IPFS:
# After=nginx.service

[Service]
Type=simple
User=ipfs
Group=ipfs
WorkingDirectory=/data/ipfs/
ExecStart=/usr/local/bin/ipfs daemon
Restart=always
# IPFS usually consumes all resources it can grab; try to get it to play
# nicer with the rest of the system:
Nice=19
Environment=USER=ipfs HOME=/data/ipfs/

[Install]
WantedBy=multi-user.target

The "trick" to allow IPFS to run under the ipfs:ipfs UID/GID is simply to make it explicit in the ipfs.service configuration file. This is the same approach used by several services (if you have Redis installed, check its configuration in /etc/systemd/system/redis.service, for instance).

Now, the above configuration presumes that there is no "real" user logging in with ipfs — it's purely a "maintenance" user, such as mail, postfix, bind, etc. etc. "Real" in this context means: "someone who logs in on the console or via a remote SSH connection and thus requires a full working environment"; personally — and even more so with IPFS! — I would totally block the ipfs user from existing outside /etc/passwd (set to a nologin shell), and the above configuration file should suffice to allow that.

There is a moderately good reason for having a "functional" ipfs user, though; it would be the one used to get the Go source code and compile it locally, for instance, therefore guaranteeing that everything IPFS-related — not only the data, but even the code itself! — is completely isolated inside the ipfs user's home directory.

In that case, it's highly likely that you won't have the ipfs binary at /usr/local/bin/ipfs, but rather somewhere like ~ipfs/go/bin or wherever you install the compiled binary. Just remember to use absolute paths for it!

It's worth repeating that, under such a scenario, the other answers give much better suggestions on how to configure systemd properly under a specific (unprivileged) user. However, in some cases, a mixed approach may be desirable, e.g. blocking the user ipfs from managing systemd directly for its own purposes, but still allowing it to remotely access files on the data directory or even locally compiling IPFS from the sources. What I do in this scenario is to have the ipfs.service located somewhere inside the ipfs home directory (in my case, I avoid using the same location as the user-level systemd configuration, or else everything becomes a complete mess to manage :) ), e.g. ~ipfs/systemd/ipfs.service.

Then, from a sudoer, I just do a link from the "main" systemd to the local unit file:

sudo systemctl link ~ipfs/systemd/ipfs.service
sudo systemctl enable ipfs

and then...

sudo systemctl start ipfs
sudo journalctl -u ipfs -f
... etc ...

Because the ipfs user can change the local file, even though it is under the control of the 'main' systemd and not the user-specific one, it's worth remembering to run sudo systemctl daemon-reload every time you change the ipfs.service file as the ipfs user.

Although I'm well aware that all the above is not a good way of doing things, it does work, and I've used it extensively when working on projects contained inside a single user's home directory (as opposed to being under, say, /usr/share/... or /usr/local/...), where, for the sake of expedience, I edit the unit service file locally and then just upload everything to the remote server running those services, using the unprivileged user (and thus avoiding the need to allow the root account to log in remotely over SSH, even if just via SFTP). It also means that any collaborators working on that project will not require access to any account but the unprivileged one (so long as someone runs sudo systemctl daemon-reload on their behalf).

Finally, note that, since in this scenario the unprivileged user might not have a shell assigned to them (for security reasons), all the .profile or .bashrc magic might not be available to them — all for the sake of limiting the attack surface for potentially malicious hackers. In my personal case, I have some 20+ projects running under that scenario, each with its own user, none of which has a shell account, much less a password; authentication is made solely via SSH certificates to those accounts, which only have permission to use SFTP anyway.

As a side note, all these projects I've got — not by coincidence — are written in either PHP (which runs everywhere) or in Go, since the latter is probably the easiest cross-compiler around for a strongly-typed programming language which fully compiles code to native binaries for any architecture. Project developers just cross-compile locally (if they're not using Ubuntu on their desktops, that is; most aren't) and upload server-native executables to that specific user. The rest is up to systemd to figure out.

I hope that all the above might be useful for someone searching on Google for an answer to this question :)