Title. Just had this baseless yet possible idea in my head and I’d like to know how wrong it is, since afaik “nobody” has absolutely zero permissions… other than the ones given by the user. Pretty sure I’m missing something vital or important, but… I’m completely fine with being called dumb every now and then.

Thanks in advance.

  • Max-P@lemmy.max-p.me · 43 points · 9 months ago

    Nobody is not a special user like root; it’s a regular user that just happens to not have permissions on anything. It can still read anything world-readable, write anywhere that’s world-writable (0777), and use /tmp. It’s no different from making a new user, except that by convention it isn’t used to run things. You shouldn’t run things as that user, because that eventually just makes it the user that runs everything. It’s meant to be used by NFS (root squashing), and you should always prefer making a new user instead.
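    If it helps to see that concretely, here’s a minimal Python sketch that just prints the passwd entry for nobody; the values in the comments are typical distro defaults, not guarantees, and there’s nothing special in the account itself:

    ```python
    # Minimal sketch: inspect the "nobody" account like any other user.
    import pwd

    entry = pwd.getpwnam("nobody")
    print("uid:  ", entry.pw_uid)    # usually 65534
    print("gid:  ", entry.pw_gid)    # usually 65534 (nobody/nogroup)
    print("home: ", entry.pw_dir)    # often /nonexistent or /
    print("shell:", entry.pw_shell)  # often /usr/sbin/nologin
    ```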

    I would just use a rootless container; that way the whole system is invisible to Wine apart from the tiny slice of files you mounted into it.

    • GustavoM@lemmy.worldOP · 5 points · 9 months ago

      For a second I really thought there was “something else” behind the “nobody” user, considering it’s a “typically suggested” user within Docker containers (even in some “known” commands/Dockerfiles). So all I need to do to create a user with the least amount of privileges is to not give it a home directory or a shell? Eh.

      Thanks a lot nonetheless.

      • Max-P@lemmy.max-p.me · 12 points · 9 months ago

        Yep, pretty much. I guess it makes sense for containers, because the only thing running there is that app anyway, although most good containers set up their own user. For example, the node container has a node user with ID 1000, which is more typical for Linux systems. Docker in particular is often used by Windows/Mac users who aren’t necessarily familiar with the ins and outs of Linux and just want it to work, which, in a container, is usually fine. With rootless containers, even root in the container is basically useless anyway because it truly runs as a fake ID on the host.
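        To make the “fake ID” part concrete: inside a rootless container you can read /proc/self/uid_map to see how in-container UIDs translate to host UIDs. A rough Python sketch (the numbers in the trailing comment are illustrative, not guaranteed):

        ```python
        # Print the UID mapping of the current user namespace.
        # Each line of /proc/self/uid_map is: <inside> <outside> <count>
        def print_uid_map(path="/proc/self/uid_map"):
            with open(path) as f:
                for line in f:
                    inside, outside, count = (int(x) for x in line.split())
                    print(f"container UIDs {inside}..{inside + count - 1}"
                          f" -> host UIDs {outside}..{outside + count - 1}")

        print_uid_map()
        # On the plain host this prints an identity mapping; inside a rootless
        # container, UID 0 typically maps to the invoking user's own UID and
        # the rest to a subordinate range like 100000+.
        ```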

        But yes, even in Linux circles there can be dubious advice or just bad habits. The thing with doing things right is that it takes a lot of time, and sometimes you really just want to make it work. Just look at how many people sudo whatever whenever they get a permission error instead of getting into udev rules or whatever. Especially with the growing interest in Linux, thanks in huge part to Valve and Proton, I’ve seen very dubious “just make it work” habits ported over from Windows. Those guides are appealing too: one will give you 3 sudo commands, the other will do it properly but then you have a dozen or two commands to set up users and grant permissions and whatnot. The second guide, although the better one, isn’t as appealing as the first, especially when you’re lost, you just want the thing to work, and you’re looking for simplicity.


        For a lot of simple use cases, dropping privileges to nobody is simple and easy, with no need to create new users. It’s not horribly insecure, and it’s better than root or your own user in terms of blast radius. If there’s just one service occasionally running as nobody, it’s really no big deal; you just shouldn’t run everything as nobody. Plus, people have a tendency not to reevaluate their problem when it grows: it may start with a simple web server that doesn’t need anything, but then you start needing to do more, and since the “run the web server” part is mentally marked as done and final, people start granting nobody access to more things instead of taking a step back, reevaluating, and upgrading the service to a dedicated user with dedicated privileges. That leads to privilege creep for a user that’s supposed to have none.
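        For reference, the drop itself is just a couple of syscalls. A minimal Python sketch, assuming the process starts as root (say, to bind a low port); do_unprivileged_work() is a hypothetical placeholder for the actual service:

        ```python
        import os
        import pwd

        def drop_privileges(username="nobody"):
            # Order matters: groups and GID must be changed while we still
            # have the privileges to do so; setuid() comes last and is
            # irreversible.
            entry = pwd.getpwnam(username)
            os.setgroups([])          # drop supplementary groups
            os.setgid(entry.pw_gid)
            os.setuid(entry.pw_uid)

        if os.geteuid() == 0:
            drop_privileges("nobody")  # or, better, a dedicated service user
        # do_unprivileged_work()
        ```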

        The main issue with running more than one thing as nobody is that, on Linux, users can debug their own processes. So maybe your app running as nobody gets exploited, but since you also run NGINX as nobody, the attacker can now hook NGINX and dump the TLS certs from it, even if the actual files are owned by root and NGINX dropped privileges: it loads the certs as root and then drops privileges, but it still needs them in memory to operate.
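        Whether that same-UID attach is allowed depends on the kernel’s ptrace rules; on kernels with the Yama LSM you can check the policy. A small sketch (just a check, not an exploit):

        ```python
        from pathlib import Path

        scope_file = Path("/proc/sys/kernel/yama/ptrace_scope")
        if scope_file.exists():
            scope = int(scope_file.read_text().strip())
            if scope == 0:
                print("ptrace_scope=0: processes can attach to others with the same UID")
            else:
                print(f"ptrace_scope={scope}: attaching is restricted beyond plain UID checks")
        else:
            print("Yama not enabled: classic same-UID ptrace rules apply")
        ```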

        • dack@lemmy.world · 1 up / 1 down · edited · 9 months ago

          With rootless containers, even root in the container is basically useless anyway because it truly runs as a fake ID on the host.

          I’ve seen this repeated a lot, but I’m not really convinced running as root inside containers is a good/safe thing to do. User namespaces can provide some protection for the host, but that does nothing for the rest of the files inside the guest. For example, consider server software with an arbitrary file write vulnerability. If the process is running as a low-privilege user, exploiting the vulnerability might not really get you anywhere. If it’s running as root, it’s basically a free pass to root privilege and arbitrary code execution within the container.

          • Max-P@lemmy.max-p.me · 1 point · 9 months ago

            That’s why I mentioned rootless containers specifically. In those, root is at most the user running the container. It can’t do a whole lot, because it’s not really root. Each user in /etc/subuid gets a range of dummy IDs above 65535 specifically for that user’s containers. From outside the container, everything shows up as owned by that user, so root in the container can’t even result in root-owned files on the host; no suid trickery or anything.
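            If you want to see your own range, /etc/subuid uses a simple user:start:count format. A rough sketch (the username “alice” and the example values are hypothetical; with podman’s default rootless mapping, container UID 0 is your own UID and UIDs 1..count land in this range):

            ```python
            def subuid_range(username, path="/etc/subuid"):
                # Each line looks like "alice:100000:65536"
                with open(path) as f:
                    for line in f:
                        if not line.strip():
                            continue
                        user, start, count = line.strip().split(":")
                        if user == username:
                            return int(start), int(count)
                raise LookupError(f"no subordinate UID range for {username}")

            start, count = subuid_range("alice")
            print(f"container UIDs 1..{count} map to host UIDs {start}..{start + count - 1}")
            ```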

            Of course you should still run as a non-root user inside the container too; I was just pointing out that in rootless containers the blast radius is much smaller because of that feature. You definitely still don’t want root, for many other reasons.

  • nyan@lemmy.cafe · 3 points · 9 months ago

    If you’re looking for some way to restrict what a few specific programs can do without going to containers, consider firejail. It will likely do a better job than a home-rolled solution.

  • mvirts@lemmy.world · 7 up / 4 down · 9 months ago

    It’s a great idea, and basically the goal of using Docker or snaps. Linux containers attempt to strip processes of every privilege except what they need to function. You can’t get away with giving zero permissions unless you want to give up saving files and using graphics hardware… which is kind of necessary for a web browser.

  • chayleaf@lemmy.ml · 3 points · 9 months ago

    Executable ownership doesn’t matter; what matters is the rights of the user running the binary, and whatever sandboxing you have configured. So use Flatpak or Firejail.
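    A tiny illustration of that point (the path is just an example of a root-owned binary on most distros): unless the setuid bit is set, the process runs with the invoking user’s UID, not the file owner’s.

    ```python
    import os
    import stat

    binary = "/usr/bin/env"  # typically owned by root
    st = os.stat(binary)
    print("file owner UID:", st.st_uid)                        # usually 0
    print("setuid bit set:", bool(st.st_mode & stat.S_ISUID))  # usually False
    print("process runs as UID:", os.geteuid())                # your UID, not the owner's
    ```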

  • steph@lemmy.clueware.org · 2 points · 9 months ago

    A process running as any user will be able to exploit a userspace vulnerability, whoever that user is. SELinux, chroot, and cgroups/containerization add a layer of protection to this, but any vulnerability that bypasses them will be as exploitable from nobody as from any other local user. It will protect a user’s files from some access attempts, but it will fail to prevent any serious attack. And as usual when it comes to security, a false sense of security is worse than no security at all.

    Remember that some exploits exist that can climb out of a full-blown virtual machine to the virtualisation host; finding a user escalation vulnerability is even more likely.

    The only real protection is an up-to-date system, sane user behavior and maybe a little bit of paranoia.