Considering a lot of people here self-host both private stuff, like a NAS, and public stuff, like websites, how do you approach segmentation in the context of virtual machines versus dedicated machines?

This is generally how I see the community approaching this:

Scenario 1: Fully Isolated Machine for Public Stuff

Two servers: one for the internal stuff (NAS) and another for the public stuff (websites, email, etc.), totally isolated from your LAN. Preferably the public machine gets a public IP separate from your LAN, and its traffic doesn’t go through your main router, e.g. a switch between the ISP ONT and your router that also has a cable connected to the isolated machine. This way the machine is completely isolated from your network and not dependent on it.

Scenario 2: Single server with VM exposed

A single server hosting two VMs, one to host a NAS along with a few internal services running in containers, and another to host publicly exposed websites. Each website could have its own container inside the VM for added isolation, with a reverse proxy container managing traffic.
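
As a rough illustration (container names and images here are only placeholders, not a recommendation), the inside of the “public” VM could look something like this:

    docker network create web                        # internal network shared by the proxy and the sites
    docker run -d --name site1 --network web nginx   # each website in its own container, no ports published to the host
    docker run -d --name site2 --network web nginx
    docker run -d --name proxy --network web -p 80:80 -p 443:443 caddy
    # only the reverse proxy publishes ports; it reaches site1/site2 by container name over the "web" network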

For networking, I typically see two main options:

  • Option A: Completely isolate the “public-facing” VM from the internal network by using a dedicated NIC in passthrough mode for the VM;
  • Option B: Use a switch to deliver two VLANs to the host: one for the internal network and one for public internet access. In this scenario, the host would have two VLAN-tagged interfaces (e.g., eth0.X) and bridge one of them with the “public” VM’s network interface (see the sketch below). Here’s a diagram for reference: https://ibb.co/PTkQVBF
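
To illustrate the host side of option B (the interface name and VLAN ID are just examples), the bridge setup could be something like this with iproute2:

    ip link add link eth0 name eth0.20 type vlan id 20   # VLAN-tagged sub-interface for the "public" VLAN
    ip link add br-public type bridge
    ip link set eth0.20 master br-public
    ip link set eth0.20 up
    ip link set br-public up
    # the VM's virtual NIC gets attached to br-public; the host is given no IP on eth0.20 or br-public,
    # so it only forwards frames and never participates in the public network itself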

In the second option, a firewall would run inside the “public” VM to drop all inbound traffic except HTTP(S). The host would simply act as a bridge and would not participate in the network in any way.
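
Inside the VM, that firewall could be as simple as a default-drop nftables ruleset; a minimal sketch (which ports to open is obviously up to you):

    nft add table inet filter
    nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
    nft add rule inet filter input iif lo accept
    nft add rule inet filter input ct state established,related accept
    nft add rule inet filter input tcp dport '{ 80, 443 }' accept   # HTTP/HTTPS only
    # everything else inbound is dropped by the chain policy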

Scenario 3: Exposed VM on a Windows/Linux Desktop Host

Windows/Linux desktop machine that runs KVM/VirtualBox/VMware to host a VM that is directly exposed to the internet with its own public IP assigned by the ISP. In this setup, a dedicated NIC would be passed through to the VM for isolation.

The host OS would be used as a personal desktop and contain sensitive information.
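
For reference, passing a NIC through with KVM/libvirt could look roughly like this (the PCI address, VM name and XML file are made up, and IOMMU has to be enabled in firmware and on the kernel command line):

    lspci | grep -i ethernet                # find the NIC, e.g. "03:00.0 Ethernet controller ..."
    virsh nodedev-detach pci_0000_03_00_0   # detach it from the host driver
    virsh attach-device public-vm nic-passthrough.xml --config
    # nic-passthrough.xml holds a <hostdev> entry pointing at PCI 0000:03:00.0; from then on
    # the VM owns the NIC and the host never touches its traffic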

Scenario 4: Dual-Boot Between Desktop and Server

A dual-boot setup where the user switches between an OS for daily usage and another for hosting stuff when needed (with a public IP assigned by the ISP). The machine would have a single Ethernet interface, and the user would manually switch the network cable between: a) the router (NAT/internal network) when running the “personal” OS and b) a direct connection to the switch (and ISP) when running the “public/hosting” OS.

For increased security, each OS would be installed on a separate NVMe drive, and the “personal” one would use TPM-backed full disk encryption to protect sensitive data in case the “public/hosting” system were ever compromised.

The theory here is that, if properly done, the TPM won’t release the keys to decrypt the “personal” OS disk while the machine is booted into the “public/hosting” OS.
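
On a systemd-based distro, that binding could be done with systemd-cryptenroll; a sketch, assuming the “personal” LUKS volume lives on /dev/nvme0n1p2 and that the chosen PCRs actually differ between the two boot chains:

    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=4+7 /dev/nvme0n1p2
    # PCR 4 measures the EFI boot binaries and PCR 7 the Secure Boot state; booting the
    # "public/hosting" OS runs different boot binaries, so PCR 4 changes and the TPM
    # refuses to unseal the key for the "personal" volume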

People also seem to combine these scenarios with Cloudflare Tunnels or reverse proxies on a cheap VPS.
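
As a rough example of the tunnel variant (the tunnel and host names are placeholders), the exposed machine only makes outbound connections, so nothing has to be opened at the router:

    cloudflared tunnel login
    cloudflared tunnel create homelab
    cloudflared tunnel route dns homelab www.example.com
    cloudflared tunnel run homelab
    # which hostname maps to which local service is defined in the tunnel's config file (ingress rules)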


What’s your approach / paranoia level :D

Do you think using separate physical machines is really the only sensible way to go? How likely do you think VM escape attacks and VLAN hopping or other networking-based attacks are?

Let’s discuss how secure these setups are, what pitfalls one should watch out for on each one, and what considerations need to be addressed.

  • schizo@forum.uncomfortable.business · 1 day ago

    What’s your concern here?

    Like who are you envisioning trying to hack you, and why?

    Because frankly, properly configured and permissioned (that is, stop using root for everything you run) container isolation is probably good enough for anything that’s not a nation state (barring some sort of issue with your container platform and it having an escape), and if it is a nation state you’re fucked anyways.

    But more to your direct question: I actually use DNS scopes and nginx ACLs to separate public from private. I have a *.public and a *.private CNAME which point to either my external or internal IP, and ACLs in the nginx site configuration to scope where access is allowed.

    You can’t access a *.private host from outside the network, but you can access either from inside it, so (again, barring nginx having an oopsie somewhere) it’s reasonably secure and not accessible, and it leaves a very clear set of logs (which I’m pulling in and parsing for anything suspicious, with automated alerting if I find anything I wouldn’t otherwise expect). I’m happy enough with that level of security when paired with the services’ built-in authentication options.
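
    Roughly (the subnet and names here are just examples), the private side of a site config looks something like:

        server {
            listen 443 ssl;                        # cert directives omitted
            server_name app.private.example.com;   # the *.private CNAME resolves to the internal IP
            allow 192.168.1.0/24;                  # LAN only
            deny all;                              # everyone else gets refused
            location / {
                proxy_pass http://127.0.0.1:8080;  # the actual service behind the proxy
            }
        }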

    • TCB13@lemmy.world (OP) · 1 day ago

      What’s your concern here?

      No specific concern; my own setup is basically scenario 2, option B. I was just listing the most common options and getting feedback on what others think about them.

      I personally believe setup 2B is more than enough if a nation state isn’t after you, but who knows? :)

      • schizo@forum.uncomfortable.business · 1 day ago

        Then the correct answer is ‘the one you won’t screw up’, honestly.

        I’m a KISS proponent with security for most things, and uh, the more complicated it gets the more likely you are to either screw up unintentionally, or get annoyed at it, and do something dumb on purpose, even though you totally were going to fix it later.

        Pick the one that makes sense, is easy for you to deploy and maintain, and won’t end up being so much of a hindrance that you start making edge-case exceptions, because those are the things that will 100% bite you in the ass later.

        I’ve seen so many people with otherwise sane, if maybe over-complicated, security designs (people who do actually know what they’re doing) turn off a firewall, enable port forwarding, set a weak password, or change permissions to something too permissive and just end up getting owned, simply because they wandered off from their own standards once what they implemented originally became a pain to deal with in day-to-day use.

        So yeah, figure out your concerns, figure out what you’re willing to tolerate in terms of inconvenience and maintenance, and then make sure you don’t ever deviate from there without stopping and taking a good look at what you’re doing, what could happen if you do it, and coming up with a worst-case scenario first.

        • TCB13@lemmy.world (OP) · 1 day ago

          the more complicated it gets the more likely you are to either screw up unintentionally, or get annoyed at it, and do something dumb on purpose, even though you totally were going to fix it later. (…) Pick the one that makes sense, is easy for you to deploy and maintain

          This is an interesting piece of advice.

          Anyway, maybe I wasn’t clear enough: I’m not looking to pick a setup. I’ve been doing 2B for a very long time, I work in tech, and I know my way around. I’m just gauging what others are doing and maybe finding a few blind spots :).

          Thanks.