So I've never really paid attention to the power I consume running various servers over the years, but now that I've cleaned up and consolidated I'm trying to gauge my power draw compared to others.

I run a Proxmox host with 13 HDDs, 6 NVMe drives, and 2 U.2 NVMe drives, a Quadro P2200, an RTX A2000, an RTX 4070, an EPYC CPU, an HBA for the HDDs, and a 4x4 NVMe carrier card.

A Synology 2422 with 4 SSDs and 2 HDDs

A Synology expansion with 8 HDDs

I draw about 500 watts at the wall for all of this, and I think that's the lower end since the GPUs weren't in use. That includes a couple of switches as well. It runs very quiet and very cool.

What do other people consume?

  • Oscarcharliezulu@alien.topB · 11 months ago

    All these comments are making me think about how I'd build a minimum-power homelab. I was looking at 3-year-old servers, but now I'm thinking of building a powerful system that draws very little at idle; I'm less worried about draw under load, since at that point it's about getting the job done.

  • TheSoCalledExpert@alien.topB · 11 months ago

    I draw about 150 watts at idle.

    1x PVE server (Ryzen 5, 32GB RAM, 2x SSD, 8x HDD)

    1x HP T620+ firewall

    1x RPi 2 backup Pi-hole

    1x switch

    1x UniFi AP

    1x spectrum modem

  • anothercorgi@alien.topB · 11 months ago

    About 2.9e-7 gigawatts.

    That covers a PVR (1 HDD), a server (4 HDDs), and all the wall warts, standalone clocks, switches, CPE, battery chargers left plugged in, TV and monitor standby power, …
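
    For scale, that converts from Back to the Future units to an ordinary wall figure of

        2.9 \times 10^{-7}\ \mathrm{GW} \times 10^{9}\ \mathrm{W/GW} = 290\ \mathrm{W}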

  • mthode@alien.topB · 11 months ago

    Looks like a steady 330 W: a single storage host with 7 spinning disks and 4 SSDs, 4x RPi 4 with SSDs running k3s, and the network stack (EdgeRouter 8-XG, two 8-port PoE switches, and a 24-port ES-24).

    Changes I should make: reduce the drive count / upgrade the storage host in a couple of years, and switch to a single, larger PoE switch (2.5G, 24-48 ports), also in a couple of years.

  • Firestarter321@alien.topB · 11 months ago

    850 watts is my normal server rack load; however, with cameras and other switches I'm currently at 1,100 watts 24/7.

    Add another 600 watts if I turn everything on in the server rack.

  • TheIlluminate1992@alien.topB · 11 months ago

    Network equipment includes 4 PoE cameras, a UniFi UDM Pro, a 48-port PoE switch, fans, and 2 APs.

    On the server side I run a Dell R730xd with 2x MD1200s in standby, as I don't have disks for them yet, and I pull about 300 W on average.

  • audioeptesicus@alien.topB · 11 months ago

    4100 VA or about 2650 W…

    Not including my office setup; that's just what's in the rack. An MX7000 chassis with 7x MX740c blades, redundant 40G core switches, a Fibre Channel SAN, two 48-bay NAS units with 10TB drives, and 240V power with a 5000W UPS.

    Not including the AC for the garage that the rack is in.

    And no, I am not a masochist.

    • JonohG47@alien.topB · 11 months ago

      How on Zod’s green earth were you able to get your power factor to be that awful?!
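
      (For anyone following along: power factor is real power over apparent power, so from the figures quoted above it works out to roughly

          \mathrm{PF} = \frac{P}{S} = \frac{2650\ \mathrm{W}}{4100\ \mathrm{VA}} \approx 0.65

      where modern active-PFC server supplies typically manage 0.9 or better, hence the ribbing.)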

      • pseydtonne@alien.topB · 11 months ago

        Follow-up question: how is your hearing? An actual blade setup would be loud as bombs inside a house.

        • VaguelyInterdasting@alien.topB · 11 months ago

          Actually, the MX7000 is not terrible on noise comparatively. Not silent, obviously, but no worse than a typical 1U server.

          Now, having that many compute modules may make that thing loud…

          • audioeptesicus@alien.topB · 11 months ago

            Yep, it's not so bad. I typically only have 4 or so blades powered on at a time. The MX9116N IOMs I have do require more cooling, though; had I gone with the lesser ones, it'd probably be a little quieter.

  • EpicEpyc@alien.topB · 11 months ago

    ~550 W: a Nexus 9K (48p 10G, 6p 40G) and 3x Dell R630, each with 2x 10c E5-2640 v4, 384GB RAM, 1x 960GB NVMe SSD, and 5x 1.92TB SATA SSDs.

    Though it may change soon… not for the better

  • Fabri91@alien.topB · 11 months ago

    Two small Synology units (a DS120j and a DS218+ with attached USB drives for backup), a five-port gigabit switch, and a modem/router, in addition to a Proxmox host (HP 800 G3 Mini), run at about 40 W with the hard drives spun down, and somewhere around 50 W when they're being accessed.

    The Synology units automatically shut down at night, at which point the power draw drops to 24W.

    It all comes down to about 0.85-0.9 kWh per day.

    Assuming a price of 30c/kWh, even this comparatively small power use adds up to 10€/month or so.
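
    As a quick sanity check on that figure (my arithmetic, taking the upper end of the quoted daily use over a 30-day month):

        0.9\ \frac{\mathrm{kWh}}{\mathrm{day}} \times 30\ \mathrm{days} \times 0.30\ \frac{\text{€}}{\mathrm{kWh}} \approx 8.1\ \text{€}

    which lands in the same ballpark as the quoted ~10€.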

  • Brilliant_Sound_5565@alien.topB · 11 months ago

    About 50 watts. I downsized a few years ago: got rid of the unnecessarily large servers and moved to Intel NUCs plus a 4-disk NAS for centralized storage. There was no need to run large servers at home; I play with them at work.

    I run Proxmox on the NUCs with my servers in VMs; each NUC has 16GB of RAM and performance is fine.

  • wireframed_kb@alien.topB · 11 months ago

    My entire rack idles around 160 W, which includes switches, a router, 3 cameras, 2 hotspots, and a server with a Xeon E5-2680 v4, 100GB of RAM, and 50TB of storage, along with a GTX 1650 Super for transcoding etc.

      • wireframed_kb@alien.topB · 11 months ago

        It's nothing special. The server is custom-built in an Inter-Tech 4U case with 8 hot-swap bays, using an X99 motherboard.

        Networking is Ubiquiti, with a PoE-capable switch to provide power to access points and cameras.

        A big difference came from making Proxmox use a power plan that lets the CPU go idle or clock down, which I think was good for 20-25 watts. My Windows 10 VM is less responsive over RDP but otherwise doesn't seem affected, and the Linux-based VMs don't seem to care.
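
        Purely as a sketch of the kind of change described above, not the commenter's actual setup: on a Linux host (Proxmox is Debian-based) the "power plan" maps to the cpufreq scaling governor exposed through sysfs. The "powersave" target below is my assumption, since the comment doesn't name the exact plan; run as root.

            #!/usr/bin/env python3
            """Sketch: switch every CPU's frequency-scaling governor via sysfs."""
            from pathlib import Path

            # "powersave" is an assumption; "ondemand" or "schedutil" also let the CPU clock down.
            TARGET = "powersave"

            def set_governor(target: str = TARGET) -> None:
                cpu_root = Path("/sys/devices/system/cpu")
                for gov in sorted(cpu_root.glob("cpu[0-9]*/cpufreq/scaling_governor")):
                    current = gov.read_text().strip()
                    if current != target:
                        # Takes effect immediately; does not persist across reboots.
                        gov.write_text(target)
                        print(f"{gov.parent.parent.name}: {current} -> {target}")

            if __name__ == "__main__":
                set_governor()

        To make a change like this survive reboots, you'd typically hook it into a systemd unit or use the cpufrequtils package instead.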