Wondering if anyone has a feel for the power efficiency of older server hardware. I’m reading posts from people who say their R710 with lots of hard drives IDLES at 160W with 8 drives installed. So… if you take the hard drives out of the equation, it’s probably still around 120W. Is that just how inefficient old computers are? Kind of like how incandescent bulbs are less efficient than LED bulbs? How efficient is the R730 compared to the R710?
My 6-year-old desktop computer is 60W idle with a GPU, and 30W idle without the GPU. Seems like a huge difference. It’s something like $70 more per year to run an R710 than my old desktop with a GPU. Is that correct?
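For reference, here’s the back-of-the-envelope math behind that $70 figure. The electricity rate and the exact wattage gap are just my assumptions, so plug in your own numbers:

```python
# Rough annual cost difference between two machines sitting at idle.
# Wattages and the $/kWh rate are assumptions, not measurements.
server_idle_w = 120     # e.g. an R710 with the hard drives taken out of the equation
desktop_idle_w = 60     # e.g. my old desktop with a GPU
price_per_kwh = 0.13    # assumed electricity price in $/kWh

extra_kwh_per_year = (server_idle_w - desktop_idle_w) * 24 * 365 / 1000
extra_cost_per_year = extra_kwh_per_year * price_per_kwh

print(f"~{extra_kwh_per_year:.0f} kWh/year extra, roughly ${extra_cost_per_year:.0f}/year")
# ~526 kWh/year extra, roughly $68/year
```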
You have to understand the priorities of the rack server market.
#1 is dependability. It needs to keep running no matter what. Everything is built around overbuilding it: more cooling, dual CPUs, dual power supplies, lots of drives in RAID…
#2 is size. Colo space is expensive, so keep it small. Everything is densely packed, which is bad for airflow, and you get stacks of small fans running at the speed of sound.
#3 is performance. Yeah, you would think it was first, but it ain’t. That means 10k and 15k RPM spinning drives, and those are loud!
Way down the list is power… When you consider the cost of the hardware new, the cost of the colo space, and the cost of the people maintaining it, the power cost is next to nothing. The only thing less important than power consumption is noise, which isn’t even on the list…
Now, compare that with workstations. They have a lot of the same components, like Xeon CPUs, lots of RAM, RAID… But they sit on a desk, so noise, heat, and power are a real concern. And they are often overlooked in the used and refurb market. So for less money, you get server-like components and performance in a quieter and more power-friendly form factor.
If you are looking for power efficiency, stay away from rack mount servers. What’s wrong with using your old desktop computer?
What are your needs for a lab server?
My R720 with 12 SAS drives idled at around 225W.
I’ve cut the number of spinning-rust drives down to 6 and added 4 SSDs; now it idles around 120W.
I also enabled power capping in the BIOS at 250W. 2x Xeon v2 (forgot which model) and 128GiB RAM.
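If anyone wants to check or adjust the cap without going through the BIOS, something along these lines can work over IPMI/DCMI. This is just a sketch: it assumes ipmitool is installed and the BMC/iDRAC actually supports DCMI power limiting, and the 250W figure simply mirrors the cap above.

```python
# Sketch: read the current draw and (re)apply a 250W cap via ipmitool's DCMI extension.
# Assumes ipmitool is installed and the BMC supports DCMI power limiting; Dell boxes
# can usually also do this from the iDRAC web UI or the BIOS, as above.
import subprocess

def dcmi_power(*args: str) -> str:
    return subprocess.run(
        ["ipmitool", "dcmi", "power", *args],
        capture_output=True, text=True, check=True,
    ).stdout

print(dcmi_power("reading"))              # current power draw as the BMC reports it
dcmi_power("set_limit", "limit", "250")   # set the cap in watts
dcmi_power("activate")                    # turn power limiting on
print(dcmi_power("get_limit"))            # confirm the active limit
```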
I need to do that as well. My SAS drives idle at 7.7 watts and under load run at 11 watts a pop.
The 730 will be more efficient thanks to improvements in the chip design, and DDR4 uses less power than DDR3. There’s also an increase in efficiency with the v4s over the v3s.
The newer chips improve efficiency in two areas. a) what they do per clock cycle (IPC) and b) power consumption at low load or idle.
Chips like the v3/v4 in the 13th gen do lag on IPC and on pure clock speed, but for most homelab servers that doesn’t matter because they’re not heavily loaded. It’s the idle power consumption where they get crucified, and yes, the Xeon will use a lot more power just to tick over.
There could also be a design factor there. It’s okay if your desktop goes into a deep sleep to save power, but for a server, responsiveness is a big factor. You don’t know when the next access is coming, so dropping into a really low power mode isn’t a good thing, because the time to come back to full power could be critical.
If you search for an online power consumption calculator, such as one used for sizing a UPS, you can play with different configurations and see what numbers you come up with.
And yes, easily $70 more per year to run the 710.
But there can also be other factors. Each of those HDDs can consume between 5 and 10W depending on whether it’s idle or in use. Each memory module will use 2-3W, so if you’ve got 8 x 8GB in the Dell server that’s around 24W, whereas 4 x 16GB would use half that.
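A quick sketch of that kind of component-level estimate, using the same ballpark figures. These are rough guesses, not measurements, and the base figure for the board, CPUs and fans is an assumption:

```python
# Ballpark idle estimate built up from component counts.
# Per-component wattages are rough figures (HDD ~5-10W, DIMM ~2-3W), not measured values.
hdd_count, hdd_watts = 8, 7       # spinning drives, somewhere between idle and active
dimm_count, dimm_watts = 8, 3     # 8 x 8GB DDR3; 4 x 16GB would roughly halve this line
base_watts = 70                   # assumed: board, CPUs ticking over, fans, PSU overhead

estimate = base_watts + hdd_count * hdd_watts + dimm_count * dimm_watts
print(f"Estimated idle draw: ~{estimate}W")   # ~150W with these numbers
```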
Your Dell also has iDRAC, so even if the server is shut down, unless it’s unplugged you’ll still draw some power.
And speaking of power, PSU efficiency has greatly improved over time. Not sure what standard PSUs Dell provided with the 710 and whether they improved for the 730, but there’s a good chance your desktop has a more efficient unit.
Great explanation. Thank you.
80 Plus Gold for the 570W option and 80 Plus Silver for the 870W option. On the R710, that is.
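To put those ratings in perspective, here’s what the same DC load looks like at the wall for a few assumed efficiency points; the 100W load and the percentages are illustrative, since real efficiency varies with load:

```python
# Wall draw for a fixed DC load at a few assumed PSU efficiencies.
# The 100W load and the efficiency figures are illustrative, not spec values.
dc_load_w = 100
for label, eff in [("older ~80% unit", 0.80),
                   ("80 Plus Silver-ish", 0.85),
                   ("80 Plus Gold-ish", 0.90)]:
    print(f"{label}: ~{dc_load_w / eff:.0f}W at the wall")
# older ~80% unit: ~125W, Silver-ish: ~118W, Gold-ish: ~111W
```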
I guess I’m the only one who only powers on their Dell R730 when I’m trying to learn something new.
My server runs a LOT of things. Including numerous crypto nodes. So it is not a total loss.
Plus I do love all the room for activities and use it constantly for making videos on YouTube.
Trying to wrangle in my R730XD now; I have it fully populated with 12 HDDs. Did a downgrade to a more efficient CPU, but it didn’t help much at idle. Replacing the 750W PSU with the 495W one did.
CPU does help during load though… running 2x E5-2630Ls.
When the disks spin down it idles a lot lower, so I just ordered 2 SSDs for the flex bay to migrate the chattier applications over. It’s mostly a Plex/file server.
I’ll give you an interesting comparison, in my opinion. I have an HP DL380 G7 server with 128GB RAM and a 500GB SSD. I took out one CPU for lower power consumption, and at idle it draws 80W. When I run two Minecraft servers, Plex, XPenology (NAS), Win11, and Home Assistant, then after loading everything the server draws 85-90W. All in Proxmox. For comparison, I moved all the machines to a Dell OptiPlex with an i5-6500 and 32GB RAM; there the CPU sits at 85% load and the computer drew 95W the whole time with the same machines.

You have to answer whether the server is going to be doing something all the time or not. If you’re only going to be running something occasionally, then a desktop PC is better, but if something is going to be running constantly, then a server is better. Of course, I am not talking about a file server, because then it is not worth it.
I used an R710 when I first started homelabbing. The cost after six months of just powering it was equivalent to a completely separate server upgrade. Even an R720 is a massive improvement over the 710. Do yourself a favor: spend a little more to start, and your future self will thank you.
This. I coveted that iron and now I can’t wait to retire it!
I put 6 SSDs in mine and it sits around 170W doing absolutely nothing. I got it for free. I wouldn’t spend $20 on one now.
I replaced it with an AliExpress “4-port 2.5GbE router” box: Intel N6000, supports 64GB RAM, 4 x 2.5GbE ports, NVMe.
The storage isn’t record-setting (it’s on a single PCIe lane), but I’ll be damned if it can’t run 4-8 basic VMs.
$225 after adding 32GB RAM and a 1TB drive
Just to give you an idea, I’ve just built myself an AMD Ryzen 9 7900 (not the X) with: P600 GPU, SAS controller, SAS expander, Intel X710 10GbE card, Samsung 3.84TB U.2 NVMe PCIe 4.0 drive (scratch space), Samsung 2TB 980 Pro PCIe 4.0 (OS disk), and 64GB of DDR5 RAM.
It idles at ~100W, though I am working on reducing that. But the power/performance ratio is nuts compared to those old ‘clunkers’ like the R710.
I still have to migrate my storage drives from my old NAS, but those are 870 QVO drives (8TB) that idle at 30mW, so I don’t expect much extra idle usage from those.
I have an R930.
With 4x E7-8890 v4s, 9 SSDs, 2x SAS drives, 4x M.2 drives on a PCIe card, and 512GB of RAM across 32 sticks.
Pulls about 400W pretty much all the time, even at idle.
Holy cow. What’s driving half of that wattage? Is it the 32 sticks of RAM? Or the 4 CPUs?
Your server is 75% of my entire house power, including my server.
Well, a DDR4 RAM stick would use 2-4W, so 32 sticks is 64-128W alone. 4 CPUs don’t help either. :)
Given it’s only 512GB, it could be achieved with just 8 x 64GB modules, which would save a bit of power, but that’s also a lot of money to put into what is, after all, somewhat obsolete hardware. (And I say that running a v4 Xeon as well :P)
I forgot, there’s also a GTX 1650 in there as well.
But honestly, I’m fairly sure the majority of the power draw is the 4 CPUs.
96 cores and 192 threads on an older architecture is a bit of a power suck. If I had it all to do over again, I would for sure have gotten an EPYC chip instead.
Not only does mine idle at ~160W, but when it is ‘off’, the iDRAC is still running … and drawing around 20W.
Further, be aware that PCIe devices also need to support sleep states. If you add an old enterprise card, like a 10G NIC, to your server/machine, you might idle at twice the power (or more) that it would otherwise, because that device prevents the system from reaching the lower sleep states.
So the cheapest 10G card or HBA is not the best buy in the long run.
Out of curiosity, what’s your desktop setup? Particularly motherboard, CPU, RAM, PSU model and capacity? 30W idle without a GPU is exceptionally good for a 6 year old PC.
Nah. My old server from 2012 did 35W. Dual core i5, 16GB, 120GB SSD, 3x 3TB HDDs.
Anyway, I’m trying to understand this whole power consumption thing too. See my other post if you’re interested. My conclusion is that it just doesn’t generalize. There are a lot of factors involved, and nuanced ones: quality of components, firmware, etc. You can’t just say old == bad power consumption, new == good power consumption. Your 2012 server is a good example of that.
In my opinion, the best thing to do is just look up the specific model you intend to get and check what people’s experiences are for that particular model. And there is your expected power consumption.