In that case, just copy someone else’s homework. Look up what Supermicro is using for wattage in 1U non-GPU servers, and use those numbers.
You’ll definitely need something with fast PCIe lanes for NVMe: either PCIe 4.0 x4 paired with a very fast SSD, or a platform with a lot of PCIe 3.0 lanes.
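For a rough sense of the numbers, the usable link bandwidth works out like this (Python sketch; these are theoretical maxima after 128b/130b encoding overhead, and real SSDs land somewhat below them):

```python
# Per-lane transfer rates in gigatransfers/s for PCIe gens using 128b/130b encoding.
GT_PER_LANE = {"3.0": 8.0, "4.0": 16.0}

def pcie_bandwidth_gbps(gen: str, lanes: int) -> float:
    """Approximate usable GB/s for a PCIe link of the given generation and width."""
    return GT_PER_LANE[gen] * (128 / 130) / 8 * lanes

print(f"PCIe 3.0 x4: {pcie_bandwidth_gbps('3.0', 4):.2f} GB/s")
print(f"PCIe 4.0 x4: {pcie_bandwidth_gbps('4.0', 4):.2f} GB/s")
```

So a 3.0 x4 link tops out just under 4 GB/s, which is why a top-end NVMe drive wants either a 4.0 x4 slot or more 3.0 lanes.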
Is your RAM on the QVL? Ryzen’s notorious pickiness about RAM carries over to TR and EPYC, too. One of the first things before POST and BIOS splash display is memory training. If it can’t get past that, something about memory needs adjustment. Have you tried downclocking it?
How close do you want to get? Budgeting about 200W per socket for “normal”-ish CPUs and 400-450W per socket for latest EPYC should get you in the right range.
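As a back-of-envelope sketch (Python; the ~20% overhead figure for fans and PSU losses is an assumption, not a spec):

```python
def socket_power_estimate(sockets: int, watts_per_socket: int, overhead: float = 1.2) -> float:
    """Rough wall-draw budget: per-socket allowance plus ~20% (assumed) for fans/PSU losses."""
    return sockets * watts_per_socket * overhead

# Dual-socket box with "normal"-ish CPUs at 200W/socket:
print(f"{socket_power_estimate(2, 200):.0f} W")
```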
I’ve had drive failures bring down entire systems. Replace sda and see if the problems continue.
E3-1230v2 is completely different from an E5v2 or an E5v3. The E3s were all 4-core dies; E5s were built from two different dies, both with well more than 4 cores. The chipset, the memory controller, and the PCIe lane count all differ between E3 and E5. You can’t directly compare an E3 to an E5.
Idle power can be estimated by summing components (CPU+RAM+chipset+drives+GPU), but it’s also majorly affected by BIOS power settings, how deep the C-states go, memory speed/voltage, and PSU efficiency at low load.
Your measured power consumption of 100W at low/no load is about what I’d expect. Can you reach lower? Maybe with the right combination of settings, and switching to slower/lower voltage memory, and making sure that the GPU is also throttling down, you could reach 65-80W idle. But I wouldn’t expect less than that.
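If you want to sanity-check your own build, the component-sum estimate is just addition. The per-component numbers below are illustrative guesses for a low-power build, not measurements:

```python
# Hypothetical idle draws in watts; substitute figures from your own components' datasheets.
idle_watts = {
    "cpu": 25,      # package power with C-states working
    "ram": 10,
    "chipset": 10,
    "drives": 20,   # a few spinning disks
    "gpu": 15,      # assumes it actually clocks down
    "psu_loss": 15, # conversion losses at low load
}
print(f"Estimated idle: {sum(idle_watts.values())} W")
```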
Tried a different cable, just in case you’re capping out at 100Mbps?
Which protocol are you using to test file transfer? If it’s anything involving encryption, that CPU will be hurt real bad, since it has no AES-NI.
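If you’re on Linux, you can check whether your CPU advertises the instruction yourself. A small Python sketch (the path and parsing assume a Linux-style /proc/cpuinfo):

```python
def has_aes_ni(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if the CPU flags line lists 'aes' (Linux only; False if unreadable)."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "aes" in line.split()
    except OSError:
        pass
    return False

print(has_aes_ni())
```

If that comes back False, expect encrypted transfers (SSH/SFTP, HTTPS, etc.) to be CPU-bound well below line rate.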
Xeon E3’s are unbuffered DIMMs only.
Based on chip family, this is the only one: https://www.gigabyte.com/Enterprise/Server-Motherboard/MX33-BS0-rev-1x
Get an ATX case with lots of drive bays, a SAS HBA (and SAS expander if you want lots of drives), cheap mobo/CPU/RAM and OS of your choice. Spin it up, play with it, settle on an OS you like, and upgrade components as needed.
OPNsense, VyOS, pfSense, TNSR. TNSR is extremely fast at routing, with some stringent hardware requirements. VyOS is Linux-based and very fast at routing virtualized in KVM. The *senses are FreeBSD-based and have their quirks, but if all of your routing is ~Gbit symmetrical, you should be fine.
Your rails are too long/your rack is too shallow. You either need rails designed for a shallower rack, or to somehow shorten the outer rails to fit your existing rack.
USB by its very nature requires a host device. The connection method you described (PC—>DP—>DP/USBC—>Display) I can only assume works because the display’s USB-C port switches to DP-Alt mode, functioning solely as a DP input port. In this case, there is no host device.
USB-C hubs/docking stations, which provide many ports (HDMI, Ethernet, etc.), require that any display signal be transmitted as data on the USB bus. In this case, DP-Alt mode cannot be used, and the PC is the host device. It goes without saying that display images are very bandwidth-intensive, and when using such a hub, you want to maximize the upstream bandwidth. 5Gbps hubs are OK, 10Gbps are better, 40Gbps (USB4) is optimal.
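To see why the upstream bandwidth matters so much, here’s the uncompressed-video math (Python sketch; the 24 bpp and ~20% blanking overhead are assumptions, and compression like DSC changes the picture):

```python
def display_gbps(width: int, height: int, hz: int, bpp: int = 24, blanking: float = 1.2) -> float:
    """Approximate uncompressed video bandwidth in Gbit/s, with ~20% (assumed) blanking overhead."""
    return width * height * hz * bpp * blanking / 1e9

print(f"1080p60: {display_gbps(1920, 1080, 60):.1f} Gbit/s")
print(f"4K60:    {display_gbps(3840, 2160, 60):.1f} Gbit/s")
```

A 4K60 stream alone blows past a 5 Gbps hub’s entire budget, which is why docks driving high-res displays over USB data either need 10/40 Gbps links or lean on compression.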
YMMV on a setup that goes something like:
PC—>USBC—>KVM—>USBC—>Display
or
PC—>USBC—>KVM—>USBC hub—>DP/HDMI—>Display
I don’t know if a KVM knows how to handle this kind of situation. What does your USB switch advertise as capabilities?
DP to USB-C adapter probably uses DP-alt mode and not true USB-C data transfer.
You could desolder it and solder a new one on, or possibly even solder one on top of the existing LED. Same as replacing any other on-board component.