So I'm doing a tower build, mostly to try to achieve something quieter than what a rack server would offer. I didn't think about CPU coolers until everything arrived, so I went to my local Best Buy; kinda digging the dual water cooler aesthetic. Still waiting on the RAM to arrive, but I also have a question.
Currently I have the CPU header split between the radiator fan and the pump. Is that OK, or should I split the system fans with the pump instead? I guess I could also get a Molex pump controller, but I don't think I need one.
IIRC the tubes on the radiator should always be at the top.
No, any air bubbles trapped in there rise to the top.
You can't bleed an AIO the way you can a custom loop.
I'm confused. You built this from the start with the intention of making it quiet, but didn't think about the CPU cooler until you'd ordered everything else? Isn't the CPU cooler typically the primary noisemaker? If the whole point is silence, the choice of water cooling was almost made for you already.
If you intend to leave that radiator mounted at the back, you at least need to rotate it so the pipes aren’t at the top.
Difficult times for the right pump?
Cheap way to get 56 threads. Wouldn't a 32-core EPYC 7601 beat both of these processors combined, though?
Sure, but the EPYC 7601 is infinitely more expensive.
The chip alone is $200 used and a compatible board is $400 minimum.
Meanwhile, you can get a pair of E5-2680 v3 for $15 and the board shown here for $70.
My CPUs were $5 each.
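Worth putting those numbers side by side. A quick back-of-the-envelope cost-per-thread comparison using the prices quoted above (thread counts are my assumptions: the dual-Xeon build is quoted at 56 threads upthread, and the EPYC 7601 is 32 cores / 64 threads):

```python
# Rough cost-per-thread math from the prices quoted in this thread.
# Board prices and CPU prices are the used-market figures mentioned above.
builds = {
    "dual Xeon E5-2680 v3": {"cpus": 15,  "board": 70,  "threads": 56},
    "EPYC 7601":            {"cpus": 200, "board": 400, "threads": 64},
}

for name, b in builds.items():
    total = b["cpus"] + b["board"]
    print(f"{name}: ${total} total, ${total / b['threads']:.2f} per thread")
```

Roughly $85 total for the dual-Xeon setup versus $600 for the EPYC platform, so the per-thread cost difference is close to 6x even before counting RAM.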
I just lurk here for the tech pics, please forgive me: Why does that computer have two CPUs?
Dual-socket systems are an easy way to get more cores, more memory capacity, or more memory channels without buying a more expensive CPU SKU, upgrading to the next platform generation (which was often still years away back when these systems were run by their former corporate owners), or building a multi-node cluster. 4-socket and 8-socket systems also exist, but they're uncommon on the second-hand market because most motherboards and CPUs don't support that configuration.
They are also a good way to get more PCIe lanes without having to go to HEDT hardware. Mainstream CPUs tend to have nowhere near enough PCIe connectivity for modern needs, imho.
…Dual-socket systems are almost always either server or HEDT themselves.
They also introduce different issues.
Be sure of what you're after before you move to dual-socket for PCIe lanes, and consider how it might affect what you're already doing.
Most likely for virtualization. Though in this configuration, it would be a challenge to provide enough storage and RAM to support more than, say, 16 cores, i.e. a standard single-CPU build.
Why? NVMe drives produce more throughput and IOPS than entire SANs did 7-8 years ago.
The first SAN I ever worked with was about 20 years ago.
It cost close to a million dollars and was the size of two refrigerators.
Just bought an NVMe drive off of Amazon for $21 that's faster :O