I’m currently running a Xeon E3-1231 v3, and after ~10 years it’s getting long in the tooth: it supports only 32GB of RAM and has only 16 PCIe lanes. I’ve been butting up against these platform limitations for a couple of years now, and I’m ready to upgrade.

I’m hoping to future-proof the next system to also last 8-10 years (where reasonable, given advancements in tech and improvements in efficiency), but I’m hitting a wall finding CPU candidates.

In a perfect world, I’d like an Intel CPU with an iGPU for QuickSync (hardware acceleration for Frigate/Immich/Jellyfin), AND the 40+ PCIe lanes that the Intel Xeon Scalable CPUs offer.

With only my minimum required PCIe devices, I’ve already surpassed the 20 lanes available on desktop CPUs with an iGPU:

  • Dual M.2 for Proxmox ZFS mirror (guest storage), in addition to the boot drive (8 lanes)
  • LSI HBA (8 lanes)
  • Dual SFP+ NIC (8 lanes)

Future proofing:

High priority

  • Dedicated GPU (16 lanes)

Low priority

  • Additional dual m.2 expansion (8 lanes)
  • USB expansions for simplified device passthrough (Coral TPU, Zigbee/Zwave for Home Assistant, etc.) (4 lanes per card) - this assumes the motherboard comes with at least 4 ports already
  • Coral TPU PCIe (4 lanes?)
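Adding that up (a quick sketch; the lane counts are the ones I listed above, not vendor specs):

```python
# Tally of the PCIe lane budget listed above.
minimum = {"dual M.2 mirror": 8, "LSI HBA": 8, "dual SFP+ NIC": 8}
future = {"dGPU": 16, "extra dual M.2": 8, "USB card": 4, "Coral TPU (PCIe)": 4}

base = sum(minimum.values())
total = base + sum(future.values())
print(f"minimum: {base} lanes")            # already past the ~20 on desktop parts
print(f"with all future-proofing: {total} lanes")
```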

Is there anything that fulfills both requirements? Am I being unreasonable or overthinking it? Is there a solution that adds GPU hardware acceleration to the Xeon Silver line without significantly increasing power draw?

Thanks!

  • Cort@lemmy.world · 15 hours ago

    Sounds like it may be time to consider Threadripper, combined with an Intel Arc/Battlemage GPU, if you want more PCIe lanes and QuickSync-style transcoding.

    • Max-P@lemmy.max-p.me · 9 hours ago

      Mine’s got all the slots filled and I think I still have spare PCIe lanes, Threadrippers are nuts.

  • jetA · 1 day ago

    An iGPU is just an integrated GPU on the CPU die. That is going to use PCIe lanes for communication.

    Wiring up an iGPU, as a CPU architect you have two options:

    • Direct interconnects (low latency, no extra die space, no extra heat)
    • MUXed interconnects (added latency, complexity, die space, and heat), but even then you’d have to choose between using the iGPU and having those external PCIe lanes anyway

    I think most designers have gone with direct interconnects.

    Sounds like your real requirement is just more PCIe lanes, and I believe EPYC chips provide those in abundance.

    https://www.amd.com/en/products/processors/chipsets/am5.html

    You can look at the PCIe lanes available by model here.

    Also, you can use Newegg to search motherboards by usable PCIe lanes.

    • thumdinger@lemmy.world (OP) · 1 day ago

      Thanks. I’ll be the first to admit a lack of knowledge with respect to CPU architecture - very interesting. I think you’ve answered my question - I can’t have QuickSync AND lanes.

      Given I can’t have both, I suppose the question pivots to comparing the performance-per-watt and number of simultaneous streams of an iGPU with QuickSync vs. a discrete GPU (likely either NVIDIA or Intel Arc), considering a dGPU can increase power usage by 200W+ under load (27c/kWh here). There’s a strong chance I’m mistaken, though, and have misunderstood QuickSync’s impressive capabilities. I will keep reading.
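      Back-of-the-envelope on that power delta (assuming the 200W figure and 27c/kWh above, worst case of 24/7 load):

```python
# Rough yearly running cost of an extra 200 W, worst case 24/7 under load.
extra_kw = 0.2          # assumed dGPU power delta (kW)
tariff = 0.27           # $/kWh, as quoted
kwh = extra_kw * 24 * 365
print(f"{kwh:.0f} kWh/yr -> ${kwh * tariff:.2f}/yr")
```

Real cost would be lower since the card idles most of the day.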

      I think the additional lanes are of greater value for future proofing. I can just lean on CPU without HWaccel. Thanks again!

      • Mister Bean@lemmy.dbzer0.com · 1 day ago

        If power consumption is a concern, then I’d recommend the Arc A310, which draws at most 30 watts. I’ve been using one for a while and it easily handles several 4K streams without issue.

        • thumdinger@lemmy.world (OP) · 21 hours ago

          Thanks. This is a pretty compelling option. I hadn’t looked at the entry-level Arc cards, but when it comes to encode/decode it seems all the tiers are similar. 30W is okay, and it’s not a hard limit or anything, just nice to keep the bills down!

  • lemming741@lemmy.world · 1 day ago

    Have you considered AMD? The x70-class boards basically have an extra I/O die as the chipset on the board. I have this board with a 7700X, which is way overkill. I put an A380 in the bottom slot for Frigate and Plex, but honestly the AMD iGPU handles Plex just fine. I never tried Frigate with the iGPU.

    https://pg.asrock.com/mb/AMD/X670E%20PG%20Lightning/index.asp#Specification

    • PCIe slots: x16 @ 5.0, x16 @ 4.0, x1 @ 4.0, x1 @ 4.0 (full-length x16 slot)
    • E-key for Coral
    • M-key: Gen5 x4, Gen3 x4, Gen4 x2, Gen4 x4
    • 4× SATA3

    • thumdinger@lemmy.world (OP) · 21 hours ago

      I hadn’t considered AMD, really only due to the high praise I’m seeing around the web for QuickSync, and AMD falling behind both Intel and NVIDIA in hardware acceleration. I’ll certainly consider it if there isn’t a viable option with QuickSync anyway.

      And you’re right, the southbridge provides additional PCIe connectivity (on both AMD and Intel), but bandwidth has to be considered. Connecting an HBA (x8), 2× M.2 SSDs (x8), and a 10Gb NIC (x8) over the same x4 link for something like a TrueNAS VM (ignoring other VM I/O requirements), you’re going to be hitting the NIC and the HBA and/or SSDs (think ZFS cache/logging) at maximum simultaneously, saturating the link and causing a significant bottleneck, no?

      • lemming741@lemmy.world · 19 hours ago

        The chipset link is PCIe 4.0 x4 and daisy-chained, so about 8GB per second. My use case is way more casual than what you’re looking for.

        I think what you’re up against is Intel locking features behind a paywall, like they’ve done with desktop ECC and Hyper-Threading through the years.

        • thumdinger@lemmy.world (OP) · 15 hours ago

          Thanks, I’ll need to have a look at how the chipset link works, and how the southbridge multiplexes the incoming PCIe lanes down from the 24 in my example to the 4 available. Despite this, and considering these devices are typically PCIe 3.0, running at maximum spec they could swamp the link with 3× the data it has bandwidth for (24 lanes of PCIe 3.0 is 23.64GB/s, vs 4 lanes of PCIe 4.0 being 7.88GB/s).
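          As a sketch of that arithmetic (per-lane rates derived from 8 GT/s with 128b/130b encoding for PCIe 3.0, and double that for 4.0):

```python
# PCIe per-lane throughput in GB/s after 128b/130b encoding overhead.
lane_gbs = {"3.0": 8 * 128 / 130 / 8,   # ~0.985 GB/s
            "4.0": 16 * 128 / 130 / 8}  # ~1.969 GB/s

devices = 24 * lane_gbs["3.0"]  # HBA x8 + 2x M.2 x8 + 10Gb NIC x8, all PCIe 3.0
uplink = 4 * lane_gbs["4.0"]    # chipset uplink: PCIe 4.0 x4
print(f"devices {devices:.2f} GB/s vs uplink {uplink:.2f} GB/s "
      f"({devices / uplink:.1f}x oversubscribed)")
```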