One challenge with the 4090 specifically is that I don't believe there are any dual-slot variants out there; even my 4080 is advertised as a triple-slot card (and actually takes four slots because Zotac did something really, really annoying with the fan mounting). You could liquid-cool it and swap the bracket, but then you have the unenviable task of mounting sufficient radiators and support equipment (pump, reservoir, etc.) in a rackmount server. That assumes you're looking at something 2-3U, since you mentioned an R730; if you're willing to do a whitebox 4U build it's a lot more doable.
Of course, if money is no object, ditch plans for the GeForce cards and get the sort of hardware that's made to live in 2U/3U boxes, i.e. current-gen Tesla (or Quadro, if you want display outputs for whatever reason). If money is an object, get last-gen Teslas. I tossed an old Tesla P100 (Pascal/10-series) into my Proxmox server to replace a 2060S that had half the VRAM, and for LLMs I didn't notice an obvious performance decrease (i.e. it still inferences faster than I can read). In a rack server you won't even have to mess with custom shrouds for cooling, since the server's own fans provide more than enough directed airflow.
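As an aside, once the card is passed through to a VM, a quick sanity check that it shows up with the VRAM you expect is to query it via NVML. This is just a rough sketch, assuming the nvidia-ml-py package (which provides the pynvml module) and the NVIDIA driver are installed inside the guest:

```python
# Minimal sketch: list each visible NVIDIA GPU and its total VRAM.
# Assumes nvidia-ml-py (pynvml) and the driver are installed in the VM.
from pynvml import (
    nvmlInit,
    nvmlShutdown,
    nvmlDeviceGetCount,
    nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetName,
    nvmlDeviceGetMemoryInfo,
)

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        name = nvmlDeviceGetName(handle)
        mem = nvmlDeviceGetMemoryInfo(handle)
        # mem.total is reported in bytes
        print(f"GPU {i}: {name}, {mem.total / 1024**3:.1f} GiB VRAM")
finally:
    nvmlShutdown()
```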
It doesn't need bifurcation; it's just four lanes straight to the BCM5719 chip. It also looks like it supports SR-IOV, if that's something you care about.
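If you do want to play with SR-IOV, the usual route on the host is the kernel's generic sysfs interface. Rough sketch below (run as root; the interface name "eno1" and the VF count are just placeholders for however the BCM5719 ports enumerate on your box):

```python
# Hedged sketch: spawn SR-IOV virtual functions on a NIC port via sysfs.
# "eno1" is a placeholder for one port of the BCM5719.
from pathlib import Path

iface = "eno1"
sysfs = Path(f"/sys/class/net/{iface}/device")

# How many VFs the device/driver advertises as supported.
total = int((sysfs / "sriov_totalvfs").read_text())
print(f"{iface}: up to {total} virtual functions supported")

# Ask the driver to create VFs; each VF then shows up as its own PCI
# device that can be passed through to a VM.
(sysfs / "sriov_numvfs").write_text(str(min(4, total)))
```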