Cool rack! The setup looks very neat and clean. Definitely a big step forward. Nicely done!
It can take up to 138GB for the boot device, but you can also change that during installation: https://kb.vmware.com/s/article/81166
A lot really. Mostly related to the projects we do for our customers at work like disaster recovery with vSphere Replication, Hyper-V Replica, Zerto, HA clusters with XCP-NG, VMware vSAN, Starwinds vSAN and so on. Also, Zabbix, TrueNAS, CheckMK.
I would honestly just go with two separate mirrors unless you need performance (assuming all drives are CMR and at the same RPM). With MDRAID or ZFS.
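A minimal ZFS sketch of the two-separate-mirrors idea, assuming four drives (the device names sda–sdd are hypothetical, substitute your own):

```shell
# Two independent pools, each a two-way mirror.
# A striped-mirror layout (both vdevs in one pool) would give more
# performance but ties all four drives to a single pool.
zpool create pool1 mirror /dev/sda /dev/sdb
zpool create pool2 mirror /dev/sdc /dev/sdd
zpool status
```

This needs root and real (empty) disks, so treat it as a sketch rather than something to paste blindly.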
There shouldn’t be any issue with running Kubernetes in a Linux VM on Virtualbox. At least as far as I can tell. You can just try it.
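If you want a quick way to try it, one common approach (an assumption on my part, not the only option) is minikube inside the Linux VM:

```shell
# Single-node Kubernetes inside the VM; the docker driver avoids
# nested virtualization. kubeadm is the heavier-weight alternative.
minikube start --driver=docker --memory=4096 --cpus=2
kubectl get nodes
```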
NAS drives come with a longer warranty and are somewhat optimized for 24/7 operation. As for an SSD, if you need uptime, then RAID; an SSD can fail just like an HDD. Also, keep in mind that RAID is not a backup. And with backups, ideally, follow the 3-2-1 rule: https://www.backblaze.com/blog/the-3-2-1-backup-strategy/
For a starting homelab, I would look into an Optiplex. It would be more powerful than a laptop (most likely). As mentioned, an i5 or higher.
What’s gonna be the workload? I mean, there is caching of course, but you could put the most performance-demanding VMs on NVMe drives in a ZFS mirror, some slower VMs on 2x4TB drives in a mirror, and the rest (file/media server) on 8x8TB drives in RAIDZ2.
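Roughly, that tiered layout would look like this in ZFS (all device names here are placeholders for your actual disks):

```shell
# Tier 1: performance-critical VMs on an NVMe mirror.
zpool create fast mirror /dev/nvme0n1 /dev/nvme1n1

# Tier 2: slower VMs on the 2x4TB mirror.
zpool create vms mirror /dev/sda /dev/sdb

# Tier 3: bulk file/media storage, 8x8TB in RAIDZ2
# (survives any two drive failures).
zpool create bulk raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf \
                         /dev/sdg /dev/sdh /dev/sdi /dev/sdj
```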
Sorry if I missed that but what’s gonna be the OS? I mean, on Linux, you can just use Linux Software RAID which is old but gold and will have better performance than ZFS. Otherwise, there are tri-mode NVMe/SAS/SATA controllers.
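For reference, a typical mdadm setup is just a couple of commands (a sketch, assuming four drives sda–sdd and an ext4 filesystem on top):

```shell
# Create a RAID10 array across four drives.
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[a-d]

# Put a filesystem on the array and persist the config so it
# assembles on boot.
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```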
Honestly, I wouldn’t go with Storage Spaces. Just unreliable. If you’re willing to take the risks, forget about the GUI if you want proper performance: https://storagespaceswarstories.com/storage-spaces-and-slow-parity-performance/ Also, no to RSTe. I would personally go with a hardware RAID controller on Windows.
Are they added just like any other drives to the RAID controller, or as hot spares, as mentioned? Also, are they the same drives as the others?
I think that falls under the homedatacenter category:) Looks very decent though!
If you’re looking to use it to run some VMs and containers - Proxmox. If you’re going to use it as a NAS - TrueNAS.
Is boot drive selected in the boot order? Probably yes, but just to check. You can also try using Starwinds free converter: https://www.starwindsoftware.com/starwind-v2v-converter to convert the disks to raw or qcow2.
I’m using an old Synology for backups. But for a NAS, you could build a DIY machine and put TrueNAS on it.
There will be no performance gain even if you pass through the NVMe drives to that TrueNAS VM, as you still need to connect the storage back to Hyper-V (iSCSI or SMB). You’ll just add more latency.
Yes, that’s possible and it will protect against a drive failure. But of course, you’ll get the speed of the slower drive. Also, keep in mind that RAID is not a backup.
If you have an old PC, that could very well be a start. Otherwise, Dell Optiplex or Intel NUCs will be more powerful than Pi at the same price. Throw Proxmox on it and you have yourself a homeserver.
TrueNAS needs direct access to drives to ensure proper corruption detection and repair which is not possible with a hardware RAID controller. But if you’re on ESXi, you could just deploy a Linux VM with Plex on a hardware RAID datastore.
Option 1: VMware vSAN (plus a witness on some other machine): https://core.vmware.com/resource/vsan-2-node-cluster-guide or Starwinds vSAN: https://www.starwindsoftware.com/vsan which has a free option, if I’m not mistaken. Run it on just the two R630s and decommission the R620. Lower power consumption, and you get proper HA.
Option 2: Use R620 as a TrueNAS system providing storage over NFS or iSCSI to ESXi cluster. That R620, however, becomes a single point of failure and consumes more power.