I was wondering what your approach is to using available SATA ports for your array.
I have six motherboard SATA ports and eight SATA ports available via an HBA (LSI 9211-8i).
Do you recommend populating all available motherboard SATA ports first, then using an HBA for the rest of the array? Or is it better to prioritize putting all of the data disks on the HBA, if possible?
Do you guys recommend keeping as many disks as possible on the same controller (i.e. populating the HBA first)?
I trust MB SATA more in terms of reliability. HBAs tend to overheat too.
However, if the RAID topology allows, I’d try to spread the drives so that a complete failure of either the MB controller or the HBA wouldn’t bring the array down (e.g. RAID10 with one HBA, or RAID5/6 with two HBAs).
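Here’s a quick sketch of what I mean (Python; the drive layouts are made up for illustration, not a recommendation): simulate losing one whole controller and check whether the RAID level can absorb the drives that disappear with it.

```python
from collections import Counter

def parity_raid_survives(drive_to_ctrl, tolerated_failures):
    """For parity RAID (RAID5 tolerates 1 lost drive, RAID6 tolerates 2):
    the array survives any single-controller loss iff no controller
    hosts more drives than the RAID level can tolerate losing."""
    counts = Counter(drive_to_ctrl.values())
    return all(n <= tolerated_failures for n in counts.values())

def raid10_survives(mirror_pairs):
    """RAID10 survives losing any one controller iff every mirror pair
    keeps at least one drive on a different controller."""
    controllers = {c for pair in mirror_pairs for c in pair}
    return all(
        all(any(c != lost for c in pair) for pair in mirror_pairs)
        for lost in controllers
    )

# 6-drive RAID6 split 2/2/2 across MB + two HBAs: survives (2 <= 2).
print(parity_raid_survives(
    {"d1": "mb", "d2": "mb", "d3": "hba1",
     "d4": "hba1", "d5": "hba2", "d6": "hba2"}, 2))  # True

# RAID10 with each mirror pair split MB/HBA: survives either loss.
print(raid10_survives([("mb", "hba"), ("mb", "hba"), ("mb", "hba")]))  # True
```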
I use motherboard ports first and don’t install an HBA unless needed, because it draws a lot of power and prevents the CPU from reaching deeper idle states.
Yep; unless you need SAS support I would recommend onboard SATA first.
OP, I have the same HBA card as you. It gets toasty even just idling, and even hotter once you throw a load onto it. I measured ~10W of power draw just idling (no drives attached to the HBA). I can almost guarantee using onboard SATA will be more power efficient.
Even better is if you can physically remove the HBA card until you need it.
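For scale, here’s the back-of-the-envelope math on that idle draw (the electricity rate is just an assumed number, sub in your own):

```python
# Rough cost of leaving a ~10 W HBA idling 24/7.
IDLE_WATTS = 10
PRICE_PER_KWH = 0.30  # assumed rate; use your local price

kwh_per_year = IDLE_WATTS / 1000 * 24 * 365   # ~87.6 kWh
print(f"{kwh_per_year:.1f} kWh/yr ~= {kwh_per_year * PRICE_PER_KWH:.2f} per year")
```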
I’ve heard a few horror stories of LSI HBAs causing some serious data corruption. Most of these cases were due to insufficient airflow. When it comes to data integrity, I wonder whether LSI HBAs in IT mode have more or less ability to detect errors, or increase/decrease the risk of data corruption?
I’ve heard that overheating is more of an issue with the SAS 12Gb/s HBAs, not the older 6Gb/s ones like yours.
Depends on your configuration.
If you don’t need to pass the whole HBA through to a VM, for example, then just go with whatever is most convenient for you.
On my mobo and CPU, I use the HBA first as there’s more bandwidth that way. Mobo SATA comes last because one of my two NVMe ports runs through the chipset, and the chipset-to-CPU link has limited bandwidth, roughly 4GB/s total, so SATA and one NVMe drive compete for that.
If you have a CPU/mobo made in the last year or two, you’ll have more speed on that interconnect and it’s not as big of a worry.
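Rough sketch of the contention math (the per-device bandwidth figures here are ballpark assumptions, not measurements from my system):

```python
# How quickly chipset SATA + one chipset NVMe drive eat a ~4 GB/s link.
DMI_GBPS = 4.0        # assumed usable chipset-to-CPU bandwidth, GB/s
NVME_GBPS = 3.5       # one Gen3 x4 NVMe drive at full tilt, GB/s
SATA_GBPS = 0.25      # ~250 MB/s sequential per spinning disk

for n_sata in range(0, 7):
    demand = NVME_GBPS + n_sata * SATA_GBPS
    tag = "OK" if demand <= DMI_GBPS else "saturated"
    print(f"{n_sata} SATA drives + 1 NVMe: {demand:.2f} GB/s -> {tag}")
```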
I bought a cheap HBA because then I can pass the PCIe card through to a VM and use it for ZFS.
Granted, I could probably do the same with an onboard SATA controller, but I have more faith in a dedicated controller for my array.
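If you’re on Linux and want to check whether passthrough will be clean before committing, here’s a quick sketch that lists IOMMU groups (assumes the standard sysfs layout; a device generally needs to be isolatable in its own group to pass through cleanly):

```python
# List each IOMMU group and the PCI devices in it.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = sorted(d.name for d in (group / "devices").iterdir())
    print(f"IOMMU group {group.name}: {', '.join(devices)}")
```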