• qupada@kbin.social · 11 months ago

    We’ve done this exercise recently for multi-petabyte enterprise storage systems.

    Not going to name brands, but in both cases this is usable (after RAID and hot spares) capacity, in a high-availability (multi-controller / cluster) system, including vendor support and power/cooling costs, but (because we run our own datacenter) not counting a $/RU cost as a company in a colo would be paying:

    • HDD: ~60TiB/RU, ~150W/RU, ~USD$ 30-35/TB/year
    • Flash: ~250TiB/RU, ~500W/RU, ~USD$ 45-50/TB/year

    Note that the total power consumption for ~3.5PB of HDD vs ~5PB of flash is within spitting distance, but the flash system occupies a third of the total rack space doing it.
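    Those figures can be sanity-checked with some back-of-envelope arithmetic (a sketch using the rounded per-RU numbers above, and assuming binary petabytes):

```python
import math

TIB_PER_PIB = 1024

def footprint(capacity_pib, tib_per_ru, watts_per_ru):
    """Rack units and watts needed for a given usable capacity."""
    rus = math.ceil(capacity_pib * TIB_PER_PIB / tib_per_ru)
    return rus, rus * watts_per_ru

hdd_ru, hdd_w = footprint(3.5, 60, 150)       # ~3.5 PiB of HDD
flash_ru, flash_w = footprint(5.0, 250, 500)  # ~5 PiB of flash

print(f"HDD:   {hdd_ru} RU, {hdd_w / 1000:.1f} kW")      # HDD:   60 RU, 9.0 kW
print(f"Flash: {flash_ru} RU, {flash_w / 1000:.1f} kW")  # Flash: 21 RU, 10.5 kW
```

    Flash at ~21 RU vs HDD at ~60 RU is almost exactly the one-third rack footprint, while total power differs by well under two kilowatts.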

    As this is comparing to QLC flash, the overall system performance (measured in Gbps/TB) is also quite similar, although - despite the QLC - the flash does still have a latency advantage (more so on reads than on writes).

    So yeah, no. At <1.5× the per-TB cost for a usable system - the cost of one HDD vs one SSD is quite immaterial here - and at >4× the TB-per-RU density, you’d have to have a really good reason to keep buying HDDs. If lowest-possible-price is that reason, then sure.
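    For the ratios quoted, taking the midpoints of the cost ranges above (a sketch, not exact pricing):

```python
hdd_cost, flash_cost = (30 + 35) / 2, (45 + 50) / 2  # USD/TB/year midpoints
hdd_density, flash_density = 60, 250                 # TiB/RU

print(f"cost ratio:    {flash_cost / hdd_cost:.2f}x")        # 1.46x
print(f"density ratio: {flash_density / hdd_density:.2f}x")  # 4.17x
```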

    Reliability is probably higher too; with the >300 HDDs needed to build that system, you're going to expect a few failures.

    • tomatolung@lemmy.world · 11 months ago

      Factoring in current-year initial cost and MTBF, did you figure out an ROI on HDD vs flash, including power and space?

      • qupada@kbin.social · 11 months ago

        Not in so much detail, but it’s also really hard to define unless you’ve one specific metric you’re trying to hit.

        Aside from the included power/cooling costs, we’re not (overly) constrained by space in our own datacentre so there’s no strict requirement for minimising the physical space other than for our own gratification. With HDD capacities steadily rising, as older systems are retired the total possible storage space increases accordingly…

        The performance of the disk system when adequately provisioned with RAM and SSD cache is honestly pretty good too. Assuming the cache tiers are adequate to hold the working set across the entire storage fleet (you could never have just one multi-petabyte system), the abysmal random-I/O performance of HDDs really doesn't come into it: filesystems like ZFS coalesce random writes into periodic sequential writes, and sequential HDD performance is… adequate.
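        The coalescing idea can be illustrated with a toy model (a simplified sketch of the transaction-group concept only, not ZFS internals):

```python
# Random block writes land in an in-memory buffer and are flushed to disk
# as one sorted, sequential batch per "transaction group".
class CoalescingWriter:
    def __init__(self):
        self.pending = {}  # block number -> data; rewrites are absorbed here
        self.flushed = []  # batches actually sent to disk

    def write(self, block, data):
        self.pending[block] = data

    def flush(self):
        # One sequential pass over the dirty blocks, in ascending order.
        batch = sorted(self.pending.items())
        self.flushed.append(batch)
        self.pending.clear()
        return batch

w = CoalescingWriter()
for block in (907, 13, 454, 13, 88):  # random-order writes, one rewrite
    w.write(block, f"data-{block}")
print(w.flush())  # blocks come back sorted: 13, 88, 454, 907
```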

        Also not mentioned: support costs, which typically start in the range of 10-15% of the hardware price per year, do eventually curve upward. For one brand we use, the per-terabyte cost bottoms out at 7 years of ownership, then starts to increase again as yearly support costs for older hardware rise. But you always have the option to pay the inflated price and keep it, if you're not ready to replace.
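        That cost curve is easy to model (all numbers here - hardware price, support rate, escalation - are invented for illustration, not any vendor's actual figures):

```python
def cost_per_tb_year(years, hw=40.0, support_rate=0.12,
                     escalation=1.4, escalate_from=5):
    """Average $/TB/year over `years` of ownership: up-front hardware
    cost plus yearly support that escalates once the hardware is older
    than `escalate_from` years. All inputs are hypothetical."""
    support, rate = 0.0, support_rate
    for y in range(1, years + 1):
        if y > escalate_from:
            rate *= escalation
        support += hw * rate
    return (hw + support) / years

costs = {y: cost_per_tb_year(y) for y in range(1, 11)}
cheapest = min(costs, key=costs.get)
print(cheapest)  # with these made-up inputs, the minimum lands at year 7
```

        Amortising the purchase price pulls the yearly cost down at first; the escalating support fees eventually pull it back up, giving a sweet spot in the middle.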

        And again with the QLC, you’re paying for density more than you are for performance. On every fair metric you can imagine aside from the TB/RU density - latency, throughput/capacity, capacity/watt, capacity/dollar - there are a few tens of percent in it at most.

    • Empyreus@lemmy.world · 11 months ago

      Most supercomputer systems have been doing away with HDDs for speed and energy efficiency, leaving SSDs and tape as the two remaining forms of storage.

      • qupada@kbin.social · 11 months ago

        Being in an HPC-adjacent field, can confirm.

        Looking forward to LTO10, which ought to be not far away.

        The majority of what we’ve got our eye on for FY '24 are SSD systems, and I expect in '25 it’ll be everything.