• Xx255q@alien.top · 1 year ago

    I suppose the only way to get them is to wait until the drives are sold as refurbished.

  • spryfigure@alien.top · 1 year ago

    “wait until next year, when Seagate launches its first 30TB HAMR HDD.”

    So in 2024, then. Let’s wait for the price.

    • WhittledWhale@alien.top · 1 year ago

      That is what next year means in this case, yes.

      Congrats on knowing it’s currently 2023 I guess.

  • Constellation16@alien.top · 1 year ago

    Shame this sub doesn’t have an original source policy, because this regurgitated article absolutely sucks.

  • xupetas@alien.top · 11 months ago

    Will never buy Seagate again. .|. them. Worst support, RMA, and customer care ever.

  • IntensiveVocoder@alien.top · 1 year ago

    From a quick skim, the author seems not to have noted that these are host-managed, so they’re not particularly useful individually.

    Instead of holding all of the management logic on the drive itself, management is done at the appliance level to balance load across disks, so these wouldn’t work in standard NAS devices unless Seagate provides a binary or API for Synology or QNAP to implement in their firmware.
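
    For the curious: if these behave like today’s host-managed (zoned) drives, which is my assumption rather than anything the article confirms, the Linux kernel already exposes the zone model through sysfs. A rough sketch of how you’d check what a drive reports:

    ```python
    # Minimal sketch, assuming a Linux host and that these drives show up as
    # zoned block devices the way existing host-managed SMR drives do.
    from pathlib import Path

    def zone_model(device: str) -> str:
        """Return the kernel-reported zone model for a device name like 'sda'."""
        path = Path(f"/sys/block/{device}/queue/zoned")
        return path.read_text().strip() if path.exists() else "unknown"

    for dev in sorted(p.name for p in Path("/sys/block").iterdir()):
        # "none" = conventional drive, "host-aware" = works either way,
        # "host-managed" = the host must track zones and write pointers itself
        print(f"{dev}: {zone_model(dev)}")
    ```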

  • fmillion@alien.top · 1 year ago

    I wonder how they do this. Are the drives even SAS/NVMe/some standard interface, or are they fully proprietary? What “logic” is being done on the controller/backplane vs. in the drive itself?

    If they have moved significant amounts of logic, such as bad-block management, to the backplane, it’s an interesting further example of the tech industry coming “full circle.” (E.g. we started out using terminals, then went to locally running software, and now we’re slowly moving back towards hosted software via web apps/VDI.) I see no practical reason to do this other than (theoretically) reducing manufacturing costs and (definitely) pushing vendor lock-in. Not that we haven’t seen that sort of thing before, e.g. NetApp messing with firmware on drives.

    However, if they just mean that the 29TB disks are SAS drives, that the enclosure firmware implements some sort of proprietary filesystem, and that the disks are only officially supported in their enclosure, while each disk could still operate on its own as just a big 29TB drive, then we could in theory get these drives used and stick them in any NAS running ZFS or similar. (I’m reminded of how the small 16/32GB Optanes were originally pitched as “accelerators,” and for a short time people weren’t sure if you could just use them as tiny NVMe SSDs; it turned out you could. I have a Linux box that uses a 16GB Optane as a boot/log/cache drive and it works beautifully. Similarly, those 800GB “Oracle accelerators” are just SSDs; one of them is the VM store in my VM box.)

  • CryGeneral9999@alien.top · 1 year ago

    This is all nice, but when it takes 3 weeks to check the volume or add a drive, that’s gonna suck. With spinning media there’s a benefit to more, smaller drives, since you can read/write from many at once. I’m not saying I wouldn’t want these, just that if I didn’t have petabytes of data I’d stick with more drives of smaller size. Unless, of course, their speed increases. Spinning media isn’t getting faster as quickly as it’s getting bigger, so when you’re scrubbing I’d expect 5x20TB to finish a lot quicker than 1x100TB. As I see it, this is niche for me. A rough sketch of the math is below.
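
    ```python
    # Back-of-the-envelope scrub-time estimate. The ~280 MB/s sequential
    # throughput is an assumed figure, not from the article, and this
    # assumes the best case: every drive read end to end in parallel.
    SEQ_MBPS = 280  # assumed average sequential throughput per drive

    def scrub_hours(drive_tb: float, mbps: float = SEQ_MBPS) -> float:
        """Hours to read one drive end to end at the given throughput."""
        return drive_tb * 1e6 / mbps / 3600

    # Drives scrub in parallel, so pool scrub time ~= time for one drive.
    print(f"5 x 20 TB:  {scrub_hours(20):.0f} h")   # ~20 h
    print(f"1 x 100 TB: {scrub_hours(100):.0f} h")  # ~99 h
    ```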

  • uluqat@alien.top · 1 year ago

    “The 4RU chassis can be fitted with 106 3.5-inch hard drives…”

    106 x 29 = 3074 terabytes

    2.5 petabytes = 2500 terabytes

    3074 - 2500 = 574 terabytes

    574 / 29 = 19.79, so about 19 or 20 of the 106 drives are used for overhead (parity? hot spares?).
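
    The same arithmetic, scripted (the 2.5 PB usable figure is the article’s; what the overhead drives are actually used for is my guess):

    ```python
    # Reproduces the arithmetic above: raw capacity minus the quoted usable
    # capacity, expressed as a count of whole drives.
    drives, tb_per_drive = 106, 29
    raw_tb = drives * tb_per_drive      # 3074 TB raw
    usable_tb = 2500                    # "2.5 petabytes" per the article
    overhead_tb = raw_tb - usable_tb    # 574 TB
    overhead_drives = overhead_tb / tb_per_drive
    print(f"{overhead_drives:.2f} drives of overhead")  # ~19.79 -> parity/spares?
    ```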

    I have a feeling that this might be slightly out of my budget.