• TheGrandNagus@lemmy.world · 2 hours ago

    I am so tired of people, especially people who pretend to be computer experts online, completely failing to understand what Moore’s Law is.

    Moore’s Law != “Technology improves over time”

    It’s an observation that semiconductor transistor density roughly doubles every ~2 years. That’s it. It doesn’t apply to anything else.
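
    As a quick back-of-the-envelope illustration of what that pace implies (a sketch, not part of the original observation; the helper name is made up):

```python
# Moore's Law as stated: transistor density roughly doubles every ~2 years.
# Hypothetical helper projecting relative density growth at that pace.
def density_after(years: float, doubling_period: float = 2.0) -> float:
    """Relative transistor density after `years`, normalized to 1.0 today."""
    return 2 ** (years / doubling_period)

print(density_after(10))  # 5 doublings over a decade -> 32.0x the density
```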

    And also for the record, Moore’s Law has been dead for a long time now. Getting large transistor density improvements is hard.

  • lemmyng@lemmy.ca · 3 hours ago

    Moore’s law is about circuit density, not about storage, so the premise is invalidated in the first place.

    There is research being done into 5D storage crystals, where a disc can theoretically hold up to 360TB of data, but don’t hold your breath about them being available soon.

  • Justin@lemmy.jlh.name · 2 hours ago

    Hard drive density has stagnated. There haven’t been any major technology breakthroughs since 750GB PMR drives came out in 2006. Most of the capacity improvements since then have come from minor materials improvements and from stacking increasing numbers of platters per drive, which has reached its limit. The largest drives we have, at 24TB, use 10 platters, whereas drives in the 2000s only had 1–4 platters.

    Meanwhile, semiconductors have been releasing new manufacturing processes every few years and haven’t stopped.

    Moore’s Law somewhat held for hard drives up until 2010, but since then it has only been growing at a quarter of the rate.

    Right now there are only 24TB HDDs, with 28TB enterprise options available with SMR. The big breakthrough maybe coming next year is HAMR, which would allow for 30TB drives. Meanwhile, 60TB 2.5"/E3.S SSDs are now pretty common in the enterprise space, with some niche 100TB SSDs also available in that form factor.

    I think if HAMR doesn’t catch on fast enough, SSDs will start to outcompete HDDs on price per terabyte. We will likely see 16TB M.2 SSDs very soon. Street prices for M.2 drives are currently $45/TB compared to $14/TB for HDDs. That’s only about a 3:1 advantage, or less than 4 years in Moore’s Law terms.
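
    The "less than 4 years" figure checks out as simple arithmetic (a Python sketch; the $/TB figures are the street prices quoted above):

```python
import math

hdd_price = 14.0  # $/TB street price for HDDs (quoted above)
ssd_price = 45.0  # $/TB street price for M.2 SSDs (quoted above)

ratio = ssd_price / hdd_price  # HDDs' price advantage, about 3.2:1
doublings = math.log2(ratio)   # halvings SSDs need to reach price parity
years = 2.0 * doublings        # Moore's-Law pace: one halving per ~2 years
print(f"{ratio:.1f}:1 gap ~= {years:.1f} years at a Moore's-Law pace")
```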

    Many enterprise customers have already switched over to SSDs after considering speed, density, and power, so if HDDs don’t keep up on price, there won’t be any reason to choose them over SSDs.

    sources:
    https://youtu.be/3l2lCsWr39A
    https://www.tomshardware.com/pc-components/hdds/seagates-mozaic-3-hamr-platform-targets-30tb-hdds-and-beyond

    • hark@lemmy.world · 1 hour ago

      I’ve only looked at the consumer space, and all I’ve noticed is that SSD prices were finally going down after stagnating for years. But then the manufacturers decided prices were too low and intentionally slowed down production to raise them, so prices are actually higher than they were a year ago.

    • Buffalox@lemmy.world · 2 hours ago

      I’m more shocked how little I need extra space!
      I’m rocking an ancient 1TB for backups. And my main is a measly 512GB SSD.
      But I don’t store movies anymore, because we always find what we want to see online, and I don’t store games I don’t actively use, because they are in my GOG or Steam libraries.
      With 1 gigabit per second internet, it only takes a few minutes to download anyway.

      Come to think of it, my phone has almost as much space for use, with the 512GB internal storage. 😋
      Maybe I’m a fringe case, IDK. But it’s been a long time since storage was a problem for me.

      • adavis@lemmy.world · 3 hours ago

        While not hard drives, at $dayjob we bought a new server with 16 × 64TB NVMe drives. We don’t even need the speed of NVMe for this machine’s role. It was the density that was most appealing.

        It feels crazy having a petabyte of storage (albeit with some lost to RAID redundancy). Is this what it was like working in tech up until the mid-00s, with significant jumps just turning up?

        • InverseParallax@lemmy.world · 2 hours ago

          This is exactly what it was like, except you didn’t need it as much.

          Storage used to cover how much a person needed and maybe 2–8x more, then datasets shot upwards with audio/MP3, then video, then again with AI.

      • 9point6@lemmy.world · 3 hours ago

        I guess you’re expected to set those up in a RAID 5 or 6 (or similar) setup to have redundancy in case of failure.

        Rebuilding after a failure would be a few days of squeaky bum time though.

        • InverseParallax@lemmy.world · 45 minutes ago

          With RAID 6, rebuilds are 4.2 roentgens: not great, but not horrible. Keep old backups, but the data isn’t irreplaceable.

          RAID 5 is suicide if you care about your data.

        • Skydancer@pawb.social · 1 hour ago

          Absolutely not. At those densities, the write speed isn’t high enough to trust to RAID 5 or 6, particularly on a new system with drives from the same manufacturing batch (which may fail around the same time). You’d be looking at a RAID 10 or even a variant with more than two drives per mirror. Regardless of RAID level, at least a couple should be reserved as hot spares as well.

          EDIT: RAID 10 doesn’t necessarily rebuild any faster than RAID 5/6, but the write speed is relevant because it determines the total time to rebuild. That determines the likelihood that another drive in the array fails (more likely during a rebuild due to the added drive stress). With RAID 10, it’s less likely the second failure will land in the same mirror. Regardless, it’s always worth restating that RAID is no substitute for your 3-2-1 backups.
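
          To put rough numbers on that (a naive sketch; the 64TB capacity matches the server above, but the sustained write speeds are illustrative assumptions, and real rebuilds under load are slower):

```python
def rebuild_hours(capacity_tb: float, write_mb_s: float) -> float:
    """Naive lower bound on rebuild time: drive capacity / sustained write speed."""
    return capacity_tb * 1e6 / write_mb_s / 3600  # TB -> MB, then seconds -> hours

# Hypothetical 64TB drives at assumed sustained write speeds:
print(f"HDD-class (250 MB/s):   {rebuild_hours(64, 250):.0f} h")
print(f"NVMe-class (2000 MB/s): {rebuild_hours(64, 2000):.0f} h")
```

The point being that the drive's sustained write speed, not the RAID level, sets the floor on how long the array spends degraded.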