I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.

This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20-minute read, so thank you very much in advance if you find the time to read it.

Feedback is very much welcome. Thank you.

  • Lmaydev@programming.dev · 10 months ago

    It’s actually a decimal vs. binary thing.

    1000 and 1024 take the same number of bytes to store, so 1024 makes more sense to a computer.

    Nothing to do with metric, as computers don’t use that. Also not really to do with units.
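
    To illustrate what “makes more sense to a computer” means in practice, here’s a rough sketch (the helper names are just illustrative): scaling by 1024 is a plain bit shift, while scaling by 1000 needs an actual division.

    ```python
    # Powers of two line up with binary arithmetic: dividing by 1024
    # is the same as shifting the value right by 10 bits.
    def whole_binary_kilos(n_bytes: int) -> int:
        """Whole 1024-byte units in n_bytes (illustrative helper)."""
        return n_bytes >> 10          # same as n_bytes // 1024

    def whole_decimal_kilos(n_bytes: int) -> int:
        """Whole 1000-byte units in n_bytes (illustrative helper)."""
        return n_bytes // 1000        # 1000 has no shift shortcut

    print(whole_binary_kilos(65536))   # 64
    print(whole_decimal_kilos(65536))  # 65
    ```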

    • fiercetemptation@lemmy.world · 10 months ago

      It has everything to do with the metric system. And you got it exactly the wrong way around.

      Kilo is simply an SI prefix. It means thousand. See: https://en.wikipedia.org/wiki/Kilobyte. Let me quote that here: “The kilobyte is a multiple of the unit byte for digital information. The International System of Units (SI) defines the prefix kilo as a multiplication factor of 1000; therefore, one kilobyte is 1000 bytes.”

      That specifically is where the confusion arises. Someone went and said “oh, computers count in binary, so a kilobyte is 1024.” It’s not. A kilobyte is 1000 bytes, because kilo is thousand.

      To help fix the confusion, a different prefix was created: kibi, which is specifically for powers of 2, so one kibibyte (KiB) is 1024 bytes.
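
      To make the two conventions concrete, here’s a minimal sketch (the helper name and unit lists are just illustrative) that formats the same byte count with SI (kilo = 1000) and binary (kibi = 1024) prefixes:

      ```python
      # SI prefixes scale by 1000; IEC binary prefixes scale by 1024.
      SI_UNITS  = ["B", "kB", "MB", "GB", "TB"]
      IEC_UNITS = ["B", "KiB", "MiB", "GiB", "TiB"]

      def format_size(n_bytes: float, binary: bool = False) -> str:
          """Format a byte count using decimal (SI) or binary (IEC) prefixes."""
          base  = 1024 if binary else 1000
          units = IEC_UNITS if binary else SI_UNITS
          value = float(n_bytes)
          for unit in units[:-1]:
              if value < base:
                  return f"{value:.2f} {unit}"
              value /= base
          return f"{value:.2f} {units[-1]}"

      print(format_size(1_000_000))               # 1.00 MB
      print(format_size(1_000_000, binary=True))  # 976.56 KiB
      ```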

      The thing is: for people not using the metric system, your argument may have merit. But once you have accepted that metric is superior in literally every way (which is also why NASA etc. all use metric), this confusion just disappears.