• TrickDacy@lemmy.world
    1 month ago

    Okay, so if the tool seems counterproductive for you, it’s pretty presumptuous to generalize that and assume it’s the same for everyone else. I definitely do not have that experience.

    • aesthelete@lemmy.world
      1 month ago

      It’s not about it being counterproductive. It’s about correctness. If a tool produces a million lines of pure compilable gibberish unrelated to what you’re trying to do, then from a pure lines-of-code perspective it’d be a productive tool. But software development is more complicated than writing the most lines.

      Now, I’m not saying that AI tools produce pure compilable gibberish, but they don’t reliably produce correct code either. So, they fall somewhere in the middle, and similarly to “driver assistance” technologies that half automate things but require constant supervision, it’s quite possible that the middle is the worst area for a tool to fall into.

      Everywhere around AI tools there are asterisks noting that they don’t always produce correct results. The developer using the tool is ultimately responsible for the output of their own commits, but the tool itself shares in the blame because of its unreliable nature.

    • FlorianSimon@sh.itjust.works
      18 days ago

      Have you read the article? It’s a shared experience that multiple people report, and the article even provides statistics.