• Marxism-Fennekinism@lemmy.ml

    Remember: there is no such thing as an “evil” AI; there are, however, evil humans programming and manipulating the weights, conditions, and training data that the AI operates on and learns from.

  • redcalcium@lemmy.institute

    “Deploy the fully autonomous loitering munition drone!”

    “Sir, the drone decided to blow up a kindergarten.”

    “Not our problem. Submit a bug report to Lockheed Martin.”

  • BombOmOm@lemmy.world

    As an important note in this discussion, we already have weapons that autonomously decide to kill humans. Mines.

    • Chuckf1366@sh.itjust.works

      Imagine a mine that could move around, seek targets, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.

      • FaceDeer@kbin.social

        Imagine a mine that could recognize “that’s just a child/civilian/medic stepping on me, I’m going to save myself for an enemy soldier.” Or a mine that could recognize “ah, CenCom just announced a ceasefire, I’m going to take a little nap.” Or “the enemy soldier that just stepped on me is unarmed and frantically calling out that he’s surrendered, I’ll let this one go through. Not the barrier troops chasing him, though.”

        There’s opportunities for good here.

  • phoneymouse@lemmy.world

    Can’t figure out how to feed and house everyone, but we have almost perfected killer robots. Cool.

    • cosmicrookie@lemmy.world

      Especially one that is made to kill everybody except its own side. Let it replace the police; I’m sure the quality control would be a tad stricter then.

  • cosmicrookie@lemmy.world

    The only fair approach would be to start with the police instead of the army.

    Why test this on everybody else except your own people? On top of that, AI might even do a better job than the US police.

    • Alex@feddit.ro

      But that AI would have to be trained on existing cops, so it would just shoot every black person it sees

      • cosmicrookie@lemmy.world

        My point being that there would be more motivation to filter Derek Chauvin types out of the AI’s training data than there is to filter out trigger-happy soldiers.

  • cosmicrookie@lemmy.world

    It’s so much easier to say that the AI decided to bomb that kindergarten based on advanced intel than if it were a human choice. You can’t punish an AI for doing something wrong, and it doesn’t require a raise for doing something right either.

    • reksas@lemmings.world

      That is like saying you can’t punish a gun for killing people.

      edit: meaning that it’s redundant to talk about not being able to punish AI, since it can’t feel or care anyway. No matter how long a pole you use to hit people with, the responsibility for your actions will still reach you.

      • cosmicrookie@lemmy.world

        Sorry, but this is not a valid comparison. What we’re talking about here is a gun with AI built in that decides whether or not it should pull the trigger. With a regular gun, a human always pulls the trigger. Now imagine an AI gun that you point at someone, and the AI decides whether to fire. Who is accountable for the death in that case?

    • zalgotext@sh.itjust.works

      You can’t punish AI for doing something wrong.

      Maybe I’m being pedantic, but technically you do punish AIs when they do something “wrong” during training, just like you reward them for doing something right.
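
      To make the “reward and punish” framing concrete: a toy sketch of what punishment means mechanically during training. Everything here is made up for illustration (the function name, the single weight, the numbers); real systems apply gradient-based losses over millions of parameters, but the principle is the same.

```python
# Toy sketch: "punishing" a model is just applying a negative reward
# to its update, nudging the weights away from the mistaken behaviour.
def update_weight(weight: float, action_was_correct: bool, lr: float = 0.1) -> float:
    # The "punishment" is simply a negative reward signal.
    reward = 1.0 if action_was_correct else -1.0
    return weight + lr * reward

w = 0.5
w = update_weight(w, True)   # rewarded: weight nudged up
w = update_weight(w, False)  # "punished": weight nudged back down
```

      After training is over, though, there is no more reward signal, which is the point the reply below makes.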

      • cosmicrookie@lemmy.world

        But that is during training. My point was that you can’t punish an AI for making a mistake when it’s used in combat situations, which is very convenient for anyone who intentionally wants that mistake to happen.

  • Immersive_Matthew@sh.itjust.works

    We are all worried about AI, but it is humans I worry about: how we will use AI, not the AI itself. I am sure that when electricity was invented, people feared it too, but it was how humans used it that was, and always will be, the risk.