Remember: there is no such thing as an “evil” AI, but there is such a thing as evil humans programming and manipulating the weights, conditions, and training data that the AI operates on and learns from.
Now that’s a title I wish I never read.
“Deploy the fully autonomous loitering munition drone!”
“Sir, the drone decided to blow up a kindergarten.”
“Not our problem. Submit a bug report to Lockheed Martin.”
As an important note in this discussion, we already have weapons that autonomously decide to kill humans. Mines.
Imagine a mine that could move around, seek targets, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.
Imagine a mine that could recognize “that’s just a child/civilian/medic stepping on me, I’m going to save myself for an enemy soldier.” Or a mine that could recognize “ah, CENTCOM just announced a ceasefire, I’m going to take a little nap.” Or “the enemy soldier that just stepped on me is unarmed and frantically calling out that he’s surrendered, I’ll let this one go through. Not the barrier troops chasing him, though.”
There are opportunities for good here.
Lmao are you 12?
Can’t figure out how to feed and house everyone, but we have almost perfected killer robots. Cool.
Especially one that is made to kill everybody except its own. Let it replace the police. I’m sure the quality control would be a tad stricter then.
The only fair approach would be to start with the police instead of the army.
Why test this on everyone except your own people? On top of that, AI might even do a better job than the US police.
But that AI would have to be trained on existing cops, so it would just shoot every black person it sees
My point being that there would be more motivation to filter Derek Chauvin-type cops out of the AI’s training data than a trigger-happy soldier.
It’s so much easier to say that the AI decided to bomb that kindergarten based on advanced intel than it would be if it were a human choice. You can’t punish AI for doing something wrong. AI doesn’t require a raise for doing something right, either.
That is like saying you can’t punish a gun for killing people.
edit: meaning that it’s redundant to talk about not being able to punish AI, since it can’t feel or care anyway. No matter how long a pole you use to hit people with, responsibility for your actions will still reach you.
Sorry, but this is not a valid comparison. What we’re talking about here is a gun with AI built in that decides whether or not to pull the trigger. With a regular gun you always have a human pulling the trigger. Now imagine an AI gun that you point at someone, and the AI decides whether to fire. Who do you attribute the death to in that case?
You can’t punish AI for doing something wrong.
Maybe I’m being pedantic, but technically, you do punish an AI when it does something “wrong” during training, just like you reward it for doing something right.
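For anyone curious, here’s a bare-bones sketch of what that reward/“punishment” signal looks like in a training loop. Everything in it (the toy task, the numbers, the update rule) is made up for illustration; real training setups are far more involved.

```python
import random

p_act = 0.5          # probability the toy policy chooses action 1 ("the right call")
learning_rate = 0.05

for step in range(500):
    action = 1 if random.random() < p_act else 0
    # Reward: +1 when the action is the one we wanted (hard-coded to 1 here),
    # -1 (the "punishment") when it isn't.
    reward = 1.0 if action == 1 else -1.0
    # Policy-gradient-style nudge: make rewarded actions more likely,
    # punished actions less likely.
    if action == 1:
        p_act += learning_rate * reward * (1 - p_act)
    else:
        p_act -= learning_rate * reward * p_act
    p_act = min(max(p_act, 0.01), 0.99)

print(f"after training, the policy picks the rewarded action {p_act:.0%} of the time")
```

The point is just that “punishment” here is a number nudging some weights, not a consequence the system experiences, which is kind of the whole issue.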
But that is during training. I was implying that you can’t punish an AI for making a mistake when it’s used in a combat situation, which is very convenient for anyone intentionally wanting that mistake to happen.
The sad part is that the AI might be more trustworthy than the humans in control.
We are all worried about AI, but it’s humans I worry about, and how we will use AI, not the AI itself. I am sure that when electricity was invented people feared it too, but it was how humans used it that was, and still is, the real risk.
Both, honestly. AI can reduce accountability and increase the power small groups of people have over everyone else, but it can also go haywire.
It will go haywire in some areas, for sure.