• 0 Posts
  • 31 Comments
Joined 9 months ago
Cake day: October 11th, 2023

  • That’s on the companies to figure out, tbh. “You can’t tell us we aren’t allowed to build biological weapons, that’s too hard” isn’t quite what you’re saying, but it’s the hyperbolic version of it. The industry needs to figure out how to control the monster they’ve happily sent staggering towards the village, and really they’re the only people with the knowledge to stop it. If it’s not possible, maybe we should restrict this tech until it is possible. LLMs aren’t going to end the world, probably, but a protein-sequencing AI that hallucinates while replicating a flu virus could be real bad for us as a species, to say nothing of the pearl-clutching scenario of bad actors getting ahold of it.

  • While we haven’t confirmed this experimentally (ominous voice: yet), computationally there’s no reason even a simple synthetic brain couldn’t experience emotions. Chemical neurotransmitters are just an added layer of structural complexity, so Church–Turing should still hold. Human brains are only powerful because they have an absurdly high parallel network throughput (computational bus might be a better term); the actual neuron part is dead simple. Network computation is fascinating, but much like linear algebra the underlying mechanisms are so simple they’re dead boring - but cram 86 billion or so of them into a salty water balloon and it can produce some really pompous lemmy comments.

    Emotions are holographic anyway, so the question is kinda meaningless. It’s like asking whether an artificial brain perceives the color green as the same color we ‘see’ as green. It sounds deep until you realize it’s all fake, man. It’s all fake.
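For what it’s worth, the “dead simple” unit really is just a weighted sum squashed by a nonlinearity, and the interesting behavior only shows up once you wire units together. A minimal sketch in plain Python (all names here are mine, illustrative of the textbook sigmoid unit, not any particular library):

```python
# One artificial neuron: weighted sum of inputs plus a bias, squashed by a
# sigmoid. This is the whole mechanism the comment calls "dead simple".
import math

def neuron(inputs, weights, bias):
    """Fire strength of one unit: sigmoid(w . x + b), in (0, 1)."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# A tiny two-layer network. Each unit alone is trivial, but the wiring
# (hand-picked weights here) makes the whole thing compute XOR - a function
# no single unit of this kind can compute on its own.
def tiny_network(x):
    hidden = [
        neuron(x, [20.0, 20.0], -10.0),   # behaves roughly like OR
        neuron(x, [-20.0, -20.0], 30.0),  # behaves roughly like NAND
    ]
    return neuron(hidden, [20.0, 20.0], -30.0)  # roughly AND -> XOR overall

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((a, b), round(tiny_network([a, b])))  # prints the XOR truth table
```

The point being: all the capability lives in the connections, not in the unit, which is exactly the "absurdly high parallel network throughput" argument above.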

  • I am, in fact, fairly well versed in the topic. You’re 30+ years away from being able to fit hardware powerful enough to run an ML model into a missile, though I can’t see a single reason you’d ever want to. Look into the declassified, 40+ year old design paradigms for missiles and other self-guided munitions and it’ll start to give you an idea of why the idea of “AI” guidance is so laughably stupid. There are so very many reasons we use FPGAs, none of which are compatible with AI.


  • Honestly, I was just objecting to the use of “AI”. We’ve had both fire-and-forget and loitering munitions for decades now, neither of which uses ML. Will it happen? Sure. For now, ML/AI is too unreliable to be trusted in a deployed direct-attack platform, and we don’t have computing hardware powerful enough to run ML models that we can jam in a missile.

    (Though yeah, we run tons of models against drone data feeds; none of those run onboard…)