• 0 Posts
  • 127 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • What I’m saying is there’s a chance a churchgoer or a pastor is doing it for selfless reasons, where that is never the case for sports.

    I never said otherwise.

    But the whole point here is that we’re not talking about selfishness; we’re talking about manipulation.

    Manipulation of others to do one’s bidding is not the purpose of sports. Panem et circenses, sure, but not “do my bidding, proles.”

    And I’m extremely aware of how people act and grow up in religions: I grew up in a very hierarchical religious structure and have seen the well-intentioned abuses that people earnestly trying to help inflict on others. Manipulation is manipulation, ill-intentioned or otherwise. And when you get higher up in a hierarchical structure that professes faith, you will reach a point where everyone knows it’s fake and chooses to act otherwise.




  • but it isn’t so clear cut

    It’s demonstrably several orders of magnitude less complex. That’s mathematically clear-cut.
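
    One rough way to ground “orders of magnitude”, using ballpark figures that are my own assumption rather than anything from this thread: a human brain is commonly estimated at around 10^14 synapses, while the largest publicly known LLMs sit in the 10^11–10^12 parameter range.

```python
# Back-of-the-envelope sketch only - the figures are ballpark assumptions,
# and "synapse vs. parameter" is a crude analogy, not a real equivalence.
brain_synapses = 1e14      # common order-of-magnitude estimate for a human brain
llm_parameters = 1e12      # order of magnitude for the largest public LLMs

ratio = brain_synapses / llm_parameters
print(f"brain synapses / LLM parameters ~ {ratio:.0e}")  # ~1e+02

# Even this crude count leaves a gap of a couple of orders of magnitude,
# before accounting for a biological synapse doing far more than a single
# multiply-accumulate.
```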

    Where is the cutoff on complexity required?

    Philosophical question without an answer - We do know that it’s nowhere near the complexity of the brain.

    both our brains and most complex AI are pretty much black boxes.

    There are many things we cannot directly interrogate which we can still describe.

    It’s impossible to say this system we know vanishingly little about is/isn’t fundamentally the same as this system we know vanishingly little about, just on a different scale.

    It’s entirely possible to say that because we know the fundamental structures of each, even if we haven’t mapped the entirety of either’s complexity. We know they’re fundamentally different - their basic behaviors are fundamentally different. That’s what fundamentals are.

    The first AGI will likely still have most people saying the same things about it, “it isn’t complex enough to approach a human brain.”

    Speculation, but entirely possible. We’re nowhere near that, though. There’s nothing even approaching intelligence in LLMs. We’ve never seen emergent behavior or evidence of an id or ego. There are no ongoing thought processes, no rationality - because that’s not what an LLM is. An LLM is a static model of raw text inputs and the statistical associations among them. Any “knowledge” encoded in an LLM exists entirely in that encoding - it cannot and will not ever generate anything that wasn’t programmed into it.
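
    To make “static model of raw text inputs and the statistical associations among them” concrete, here’s a toy sketch - my own illustration, nothing like a real transformer. A real LLM encodes vastly richer associations in neural-network weights, but the shape of the claim is the same: once trained, it’s a frozen mapping from context to a next-token distribution, with no ongoing state between calls.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": nothing but a frozen table of next-word statistics.
corpus = "the cat sat on the mat and the cat slept on the rug".split()

# "Training" = counting which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word purely from the frozen counts."""
    options = follows.get(prev)
    if not options:
        return random.choice(corpus)  # no recorded association for this context
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

# "Generation" is repeated lookup-and-sample over static statistics; nothing
# outside the recorded associations can ever come out.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```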

    It’s possible that an LLM might represent a single, tiny, module of AGI in the future. But that module will be no more the AGI itself than you are your cerebellum.

    But it doesn’t need to equal a brain to still be intelligent.

    First thing I think we agree on.