While it uses secular and scientific aesthetics rather than being overtly religious, the roadmap it presents relies on specific assumptions about the nature of consciousness, intelligence, and identity that may sound plausible but are no more rational than belief in an immortal soul that gets judged at the end of days.
Do we have any scientific evidence that consciousness, intelligence, and identity reside anywhere other than the brain? People who lose all of their limbs don’t become less intelligent. People who get artificial hearts don’t become soulless automatons. Certainly, the brain needs chemicals and hormones produced elsewhere in the body, but we already synthesize those chemicals routinely for people whose natural production is faulty; it’s only a matter of scale.
We’re certainly far from a complete understanding of the brain, but we’re not that far; there are no great unknowns, only details left to fill in.
There’s a tendency to assume a sufficiently powerful AI can solve the problem, and that that’s the end of it.
I think that’s entirely reasonable. We’re limited by our biology; perhaps we’ll find a limit to our technology that puts an upper bound on AI growth, but we haven’t observed that horizon yet. It’s no more irrational to assume there is no such limit than to assume the limit is so low that we can’t reach super-intelligent general AI before we hit it.