There is always a tension between security, privacy, and convenience. With how the Internet works, there isn’t really a way, with current technology, to reliably catch content like that without violating everyone’s privacy.
Of course, there is also a lack of trust here (and rightly so, given the leaks about mass surveillance) that ‘stop child porn’ powers would only be used for that, rather than for whatever else the powers that be wish to do with them.
An inherent flaw in the transformer architecture (what all current LLMs use under the hood) is the quadratic cost of attention with respect to context length: naive attention materialises a score for every pair of tokens, so the model needs 4 times as much memory to attend over its last 1000 tokens as it needed for the last 500. When coding anything complex, the amount of code one has to consider quickly grows beyond these limits. At least, if you want it to work.
This is a fundamental flaw with transformer-based LLMs, an inherent limit on the complexity of task they can ‘understand’. It isn’t feasible to just keep throwing memory at the problem; a fundamental change in the underlying model structure is required. This is a subject of intense research, but nothing has emerged yet.
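The quadratic scaling above can be sketched with a back-of-the-envelope calculation (an illustrative example, not from the original post; the function name and the assumption of 4-byte floats are mine):

```python
def naive_attention_score_bytes(n_tokens: int, bytes_per_float: int = 4) -> int:
    """Memory for one n x n attention score matrix (single head, single layer).

    Naive self-attention scores every token against every other token,
    so the matrix has n_tokens * n_tokens entries.
    """
    return n_tokens * n_tokens * bytes_per_float

short = naive_attention_score_bytes(500)
long = naive_attention_score_bytes(1000)
print(long / short)  # doubling the context quadruples the score-matrix memory
```

In practice, implementations use tricks like KV caching and fused attention kernels to soften this, but the pairwise token-to-token comparison at the heart of attention is what makes the cost grow quadratically rather than linearly.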
Transformers themselves were old hat and well studied long before these models broke into the mainstream with DALL-E and ChatGPT.