Omg, none of this is true in the least.
Altman is admitting that LLMs will not achieve "AGI," and straining not to say what I've been saying for a couple of years: that transformer-based LLMs are at the limits of their capabilities.
There are no new model architectures on the horizon. None. LeCun just got $1B to continue work on his JEPA models, which (1) went nowhere and (2) got him fired from Meta.
The amount of disinformation out there is startling.