Your reasoning seems clear enough to me. Correct me if I'm wrong, but you're saying Marcus holds that LLMs don't really understand language and only present an illusion of that understanding, and that the illusion will noticeably break at a certain scale. To be honest, when context windows get filled past a certain point, they do become unintelligible and stupid.
In spite of this, I think LLMs display intelligence, and for me that is more useful than whether they truly understand language. I haven't read anything from Chomsky, tbh.
The utility of LLMs comes from their intelligence and the price point at which it is achieved; ideally the discussion would focus on that. The deeper discussion of AGI should not worry policymakers or the general public. But unfortunately, the industry seems intent on diving into philosophical arguments about how to achieve AGI, because that is the logic it has chosen to convince people to hand over more and more capital. And that is what makes the critique from Marcus and his friends relevant.
One can't really dismiss people like Marcus as being academic and pedantic about LLM capabilities (are they real, are they not, etc.) when the money is relentlessly chasing those unachieved capabilities.
So even though you're saying we aren't talking about AGI and that this isn't the topic, everything circles back to AGI and the amount of money being poured into chasing it.
I would appreciate it, though, if you and the GP did not personally insult me when you have a question. You may feel that you know Marcus to be into one particular thing, but some of us have been familiar with his work since long before he pivoted to AI.
I'm sorry, I didn't mean to insult you. To explain the reason: you use some particular wordings that just seem strange to me, such as first saying that Marcus's position is that "LLMs are impossible", which is either false or an incredibly imprecise shorthand for "AGI using LLMs is impossible", and then claiming that phrasing was beautiful.
I didn't mean to attack you personally, and I'm really sorry if it sounded that way. I appreciate the generally positive atmosphere on HN, and I believe it is more important than the actual argument, whatever it may be.
The first problem is that your phrasing "that LLMs are not possible or at least that they're some kind of illusion" collapses the claim being made, to the point where it looks as if you're saying Marcus believes people are merely deluded that something called an "LLM" exists in the world. Even allowing for some inference as to what you actually meant, it remains ambiguous whether you are talking about language acquisition (which you are in your second paragraph) or the genuine understanding and reasoning / robust world-model induction necessary for AGI, which is the focus of Marcus's recent discussion of LLMs and the reason we're even talking about Marcus here in the first place.
You seem more familiar with Marcus's thinking on language acquisition than I am, so I can only assume that his thinking on language acquisition and LLMs is somewhat related to his thinking on understanding and reasoning / world-model induction and LLMs. But it doesn't appear to me, based on what I've read of Marcus, that his claims about the latter really depend on Chomsky. Which brings me to the second problem with your post: the uncharitable claim that "he appears to me and others to be having a sort of internal crisis that's playing out publicly", as if no one could sincerely believe that LLMs are incapable of genuine understanding / robust world-model induction without such a crisis.
And my phrasing was wonderful and perfect.