
Hold on a sec, think about what you're saying. The ability of a larger system to imitate a smaller system in some situations means they're mostly the same? Since when?

If I told you that an LLM was basically a Markov chain because there are situations where the Markov chain and the LLM produce similar output, and where you could reasonably argue the LLM might be working the same way the Markov chain does -- you would (correctly) say I was oversimplifying what's going on in GPT. It would be a terrible comparison to make. Similarly, if you say that human reasoning is the result of an LLM, and what you're actually saying is that in a subset of situations humans produce output that could theoretically be generated the same way an LLM generates it, I'm gonna say you're oversimplifying how human brains work.
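To make the comparison concrete, here's a minimal sketch of the kind of Markov chain I mean -- a toy bigram sampler in Python (the corpus and names are made up for illustration):

    # Toy bigram Markov chain: the next word depends only on the current word.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count word -> next-word transitions.
    transitions = defaultdict(list)
    for cur, nxt in zip(corpus, corpus[1:]):
        transitions[cur].append(nxt)

    def generate(start, length=8):
        word, out = start, [start]
        for _ in range(length):
            if word not in transitions:
                break
            # random.choice over the raw list samples proportional to counts
            word = random.choice(transitions[word])
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat ate the mat ..."

On short stretches this can produce output you couldn't distinguish from an LLM's, which is exactly why the comparison is tempting -- and exactly why it's shallow: the mechanism here is a frequency lookup, nothing more.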

Very clearly, there is more going on in a human brain than language, as evidenced by the fact that our brains are larger than our language centers and that we can literally measure which parts of the brain are most active when we're tackling different tasks.

> Sure there's other stuff going on, but a lot of it is language and a lot of that subset seems to be quite closely approximated by these larger LLMs, warts and all

If you define your tests and scenarios to specifically encompass only the situations in which similar outputs are produced, then sure. But that's not a very strong argument for you to make. It's like saying chickens are basically the same as fish since both of them lay eggs. The phrase "sure there's other stuff" is doing a lot of work there, because "there's other stuff" is exactly what we're all saying when we say that LLMs aren't just primitive humans -- when we reason, there's other stuff going on, and the entire process is not reducible to language generation alone.

----

Of course, absent from this conversation is the fact that the way our language center develops is different from how LLMs are trained, and even in the parallels you're drawing, LLMs demonstrate different strengths and weaknesses from humans: humans demonstrate reasoning capabilities before proficiency with language/text, while LLMs demonstrate proficiency with text before they demonstrate reasoning capabilities. Even in situations where both LLMs and humans predict text, it's likely that we're using different strategies to do so, given our differing capabilities.

Look, I'm not even making a claim about whether GPT can reason. Defining intelligence through a purely human lens would be unimaginative and needlessly narrow. Whether GPT actually reasons or just appears to is a different conversation, and I haven't touched it. All I'm saying is that it's very obvious that whatever GPT is doing is different from how human brains work.



I can agree with all of that.

I was pushing back against something that might not have been there initially: an unwillingness to accommodate the idea that a fairly significant part of our intelligence is actually stored in our language, and that it can seem, subjectively (perhaps excluding those with no internal monologue), that we recall that knowledge in a manner close enough to the one we have managed to replicate in the larger LLMs.

This part of our function could well be a lazy optimization that in reality sits on top of our reasoning capabilities to save energy, but the point of the trite task with the bird in the hand was just to demonstrate that language seems to play a fairly significant role. I'm entirely out of my depth with respect to the actual form and function of the brain as understood by current research.

I'm willing to admit, though, that when I replied to you I was probably really replying to a lot of other commenters, many of whom had stronger objections to the idea that LLMs are now knocking on the door of intelligence, at least to the point where we feel the need to redefine it.



