
Reiterating that only tokens are being generated underlines that this is ALL that is happening: no reasoning, no understanding.

It's not a dig at your knowledge.

And I can say this with some authority, because I have been trying to build LLM-enabled tools that work only if LLMs can reason and plan. They don't - they simply generate text.

You can test this yourself by building your own agent, or your own chain of LLM calls.
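To make that concrete, here is a minimal sketch of the kind of chained-call test I mean, assuming the OpenAI Python client (openai>=1.0). The task, the prompts, and the plan-then-execute split are illustrative assumptions, not a real benchmark:

    # Chain two LLM calls: ask for a plan, then ask it to execute one step.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    task = "Pick the cheapest of three quoted flights, then draft a confirmation email."

    # Call 1: ask for a plan as numbered steps.
    plan = ask(f"Break this task into numbered steps:\n{task}")

    # Call 2: feed the plan back and ask for ONLY step 1.
    # A system that genuinely plans would track state across the two calls;
    # the failure mode to look for is it re-answering the whole task instead.
    print(ask(f"Here is a plan:\n{plan}\nCarry out ONLY step 1."))

Run variations of this a few dozen times and watch how often the second call respects the plan rather than regenerating it. That gap, between generating text about a plan and actually following one, is what I mean.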

LLMs are analogous to actors with memorized lines. They can sound convincingly like doctors, but it is only skin deep.

To make it simpler - Karpathy said it in July, and the OpenAI CTO said it a few weeks ago - it's easy to make PoCs but very hard to build production-ready GenAI tools.

Our bags of flesh may be machines, but those machines are not simply biological LLMs.



> To make it simpler - Karpathy said it in July, and the OpenAI CTO said it a few weeks ago - it's easy to make PoCs but very hard to build production-ready GenAI tools.

If you try to measure whether something is intelligent by trying to put it into a production-ready workflow, then your measurement might be somewhat skewed... I mean, I don't judge the intelligence of toddlers by putting them into a production-ready workflow.

If a pig could converse with me at the level of ChatGPT 4, I am not sure if I could eat it.

> Our bags of flesh may be machines, however those machines are not simply biological LLMs.

I don't think we are just machines, and even if we were, I don't know whether we function the same way as LLMs. But whatever it is that LLMs do, they are clearly intelligent, so they demonstrate one way that intelligence works. Harnessing this intelligence for something other than mere chat is a challenge, of course. I am using them as well to build something, and their current limitations are obvious. But even with these limitations, they allow me to do stuff I would not have thought possible a year ago.


If it's a generalized thinking system, if it understands, then the production/non-production distinction is meaningless.

I don't see how that is an argument.

You see intelligence, so I would urge you to build something that relies on that feature.

My philosophy was that the fastest way to figure out the limits of a tool is to push it. Limits describe the tool.

The data I have is on the limits of the tool. As a result, it's clear that there is no "intelligence".


I guess we have different perspectives on this. I don't define the intelligence of something by how well I can turn it into a tool. Obviously, the hope is that with increasing intelligence this becomes easier, but this is not necessarily so, and you might have to find the right way to harness its intelligence. A simple approach might not work. But just because you failed to harness its intelligence for your purpose, doesn't mean it isn't intelligent.

As I said before, is a baby intelligent? Of course. Could you use it for any kind of "production purpose"? I hope not. What about a 3-year-old? You will have noticed that it can be difficult to get full-blown intelligent adults to do what you want them to do. This might even get more difficult with increasing intelligence.


The difference here is between code that is intelligent and a text version of Auto-Tune.

And again, I ask you to put your money where your mouth is. If you are willing to assume I wasn't able to harness its intelligence, please prove me wrong.

There is nothing I would want more than to have genuinely autonomous systems.

My point, at the start and now, is that using human terms to examine this phenomenon leads people to assume things about what is going on.

Testing and evidence are what reason is built on. Asking someone to follow the scientific method does not, I hope, constitute boorishness on my part.


Not sure how much more I can explain my point of view. I have used ChatGPT 4 for many tasks that require intelligence. Others have too. It worked, many, many times. Summarising an unknown text, for example, requires intelligence. Proving a novel version of a mathematical theorem requires intelligence. Translating natural language into a new logic I just invented requires intelligence. Plenty of testing and evidence here. It also failed many times, but often this was because of inherent ambiguity in the task, which it helped to expose. That's pretty intelligent.

The scientific method only works if you accept the evidence. Some people don't believe that we landed on the moon. Well.

You are telling me you could not use it for what you would have hoped to use it for, and you are not allowing the use of the term intelligent until an LLM can do that for you. If that is your definition of intelligence, good for you.

But I would suggest the following instead: What the scientific method has proven is that, if you feed a very simple mechanism with enough data, then intelligence emerges.



