> please do not mention any deterministic counting machines like semiconductors - neurons are not deterministic and thought isn’t a deterministic set of calculations
I have to mention it, because it's physics, and related to the current implementation of LLMs: random numbers are possible, and they're used to break determinism. Intel CPUs use thermal noise to generate random numbers [1]. With silicon, randomness is a free choice, not an impossibility. LLM front ends, like anything from OpenAI, use random numbers to produce non-deterministic output; each sampled word also becomes part of the input for the next word and of the context for the next response, so the output is not deterministic and can diverge broadly [2]. Both systems are somewhat bound by the "sensibility"/logic of the output, of course.
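To make that concrete, here's a minimal sketch (not OpenAI's actual code; the logits, function name, and temperature handling are all made up for illustration) of how sampling with a non-zero temperature breaks determinism at each step, with the sampled token feeding back into the next step's context:

    import numpy as np

    rng = np.random.default_rng()  # seeded from OS entropy (e.g. RDRAND / thermal noise)

    def sample_next_token(logits, temperature=1.0):
        """Pick a token index from model scores; temperature > 0 adds randomness."""
        scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    logits = [2.0, 1.5, 0.3]  # hypothetical scores for three candidate tokens
    print(sample_next_token(logits, temperature=1.0))  # varies from run to run
    print(sample_next_token(logits, temperature=0.0))  # effectively greedy: always the top token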
> as evidenced by how easily we are fooled by optical illusions
This isn't necessarily unique to humans [3]. Do you have a specific illusion in mind? Many are related to active "baselining" and other time-response effects that happen in our eyes, while others are an incorrect resolution of real ambiguity from a two-sensor system, which any sensor system will struggle with.
> Just because we think two things appear the same does not mean they are
I don't think anyone is suggesting they're the same, but I see many people suggesting, with seemingly undue confidence, that they're completely unrelated, which would require an understanding of either system that we don't have.
> To put your argument differently - ...
My argument is: if it looks 3D to your eyes, then that understanding of, and relation between, 2D and 3D space already exists in the system, to some degree.
I know that there are various strategies for picking pseudo random numbers, but what I mean by determinism is that as far as we can tell, the activation trigger for a neuron is highly dependent on outside signals like hormones. That means that a biological “dot product” will be different if you are tired vs. well rested.
A dot product run on an Intel CPU is intended to always give the same result, no matter what. The thermal-noise stuff isn't changing the way the circuits do math.
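To illustrate the distinction being drawn (a toy contrast only, not a model of real neurons; the 'fatigue' factor is invented for the example): the silicon dot product returns the same value on every run, while a state-modulated version does not:

    import numpy as np

    weights = np.array([0.2, 0.5, 0.3])
    inputs = np.array([1.0, 0.0, 2.0])

    # Silicon: the same operands always give the same answer.
    print(np.dot(weights, inputs))  # 0.8 on every run

    # Toy stand-in for the "biological" version: the result is modulated by
    # global state (a made-up 'fatigue' factor) plus a little noise.
    rng = np.random.default_rng()

    def noisy_dot(w, x, fatigue=0.1):
        gain = 1.0 - fatigue + rng.normal(scale=0.05)
        return gain * np.dot(w, x)

    print(noisy_dot(weights, inputs))  # different on every run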
As to optical illusions, the point I was making was that "the way humans practically perceive something" is not a sufficient way to measure the similarity of two things, since our perceptions are so often tricked (LLMs are literally trained specifically to trick you into thinking you're talking to sentience).
I also don’t think they are completely unrelated. As I said, I know we designed one to be like the other. It’s just by no means “the same.” AI may one day converge toward how brains think, but it’s still important to recognize that as convergent evolution.
I see all of this as implementation details of a single biological system that exhibits "thought", not a definition of "thought".
> It’s just by no means “the same.”
The implementation is not the same. Everyone agrees with that. The concept being compared is "thought", not "implementation of thought". Maybe I'm lost.
I'm just saying that "thinking" is a well-defined class of related things that only animals do.
I would argue that expanding the definition to include the kinds of things that semiconductors do really dilutes the meaning of "thinking", to the point of being nearly meaningless.
Maybe to put it succinctly - no computer has ever done anything without a human input (even if it's millions of layers abstracted), but thinking just happens spontaneously.
If that's not a sufficient condition to differentiate 'thinking' from 'calculating,' then IDK what 'thinking' even means.
[1] https://www.intel.com/content/www/us/en/developer/articles/g...
[2] https://www.coltsteele.com/tips/understanding-openai-s-tempe...
[3] https://blog.frontiersin.org/2018/04/26/artificial-intellige...