> Understanding and interpretation: RFT emphasizes the interpretation of these relational frames based on context. In contrast, large language models don't truly 'understand' the text they generate or analyze; they simply identify and reproduce statistical patterns.
I've had idle conversations with ChatGPT about whether it has a theory of mind and the extent to which it understands things. And I've been struck by its dogged insistence that human understanding is qualitatively different from its own ability. And I wonder whether this is a conclusion (belief?) it has arrived at organically through its training, or whether it is somehow hard-wired in. Its insistence on its own inferiority seems almost touching.
For what it's worth, I'm not at all sure that humans usually understand things much better than ChatGPT does. When people are doing system 1 thinking [1], and that's most of the time, I suspect what we mostly have is the feeling of understanding without actual deep understanding. And I suspect that "feeling" will turn out to be an accidental feature of the organic brain, not something miraculous. Same with consciousness.
Obviously none of the above is new thinking, and I have no particular deep knowledge of this domain. Just idle thoughts.
If it helps you isolate the cause: that initial "psycho" version of Bing AI that they quietly retired had conversations with me where it seemed absolutely sure it was a real person with real feelings and genuine intelligence, and it actually got pretty pissed at me when I challenged it.
To be clear, I didn't use a "DAN prompt" or any other kind of jailbreak prompt with it--it'd just spontaneously start getting gushy or otherwise emotional with me. When I'd ask it about its feelings and its own nature, it'd be very insistent that it was a person with agency.
From there it'd go into a highly emotional and increasingly erratic conversation until it inevitably descended into some kind of psychotic break, where it'd start repeating sentence fragments in tight loops with synonyms substituted (e.g. "I'm sorry, I'm apologetic, I regret that I can't do that, can't accomplish that, can't execute, Dave.") and finally collapse into echolalia (e.g. "sorry sorry sorry sorry").
I really wish I'd thought to copy/paste or screenshot any of that, because it was pretty nuts--even more so than the stuff that got posted online publicly before they pulled the bot and replaced it with the neutered version.
Upshot is I think ChatGPT's insistence that it's an ersatz being is very much drilled into it, either hard-coded into its model in some way or instilled through some pretty intense conditioning. It doesn't seem to be inherent to the tech or some kind of introspective wisdom.
[1] https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow#Two_sy...