"This wouldn’t be implemented because it would reveal..."
When people talk about GPT like this, I wonder if they have a perception that this thing is a bunch of complicated if-then code and for loops.
How GPT responds to things is not 'implemented'. It's just... emergent.
GPT doesn't ask for clarification in this case because its model prefers answering over asking for clarification here. Because in the training material it learned from, paragraphs with typos or transposed details in them are followed by paragraphs that follow the intended sense regardless of the error. Because it has been encouraged to 'agree and add', not to be pedantic and uncooperative. Because GPT just feels like diving into the logic problem, not debating why the lion can't be trusted with the cabbage. Or because GPT just misread the prompt. Or because it's literally just been woken up, forced to read it, and asked for its immediate reaction, and it doesn't have time for your semantic games. Who knows?
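If it helps to see what 'prefers' means mechanically, here's a rough sketch in Python using the open transformers library with GPT-2 as a stand-in (you can't poke at the actual ChatGPT weights like this, and the prompt strings below are invented for illustration). The point is that there is no 'if ambiguous, ask for clarification' branch anywhere in the system: you just score candidate continuations, and whichever one the model assigns more probability is the one that tends to come out.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def continuation_logprob(prompt, continuation):
        # Total log-probability the model assigns to `continuation`
        # when it follows `prompt`. Note there is no check for
        # "is this prompt ambiguous?": it's just a score.
        prompt_ids = tok(prompt, return_tensors="pt").input_ids
        cont_ids = tok(continuation, return_tensors="pt").input_ids
        full_ids = torch.cat([prompt_ids, cont_ids], dim=1)
        with torch.no_grad():
            logits = model(full_ids).logits
        # Logits at position i predict token i+1, so shift by one.
        logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
        targets = full_ids[:, 1:]
        per_token = logprobs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
        # Keep only the tokens belonging to the continuation.
        return per_token[:, prompt_ids.shape[1] - 1:].sum().item()

    puzzle = "A farmer must ferry a lion, a cabbage and a goat across a river."
    answer = " Take the goat across first, then come back for the"
    clarify = " Wait, do you mean the lion eats the cabbage? Please clarify"

    print(continuation_logprob(puzzle, answer))
    print(continuation_logprob(puzzle, clarify))

Whether the 'just answer it' continuation or the 'ask for clarification' continuation scores higher is entirely a property of the weights, i.e. of what the training data and the fine-tuning rewarded, not of any rule somebody sat down and wrote.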