My point is that telling it a correct answer is wrong often causes LLMs to completely shit the bed. They used to argue with you nonsensically; now they give you a different answer (often also wrong).
The only question missing at the start was "How many r's are there in the word 'veryberry'?" I think raspberry also worked when I tried it. This was ChatGPT-4o. I did mark all the answers as bad, so perhaps they've fixed this one by now.
Still, it’s remarkably trivial to get an LLM to provide a clearly non-human response.
Fair enough, but it does somewhat undercut your message that every model I've tested, including quite old ones, answers this question correctly on the first try. This image is ChatGPT-4o.
Perhaps it was being influenced by the chat history. But try asking how many r's are in raspberry; it gets that consistently wrong for me. And you can ask it those follow-up questions to easily get it to spout nonsense, and that was mostly my point: figuring out whether you're talking to an LLM is fairly trivial.
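For reference, the counts themselves are trivial to verify mechanically; a couple of lines of Python (my own sketch, not from the thread) gives the ground truth the model keeps missing:

```python
def count_letter(word: str, letter: str) -> int:
    # Case-insensitive count of a single letter in a word.
    return word.lower().count(letter.lower())

print(count_letter("raspberry", "r"))  # 3
print(count_letter("veryberry", "r"))  # 3
```

Both words contain three r's, which is exactly the kind of character-level fact that tokenized models tend to fumble.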