Obviously it's well over a year since this article was posted, and if anything I've anecdotally noticed hallucinations getting more common, not less.
Possibly/probably, with another year's experience with LLMs, I'm just more attuned to noticing when they have lost the plot and are making shit up.
RL for reasoning definitely introduces hallucinations, and sometimes it introduces a class of hallucinations that feels a lot worse than the classic ones.
I noticed OpenAI's models picked up a tendency to hold strong convictions on completely unknowable things.
"<suggests possible optimization> Implement this change and it will result in a 4.5% uplift in performance"
"<provides code> I ran the updated script 10 times and it completes 30.5 seconds faster than before on average"
It's bad enough that it convinces itself it did things it can't do, but then it goes further and hallucinates insights from the tasks it hallucinated itself doing in the first place!
I feel like lay people aren't ready for that. Normal hallucinations felt passive, like a slip-up. To the unprepared, this becomes more like someone actively trying to sell their slip-ups.
I'm not sure if it's a form of RL hacking making it through to the final model or what, but even OpenAI seems to have noticed it in testing based on their model cards.
> “If you look at the models before they are fine-tuned on human preferences, they’re surprisingly well calibrated. So if you ask the model for its confidence to an answer—that confidence correlates really well with whether or not the model is telling the truth—we then train them on human preferences and undo this.”
Now that is really interesting! I didn't realize RLHF did that.
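For anyone unfamiliar with the term, "well calibrated" just means the confidence the model states tracks how often it is actually right. Here's a minimal sketch of how you'd check that, with made-up confidences and outcomes purely for illustration (nothing here comes from OpenAI's evaluations):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Rough calibration check: bin answers by the model's stated
    confidence and compare each bin's average confidence to its
    actual accuracy. A well-calibrated model has small gaps."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its share of answers
    return ece

# Hypothetical numbers: confidences the model reported, and whether
# each answer turned out to be right.
conf = [0.95, 0.90, 0.80, 0.60, 0.55, 0.30]
hit  = [1,    1,    1,    1,    0,    0]
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```

The quote's claim, in these terms, is that the pre-RLHF model has small gaps between stated confidence and accuracy, and preference tuning widens them.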
The article argues that these brainfarts could be beneficial for exploring new ideas.
I don't agree. The "temperature" parameter should be used for this. Confabulation / bluff / hallucination / unfounded guesses are undesirable at low temperatures.
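To make that concrete: temperature just rescales the logits before sampling, so it controls how much the decoder explores without changing anything the model has learned to assert. A minimal sketch (NumPy, with hypothetical logits):

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=np.random.default_rng()):
    """Sample a token index from raw logits, scaled by temperature.

    temperature < 1.0 sharpens the distribution (more deterministic),
    temperature > 1.0 flattens it (more exploratory). This is the knob
    the comment above refers to: randomness at decode time.
    """
    logits = np.asarray(logits, dtype=np.float64)
    scaled = logits / max(temperature, 1e-8)       # avoid division by zero
    scaled -= scaled.max()                         # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
    return rng.choice(len(probs), p=probs)

# Same logits, sampled at low vs. high temperature.
logits = [2.0, 1.0, 0.2]
print("T=0.2:", [sample_token(logits, temperature=0.2) for _ in range(10)])  # almost always the argmax
print("T=2.0:", [sample_token(logits, temperature=2.0) for _ in range(10)])  # much more varied
```

If you want exploration, you turn this knob up; a model confidently asserting measurements it never made is a different failure mode entirely.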
There is ample evidence that hallucinations are incurable in the best extant model of intelligence: people.
Someday we'll figure out how to program computers to behave deterministically so that they can complement our human abilities rather than badly impersonate them.
The more accurate word would be confabulation.
You lost this battle, sorry. It's not going to happen.
Both terms are "inaccurate" because we're talking about a computer program, not a person. However, at this point "hallucination" has been firmly cemented in public discourse. I don't work in tech, but all of my colleagues know what an AI hallucination is, as does my grandmother. It's only a matter of time until the word's alternate meaning gets added to the dictionary.
Correct. This is the way language works. It's annoying when you know what words mean, but this is the way it is.
Maybe I lost this battle, but also in science the terminology evolves. If you replace AI hallucination with AI confabulation, even your grandmother would get it right. I also don't agree that both terms are equally inaccurate.
> Maybe I lost this battle, but also in science the terminology evolves.
Ah yes, science, where we have fixed stars that move, imaginary numbers that are real, and atoms that can be divided into smaller particles.
Obviously, a hallucination is by definition a perception, so the term incorrectly anthropomorphizes AI models. Confabulation, on the other hand, means filling in gaps with fabrication, which is exactly what LLMs do (aka bullshitting).
What an absurd prediction.
I wonder if it would be better to have one "perfect" LLM trying to solve problems or five intentionally biased LLMs.
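A toy sketch of the "five biased LLMs" idea: give the same question to several deliberately different personas and take a majority vote. Here `ask` is a hypothetical stand-in for a real model call, and the canned answers exist only so the example runs.

```python
from collections import Counter

# Stand-in for a real model call; in practice this would hit an LLM
# with a system prompt that deliberately biases its perspective.
def ask(persona: str, question: str) -> str:
    canned = {
        "optimist": "yes",
        "skeptic": "no",
        "empiricist": "yes",
        "theorist": "yes",
        "contrarian": "no",
    }
    return canned[persona]

def ensemble_answer(question: str, personas: list[str]) -> str:
    """Collect one answer per intentionally biased persona and return
    the majority vote, rather than trusting a single 'perfect' model."""
    votes = Counter(ask(p, question) for p in personas)
    return votes.most_common(1)[0][0]

personas = ["optimist", "skeptic", "empiricist", "theorist", "contrarian"]
print(ensemble_answer("Will this refactor improve latency?", personas))  # -> "yes"
```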
I'm so tired of these rich dweebs pontificating to everyone.