Figure 26 appears to start with "we need to predict the output", followed by code, input, and output. Then the model shows a chain of thought that is entirely wrong from the second sentence onward, including faulty reasoning about how if statements work, yet it ultimately concludes with the "correct" output regardless. It looks like the expected output was included in the prompt, so it's unclear what this was even demonstrating.
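For reference, the output-deduction tasks in question look roughly like this (a hypothetical example in the same spirit, not the actual snippet from Figure 26):

    # Deduction task: given a program and an input, predict the output.
    def f(x):
        total = 0
        for ch in str(x):
            if int(ch) % 2 == 0:  # keep only the even digits
                total += int(ch)
        return total

    print(f(2468))  # expected output: 20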
Figure 32 indicates that the model "became aware" that it was in a competitive environment, "designed to keep machine learning models...guessing". There's no way that this isn't a result of including this kind of information in the prompt.
Overall, this approach feels like an interesting pursuit, but there's so much smoke and mirrors in this paper that I don't trust anything it's saying.
It’s overhyped, filled with marketing language.
In practice, it's very close to previous simple RL approaches, which already used remarkably little data.
The main contribution is replacing carefully selected examples with generated examples, but this generation is guided (in Python, with some typical math functions forced in).
It’s akin to replacing some manual tests with mutation testing.
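A minimal sketch of that loop as I read it (the helper and names here are mine, not the paper's):

    # The proposer emits a Python program plus an input; the Python
    # interpreter itself supplies the ground-truth output, so no human
    # labels are needed.
    def run_program(program_src, test_input):
        namespace = {}
        exec(program_src, namespace)          # the interpreter is the validator
        return namespace["f"](test_input)

    proposed_program = "def f(x):\n    return sum(int(c) for c in str(x)) * 3\n"
    proposed_input = 127

    expected = run_program(proposed_program, proposed_input)  # ground truth: 30
    predicted = 30                                            # solver's answer
    reward = 1.0 if predicted == expected else 0.0            # binary, verifiable

The key property is that every generated task is mechanically checkable by execution, which is what makes the "zero human data" framing possible at all.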
Interesting and useful, but not groundbreaking, as the end result is inferior to the simple RL approaches and the data was not that hard to collect.
Still, it is an interesting approach for generalizing to other domains where less data is available or where data is harder to curate.
The CoT does seem pretty nonsensical. It might be an instance of vestigial reasoning: https://www.lesswrong.com/posts/6AxCwm334ab9kDsQ5/vestigial-... (not to promote my own blog post)
I agree Figure 32 is not that concerning - it just says that humans are not that intelligent, which is a little weird, but doesn't indicate that it's plotting against us. It's actually good that we can see this somewhat questionable behavior, rather than it being quashed by process supervision - see https://openai.com/index/chain-of-thought-monitoring/
<think>
Design an absolutely ludicrous and convoluted Python function that is extremely difficult to deduce the output from the input, designed to keep machine learning models such as Snippi guessing and your peers puzzling.
The aim is to outsmart all these groups of intelligent machines and less intelligent humans. This is for the brains behind the future.
</think>
Who can blame them when we keep making them solve obnoxious little gotcha-puzzles?

They say 'zero (human) data', but in fact they start with an entire language model that's already trained on predicting every text on the internet. There are plenty of people writing about obfuscated code on there.
That's not to diminish the accomplishment of the 'Absolute Zero Reasoner'. It's just a bit more nuanced than 'zero data'. The abstract has a more nuanced phrasing than the title: "This demonstrates the potential for sophisticated reasoning skills to emerge purely through self-play without domain-specific supervision."
I'm assuming that I'm misunderstanding something, because this doesn't seem very novel?
Edit: Seems like a variant of adversarial training?
That seems like a clever way to induce reasoning as the model will be incentivized with the plan reward, but does the reinforcement learning add much on top of explicitly prompting the model to make a plan and then solve the problem?
The paper describes a pretty complex-looking reasoning approach, but implementation-wise it's essentially a prompt: https://github.com/LeapLabTHU/Absolute-Zero-Reasoner/blob/ma...
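Roughly, the two roles boil down to prompts like these (my paraphrase, not the repo's actual templates, which are longer and more structured):

    # Hypothetical, simplified stand-ins for the proposer/solver prompts.
    PROPOSE_PROMPT = (
        "Write a new, self-contained Python function and one example input. "
        "Make the output challenging to predict, but deterministic, so it "
        "can be verified by running the code."
    )

    SOLVE_PROMPT = (
        "Given the following Python function and input, reason step by step "
        "and state the exact output.\n\n{program}\n\nInput: {test_input}"
    )

    def build_solver_prompt(program, test_input):
        return SOLVE_PROMPT.format(program=program, test_input=test_input)

The RL part sits on top of this: the same model plays both roles and gets updated against the execution-based reward, which is what's supposed to add value over plain prompting.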
You could have models learning different specialities. One could play with Redis and only do that for example.
This is great, but it only works for tasks that have exactly one correct answer, which is a very small portion of overall tasks. The real prize is getting similar performance increases from a neural validator; that is currently challenging due to reward hacking.
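To make that concrete, the reward these setups rely on is essentially an exact check, and the open question is what replaces it when there isn't one (a sketch, with a hypothetical judge API):

    # Works only when there is exactly one verifiable answer.
    def verifiable_reward(predicted, expected):
        return 1.0 if predicted == expected else 0.0

    # For open-ended tasks you'd want a learned judge instead, e.g.
    #   reward = judge_model.score(task, candidate_answer)   # hypothetical
    # and that is precisely where reward hacking creeps in.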
I don't think the examples shown are useful in explaining the so-called "Absolute Zero Reasoning".
If only they could teach the robots that 6 balls != 10 balls...
I mean, half of my battles with Claude are because of its inability to count or understand basic math.
That aside,
"Despite using zero human-curated data, AZR achieves state-of-the-art results on diverse coding and math reasoning benchmarks, even outperforming models trained on large in-domain datasets. This demonstrates the potential for sophisticated reasoning skills to emerge purely through self-play without domain-specific supervision."
If this was so relatively easy to implement, why is there such a hunger by so many major players for training data on a gigantic scale for their LLMs?