
All you get out of o1 is

    Reassessing directives

    Considering alternatives

    Exploring secondary and tertiary aspects

    Revising initial thoughts

    Confirming factual assertions

    Performing math

    Wasting electricity
... and other useless (and generally meaningless) placeholder updates. Nothing like what the <think> output from DeepSeek's model demonstrates.

As Karpathy (among others) has noted, the <think> output shows signs of genuine emergent behavior. Presumably the same thing is going on behind the scenes in OpenAI's o-series reasoning models, but we have no way of knowing, because they consider revealing the CoT output to be "unsafe."


