How to print o1 reasoning tokens

How can I also print the o1 reasoning tokens? Are they available?
For example in this case:

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": "Write a Python script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format."
        }
    ]
)
print(response.choices[0].message.content)

My understanding is that this is not available, due to proprietary and possibly safety concerns: the same reason that ChatGPT only shows an abstracted summary of the reasoning rather than the actual reasoning.
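
That said, the API does at least report how many reasoning tokens were spent, even though their content is hidden. A minimal sketch, reusing the response object from the snippet above and assuming a recent openai-python version where usage includes completion_tokens_details:

usage = response.usage
# The reasoning text itself is never returned, but its token count is.
# completion_tokens_details may be absent on older SDK versions.
print("completion tokens:", usage.completion_tokens)
print("of which reasoning:", usage.completion_tokens_details.reasoning_tokens)

So you can measure (and pay for) the reasoning; you just can't read it.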

Surely more transparency = more safety?

What I’ve heard is that it reveals “private information”, as in training data, that OpenAI doesn’t want competitors using to train their own models.

Think of cybersecurity—the details aren’t revealed because it gives more power to malicious actors. Whether this is ultimately the “right” way is subjective, and while I would certainly prefer it wasn’t necessary, that is the world I see :wink:

The flip side is that, just as you can’t check sources, you also can’t check the process, which makes decision-making very opaque.

Just because you haven’t entered a parameter doesn’t mean you wouldn’t consider it if you could look at the process.

I am interested: will models like 4o continue?

Will there be a CHOICE of one-shot logical query vs reasoned query? (I think this is basically the distinction?)
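
(As far as I can tell, the current API already exposes that choice as the model parameter; a rough sketch of what I mean, with gpt-4o standing in for the one-shot case:)

from openai import OpenAI

client = OpenAI()
msgs = [{"role": "user", "content": "Is 9.11 larger than 9.9?"}]

# One-shot: the model answers directly.
fast = client.chat.completions.create(model="gpt-4o", messages=msgs)

# Reasoned: the model spends hidden reasoning tokens before answering.
reasoned = client.chat.completions.create(model="o1-preview", messages=msgs)

print(fast.choices[0].message.content)
print(reasoned.choices[0].message.content)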

The more layers added to this, whether voice or reasoning or whatever, the more weird, ‘unnatural’ bias gets added to queries, I think. Noise that you’d usually want to filter out, not add.

I agree. Perhaps, as competitors (and/or open-source) catch up, the need to obscure will diminish, but who’s to say at this point?

I mean, consider this: even community members here aren’t willing to be fully transparent, lest they lose their perceived “edge”.

… and this is how I think it should be, for o1 too…

Reasoning carries broader responsibility, and often responsibility outside its immediate scope.

This puts weight on perspective and scope, which makes sense.

I think the need to obscure will ‘evolve’. I think clarity is something you work for.

If we can see the reasoning and thinking in the UI, why can’t we print it from the API? I’m not understanding. I’d love to be able to print the reasoning for an API response, if that’s possible.

But you don’t. You don’t see all the tokens in ChatGPT that are being iterated over; there’s a lot of background processing you don’t see.

Precisely—in fact, you can sometimes see that the “thinking” is summarising a response behind the scenes.

The weird “quirks” you see can also be enlightening. For example, the perspective can change: the summary/thinking might switch from referring to itself (e.g. “I am mapping out…”) to referring to the “assistant” (e.g. “The assistant is…”). Or you can see a disconnect (e.g. a misunderstood word or abbreviation, or, as I have had, suddenly switching languages mid-sentence) that is mysteriously resolved between the “thoughts” and the actual response. These show that there is a lot more going on than we get to see.

Lack of clarity is itself information. Let’s imagine that you don’t need much fine-tuning to get good results for a given task, maybe just smart prompts ;). Now imagine a model that can assign chains of prompts and models to a given query and execute the chain. If you were able to see the hidden prompts, you would be able to reverse-engineer that stuff.
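
To make that concrete (everything below is hypothetical, invented for illustration, and not an actual OpenAI mechanism): a router assigns a hidden chain of prompts and models to each query and executes it, and the caller only ever sees the final answer. If the hidden prompts were visible, the whole chain could be copied.

from openai import OpenAI

client = OpenAI()

# Hypothetical hidden chains: which model runs which secret prompt.
HIDDEN_CHAINS = {
    "math": [
        ("gpt-4o-mini", "Restate the problem formally before solving."),
        ("o1-preview", "Solve the restated problem step by step."),
    ],
    "default": [
        ("gpt-4o", "Answer the question directly."),
    ],
}

def run_chain(query: str, task: str = "default") -> str:
    text = query
    for model, hidden_prompt in HIDDEN_CHAINS.get(task, HIDDEN_CHAINS["default"]):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": f"{hidden_prompt}\n\n{text}"}],
        )
        text = response.choices[0].message.content
    # Only the final text escapes; the hidden prompts never do.
    return text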