How can I also print the o1 reasoning tokens? Are they available?
For example in this case:
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": "Write a Python script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format."
        }
    ]
)
print(response.choices[0].message.content)
My understanding is that this is not available, due to proprietary concerns and possibly safety: the same reason that ChatGPT only shows an abstracted summary of the reasoning rather than the actual reasoning.
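What the API does expose, at least with recent SDK versions, is the number of reasoning tokens that were used (and billed), via the usage object. A minimal sketch, assuming the same client and response as above and an openai package version that includes completion_tokens_details:

# The reasoning itself is not returned; only the token count is.
details = getattr(response.usage, "completion_tokens_details", None)
if details is not None:
    print("Reasoning tokens used:", details.reasoning_tokens)
print("Total completion tokens:", response.usage.completion_tokens)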
What I've heard is that it reveals "private information", as in training data, that OpenAI doesn't want competitors using to train their own models.
Think of cybersecurity: the details aren't revealed because revealing them gives more power to malicious actors. Whether this is ultimately the "right" way is subjective, and while I would certainly prefer it weren't necessary, that is the world I see.
The flip side is that, just as we are unable to check sources, we are also unable to check the process, which leaves decision-making very broad.
Just because you haven't entered a parameter doesn't mean you wouldn't consider it if you could look at the process.
I am interested: will models like 4o continue?
Will there be a CHOICE between a one-shot logical query and a reasoned query? (I think this is basically the distinction?)
The more layers added to this, in voice or reasoning or whatever, add lots of weird "unnatural" bias to queries, I think. Noise that you'd usually want to filter out, not add.
If we can see the reasoning and thinking in the UI, why can't we print it via the API? I'm not understanding. I'd love to be able to print the reasoning for an API response if that's possible.
Precisely; in fact, you can sometimes see that the "thinking" is summarising a response behind the scenes.
The weird "quirks" you see can also be enlightening, like the perspective changing: the summary/thinking might switch from referring to itself (e.g. "I am mapping out…") to referring to the "assistant" (e.g. "The assistant is…"). Or you can see a disconnect (e.g. a misunderstood word or abbreviation, or, as I have had, suddenly switching languages mid-sentence) that is mysteriously resolved between the "thoughts" and the actual response. These show that there is a lot more going on than we get to see.
Lack of clarity is itself information. Let's imagine that you don't need much fine-tuning to get good results, maybe just smart prompts, for a given task ;). Imagine a model that can assign prompts to chains of models for a given query and execute the chain: if you were able to see the hidden prompts, you would be able to reverse-engineer that stuff.
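To make that concrete, here is a toy sketch of the kind of chain being described. The prompts, model name, and two-step structure are purely hypothetical; the point is only that the intermediate "planner" prompt and its output never have to be shown to the caller, so they cannot easily be reverse-engineered:

from openai import OpenAI

client = OpenAI()

# Hypothetical hidden prompt that the end user never sees.
HIDDEN_PLANNER_PROMPT = "Break the user's request into short numbered steps."

def answer(query: str) -> str:
    # Step 1: a hidden "planner" call produces an internal plan.
    plan = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": HIDDEN_PLANNER_PROMPT},
            {"role": "user", "content": query},
        ],
    ).choices[0].message.content

    # Step 2: a second call executes the plan; only this final answer is returned.
    return client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": f"Follow this plan:\n{plan}\n\nTask: {query}"},
        ],
    ).choices[0].message.content

print(answer("Transpose the matrix '[1,2],[3,4],[5,6]'"))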