Hello team,
I’m working on a C# application using the Azure OpenAI SDK. I noticed that JSON mode has been released for Azure in the Python SDK. Is it available in the .NET SDK as well, and if so, how can I access it?
Thank you very much for your reply.
The “JSON mode” is more of an experimental, unreliable feature than an actual mode.
By specifying a single new response_format parameter in API calls (with the same gpt-4-1106-preview model specification), you can get output from an AI model trained to generate JSON without needing as much instruction.
However, it is no guarantee of valid JSON output; in fact, if your instructions do not still specify the JSON format the AI is to produce, you will get useless repeating tokens out of the model.
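For example, here is a minimal sketch of such a call with the Python openai 1.x client (the prompt and key handling here are placeholders, not a recommendation):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    # the new parameter that turns on "JSON mode"
    response_format={"type": "json_object"},
    # the instructions must still describe the JSON you want,
    # or the model can degenerate into repeating tokens
    messages=[
        {"role": "system", "content": "You are an assistant. Respond with JSON using the key 'msg'."},
        {"role": "user", "content": "Happy assistant day."},
    ],
)

print(response.choices[0].message.content)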
Hi, thanks for getting back to me. Do you happen to know how to do this with the .NET SDK?
In Python, it appears as follows:
model = "gpt-4-1106-preview",
response_format = { "type": "json_object" },
However, in C#, I’m unsure how to set the response format.
You mean this one: https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.OpenAI_1.0.0-beta.9/sdk/openai/Azure.AI.OpenAI? I can’t find any of the recent features in it. Their last update was the same day as DevDay, but I can’t find any references to the new Assistants API and the other new features.
Alright, thank you.
I’ve also posted the question on other forums. If I receive an answer elsewhere, I’ll update this thread with the correct response.
You can see the actual HTTP request bytes sent to the API below; the b'…' wrapper is Python’s bytes literal notation.
– message content –
{
"msg": "Thank you very much! Your kind words are greatly appreciated. If there
– usage –
{'completion_tokens': 20, 'prompt_tokens': 31, 'total_tokens': 51}
remaining-requests: 4999
– API request –
b'{"messages": [{"role": "system", "content": "You are an administrative assistant. you respond with json using the key 'msg'."}, {"role": "user", "content": "Happy assistant day."}], "model": "gpt-4-1106-preview", "max_tokens": 20, "response_format": {"type": "json_object"}}'
My workaround is to send the HTTPS request directly rather than going through the SDK.
…and now you can see what that fully-formed request looks like as written by OpenAI’s Python library.
https://api.openai.com/v1/chat/completions
And throw in some headers.
Headers({'host': 'api.openai.com', 'accept-encoding': 'gzip, deflate, br', 'connection': 'keep-alive', 'accept': 'application/json', 'content-type': 'application/json', 'user-agent': 'OpenAI/Python 1.3.4', 'x-stainless-lang': 'python', 'x-stainless-package-version': '1.3.4', 'x-stainless-os': 'Windows', 'x-stainless-arch': 'other:amd64', 'x-stainless-runtime': 'CPython', 'x-stainless-runtime-version': '3.11.5', 'authorization': '[secure]', 'x-stainless-raw-response': 'true', 'content-length': '274'})
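For reference, here is a rough sketch of reproducing that same request without any SDK, using the requests library (the endpoint, body, and model come from the capture above; OPENAI_API_KEY is assumed to be set in the environment):

import os
import requests

# same endpoint and body as the captured request above
url = "https://api.openai.com/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
}
body = {
    "messages": [
        {"role": "system", "content": "You are an administrative assistant. You respond with JSON using the key 'msg'."},
        {"role": "user", "content": "Happy assistant day."},
    ],
    "model": "gpt-4-1106-preview",
    "max_tokens": 20,
    "response_format": {"type": "json_object"},
}

reply = requests.post(url, headers=headers, json=body).json()
print(reply["choices"][0]["message"]["content"])
print(reply["usage"])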
Here is the response from a member of the product team at Microsoft: “The current version of the client library does not yet have support for JSON mode, but we are actively working on it and should be able to release an update soon.”