- How many parameters does it have? (10B, 50B, or something else?)
- How is it deployed?

I guess OpenAI must have deployed many instances of "gpt-5-nano" to handle the huge number of requests coming from all over the world at the same time. Do all of those instances use exactly the same default parameters for client.responses.create? Is there any chance of different default parameters when the same user calls the API many times with the same input? And is gpt-5-nano in the same state whether or not a figure ("fig") is included in the input?
```python
messages_1 = [{"role": "user", "content": [
    {"type": "input_text", "text": text_prompt},
    {"type": "input_image", "image_url": f"data:image/jpeg;base64,{bars_fig}"},
]}]
messages_2 = [{"role": "user", "content": [
    {"type": "input_text", "text": text_prompt},
]}]
```
I call the API twice in a row, once with messages_1 and once with messages_2, and I want to know whether the second call could be affected by the first, for example by a different model, state, or default parameters.
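For concreteness, the two consecutive calls look roughly like this (a sketch, assuming the variables above are already defined and OPENAI_API_KEY is set in the environment):

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# First call: text + image
resp_1 = client.responses.create(model="gpt-5-nano", input=messages_1)

# Second call, immediately afterwards: text only
resp_2 = client.responses.create(model="gpt-5-nano", input=messages_2)

print(resp_1.output_text)
print(resp_2.output_text)
```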
Hi @delbet_kk
Welcome to the developer community forum!
OpenAI doesn’t publish the parameter counts for GPT-5 or its smaller variants.
When you call the API, your request runs on shared, stateless infrastructure; there is not a separate copy of the model for each user.
All users share the same default parameters unless you override them in your call. When two identical requests go to the same model, any difference in output is just due to randomness: partly inherent to the nature of computing clusters (the way prompts get packed into batches and executed, GPU clock timings, etc.), and partly the deliberate temperature setting when it is greater than 0. (The latter is not an exposed parameter on the GPT-5 models.)
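You can see this for yourself by firing the same request several times and comparing the outputs (a minimal sketch, reusing your messages_2 from above):

```python
from openai import OpenAI

client = OpenAI()

# Send an identical request 5 times; any variation in the outputs comes
# from the runtime randomness described above, not from different model
# copies or shifting defaults.
outputs = set()
for _ in range(5):
    resp = client.responses.create(model="gpt-5-nano", input=messages_2)
    outputs.add(resp.output_text)

print(f"{len(outputs)} distinct output(s) across 5 identical calls")
```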
The model doesn't carry state between requests, so adding an image in one call won't affect another text-only call; each request is handled independently. However, if you include the prior inputs and outputs in your next request's message object (or let the system handle that for you), then those earlier inputs and outputs will affect the output generated for the latest prompt.
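In your example, the only way the first call can influence the second is if you explicitly carry it forward. With the Responses API you can do that by chaining with previous_response_id (a minimal sketch):

```python
from openai import OpenAI

client = OpenAI()

# Independent: resp_2 knows nothing about resp_1.
resp_1 = client.responses.create(model="gpt-5-nano", input=messages_1)
resp_2 = client.responses.create(model="gpt-5-nano", input=messages_2)

# Chained: resp_3 *does* see the first call's inputs and outputs,
# because it is explicitly linked via previous_response_id.
resp_3 = client.responses.create(
    model="gpt-5-nano",
    input=messages_2,
    previous_response_id=resp_1.id,
)
```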
Hope this helps!