Is the OpenAI Python client better than the Node.js client?

Hi,
I usually use the Node.js client to interact with the OpenAI API.

Today I was using a prompt with Node.js and with various models (gpt-4, gpt-4o, gpt-4o-mini) and discovered something:

For a given array of messages (it was actually just one message) sent via the Node.js client and the Python client, the results are completely different, and even “wrong” from the Node.js client.
I say this because I tried the same message in the OpenAI playground, and the results were 100% equivalent to the Python client.
Is there a known difference between the two clients?
Many thanks all for your feedback.

Both are just wrappers/SDKs for API calls, generated with Stainless from OpenAI’s OpenAPI specification.

There shouldn’t be any difference in response, as they’d both be making requests to the same places via the REST API.
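To make that concrete, here is a minimal sketch (with a hypothetical payload) of what “same request” means: a chat completion from either SDK ultimately reduces to the same JSON body POSTed to the same endpoint, so if the two bodies match, the server sees identical requests.

```python
import json

# Hypothetical request body -- both the Python and Node.js SDKs would
# POST an equivalent JSON body to https://api.openai.com/v1/chat/completions.
payload_from_python = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
}
payload_from_node = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Canonicalize (sorted keys) and compare: identical payloads mean
# identical requests on the wire.
def canon(p):
    return json.dumps(p, sort_keys=True)

print(canon(payload_from_python) == canon(payload_from_node))  # True
```

Diffing the canonicalized bodies like this is an easy way to rule out the SDKs as the source of any difference.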


Interesting. I’m curious then, why does the JS SDK support the realtime API and the Python client SDK does not?

I do agree with you, but I tried the exact same prompt on both, and the results are literally different in a bad way: the Node.js response is bad compared to the Python one.
Do you think this could be an issue with the SDK?
I’m blaming the SDK rather than my own code because I tried the same prompt in the playground and the result matched the Python one 100%, so it’s very annoying.

Could you share the sample source code for the Python/Node.js clients?


Potentially invalid test. LLM responses are not deterministic.

Use logging to prove the actual REST calls are identical too.
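On the Python side, one way to capture the outgoing request is to enable wire-level logging for httpx, the HTTP library the openai Python SDK uses; the sketch below only configures logging (it makes no API call), and you would do the equivalent on the Node side and diff the two logs.

```python
import logging

# Turn on DEBUG logging so the outgoing request (method, URL, and the
# HTTP client's request details) is printed when you make the call.
logging.basicConfig(level=logging.DEBUG)

# httpx is the HTTP client underneath the openai Python SDK.
logging.getLogger("httpx").setLevel(logging.DEBUG)

# ...now create the client and make the chat completion call as usual;
# the DEBUG output shows the actual POST the SDK sends.
```

With both sides logged, any mismatch in model, parameters, or message content becomes obvious.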

I use a Ruby API and have no issues.


It’s highly unlikely to be an SDK issue, because this hasn’t been reported before. Unless we get a sudden influx of users reporting the same problem, I suggest digging into the code.
Maybe you are using different parameters for temperature or top-p. Maybe a different model?
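One way to eliminate that possibility is to pin the sampling parameters explicitly in both clients instead of relying on each SDK’s (or your own code’s) defaults; the values below are hypothetical examples.

```python
# Pass the SAME explicit parameters from both the Python and Node.js
# clients so neither relies on implicit defaults (values are examples).
request = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Same prompt in both clients"}],
    "temperature": 0,  # low temperature reduces run-to-run variance
    "top_p": 1,
    "seed": 42,        # best-effort reproducibility, where supported
}
```

If the outputs still diverge with identical, fully explicit parameters, the comparison is much more meaningful, since LLM sampling is otherwise non-deterministic.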
