We are using the API with GPT-4 and experiencing some bleed-over between API calls. Say we make three calls within a second or so of each other: two come back fine, but the third comes back with part of the result for call 3 and part of the result for call 1.
Anyone experience this and know of a fix?
I had an app using the gpt-3.5-turbo model where I would include the language to respond in. I was testing: request 1, respond in French. Then in request 2 (with no message history), I asked it to respond in English. It returned the response in French again. This "memory" was a consistent enough problem that I ended up pulling multilingual support from the app.
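For what it's worth, the way I structured each request should have ruled out any carry-over on my end. A minimal sketch (assumptions: `build_payload` is a hypothetical helper of mine, not part of any SDK; the payload shape just mirrors a chat-style `messages` array):

```python
def build_payload(user_text: str, language: str) -> dict:
    """Build a fresh, self-contained request body for one call.

    Each request carries ONLY its own instruction -- no prior turns --
    so the model has nothing to "remember" a previous language from.
    """
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": f"Respond only in {language}."},
            {"role": "user", "content": user_text},
        ],
    }

req1 = build_payload("Summarize our pricing page.", "French")
req2 = build_payload("Summarize our pricing page.", "English")

# req2 contains no trace of req1's French instruction.
assert "French" not in str(req2["messages"])
```

Even with requests built like this, the second response still came back in French.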
My search plugin spawns up to 10 `requests.get` calls in parallel (and keeps spawning more as those come back). Each GET response is followed by a gpt-3.5 call as soon as it arrives, so there can be 10 or more gpt-3.5 calls in flight at once.
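Roughly this fan-out pattern (a sketch, not my actual plugin code; `fetch_page` and `summarize` are stand-ins for the real `requests.get` and gpt-3.5 calls, so nothing here touches the network):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_page(url: str) -> str:
    # Stand-in for requests.get(url).text
    return f"<html>content of {url}</html>"

def summarize(page: str) -> str:
    # Stand-in for a gpt-3.5 completion call on the fetched page
    return f"summary({page})"

def fetch_and_summarize(url: str) -> str:
    # Each URL gets its own fetch followed immediately by its own
    # summarize call; no state is shared between the parallel tasks.
    return summarize(fetch_page(url))

urls = [f"https://example.com/{i}" for i in range(10)]
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = {pool.submit(fetch_and_summarize, u): u for u in urls}
    results = {futures[f]: f.result() for f in as_completed(futures)}

# Each summary corresponds to exactly one URL.
assert all(u in results[u] for u in urls)
```

The key point is that each task's data lives entirely in its own function call, so even with 10 calls in flight there is nothing shared for one response to bleed into.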
I’ve never seen bleed from one to another. Not saying it couldn’t happen.
(Of course I’m not sure I’d know, I don’t review every response in detail. However, whenever I follow the returned link, the page seems to match the summary…)
This is likely a bug in your code caused by inconsistent message-history state management.
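A minimal sketch of the kind of bug I mean (hypothetical code, just to illustrate; `ask_buggy` and `ask_fixed` are made-up names, and the payload shape mirrors a chat-style `messages` array):

```python
buggy_messages = []  # BUG: one shared list reused across every request

def ask_buggy(prompt: str) -> list:
    # Appends to the shared list, so request 3's payload still carries
    # the content of requests 1 and 2.
    buggy_messages.append({"role": "user", "content": prompt})
    return list(buggy_messages)

def ask_fixed(prompt: str) -> list:
    # A fresh list per call: nothing from earlier requests can bleed in.
    return [{"role": "user", "content": prompt}]

ask_buggy("request 1")
ask_buggy("request 2")
payload3 = ask_buggy("request 3")
assert "request 1" in str(payload3)  # contamination from the first call

assert ask_fixed("request 3") == [{"role": "user", "content": "request 3"}]
```

With concurrent calls, the same shared-state mistake can also mix partial results between requests, which would look exactly like "bleed-over".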
Except the only message-history state we keep is a log file written for debugging purposes. Each POST to the API is a new call that contains only that specific message.
Likely an issue in your code. Are you using Node with streaming enabled?