Hi everyone,
I encountered a weird error almost half a year after adding streaming support to my application.
For context, I use Python and LangChain, and the problem started right after switching to the model 'gpt-4o-mini'.
In short, I take the incoming stream from the OpenAI GPT call response so I can stream the answer on my interface as well.
Here’s the code snippet that breaks:
async for event in qa.astream_events({"question": query}, version="v1"):
    eventMapped = self.map_event(event, is_first_chat_message)
    if eventMapped is not None:
        yield eventMapped
where qa.astream_events is a LangChain method. Sadly I cannot include links here, but please check the LangChain page about 'conversational retrieval chain' and you'll find info and code regarding the astream_events call I'm using.
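While debugging I also tried capping the retrieved context before it gets stuffed into the prompt. A minimal sketch of what I mean (the function name truncate_context and the limit are mine, not part of LangChain):

```python
# Sketch: cap the concatenated retrieved documents so the final prompt
# stays under OpenAI's per-message content limit. Names and the margin
# below are my own, not LangChain API.
MAX_CHARS = 1_000_000  # safety margin under the 1048576 limit from the error

def truncate_context(doc_texts, max_chars=MAX_CHARS):
    """Drop trailing documents once the running total would exceed max_chars."""
    kept, total = [], 0
    for text in doc_texts:
        if total + len(text) > max_chars:
            break
        kept.append(text)
        total += len(text)
    return "\n\n".join(kept)
```

This did keep single requests under the limit when I fed it oversized document lists, but it obviously loses context, so it's a workaround rather than a fix.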
That being said, here’s my error:
2024-07-24T15:30:17.794552+0200 - ERROR - askStream CoreInteraction.askStream exception: askStream Error: ask_stream Error: Error code: 400 - {'error': {'message': "Invalid 'messages[0].content': string too long. Expected a string with maximum length 1048576, but got a string with length 1421997 instead.", 'type': 'invalid_request_error', 'param': 'messages[0].content', 'code': 'string_above_max_length'}}
I had no luck debugging, so I came here to report the error. I cannot even tell where exactly it comes from, and I don't know where to start.
I don't think tokens are the problem: I usually pass OpenAI the same amount of context, and it worked with other models, including the 32k-context ones.
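As far as I can tell (and I may be reading it wrong), the 400 error is about the raw character length of a single message, not tokens, which would explain why models with bigger token context windows didn't behave any better:

```python
# The error code string_above_max_length complains about characters, not
# tokens: the API rejects any single message whose content exceeds
# 1,048,576 characters, regardless of the model's token context window.
MAX_CHARS = 1_048_576   # limit quoted in the error message
my_len = 1_421_997      # length reported for messages[0].content in my log

print(my_len - MAX_CHARS)  # → 373421, how far over the limit my prompt was
```

So even though the token count looked normal to me, something in the chain must have been building one message whose string was far larger than usual.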
Thanks in advance!