Loss of GPT 3.5 model context via API-Langchain

Good morning everyone, I hope you are all doing well.
I’m building a bot with embeddings via LangChain, and I’m particularly interested in context retention, since it’s one of the most striking and useful capabilities of the model.
However, I’m noticing that the bot has problems in this area. For example, I ran this simple test:

Human: We are going to start a list of numbers and you will continue. Let’s start at 5. What’s next?
GPT: The next number after 5 is 6.
Human: and after that which one is next?
GPT: I can’t answer your question without more context. What do you mean “that”? Please provide more information so that I can help you more precisely.

I’m fairly sure this is due to the condense_question_prompt parameter of LangChain’s ConversationalRetrievalChain.from_llm function, since it is responsible for rephrasing the user’s question based on the chat history. However, this reformulation does not always come out well, because sometimes it does not capture the context correctly. Continuing with the previous example, sometimes the reformulation is “After that, what’s next?” instead of “What number follows after 6?”. What would be the correct way to reformulate the question so that it properly captures the context?

I have also tried different prompts for condense_question_prompt that instruct more emphatically to rephrase the question using the context, but then, when the topic changes during the conversation, the bot forces the conversation back to the original topic, as if it were always dragging the initial topic along, and that doesn’t look good either.
This seems to be a fairly common situation. So my question is: is there a prompt that produces a reformulation better suited to the context, or some other way to build a bot with embeddings that retains the context better?
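For reference, here is a minimal sketch of the kind of setup I mean (the document contents, prompt wording, and model settings are illustrative placeholders, not my exact configuration):

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain.vectorstores import FAISS

# Custom condense prompt: rephrase follow-ups into standalone questions,
# but leave questions that are already self-contained (or change topic) untouched.
CONDENSE_PROMPT = PromptTemplate.from_template(
    """Given the following conversation and a follow up question, rephrase the
follow up question to be a standalone question. If the follow up question is
already self-contained or introduces a new topic, return it unchanged.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
)

# Placeholder vector store with dummy content, just so the chain is runnable.
vectorstore = FAISS.from_texts(["placeholder document"], OpenAIEmbeddings())

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    memory=memory,
    condense_question_prompt=CONDENSE_PROMPT,
)

print(chain({"question": "We are going to start a list of numbers and you "
                         "will continue. Let's start at 5. What's next?"})["answer"])
print(chain({"question": "and after that which one is next?"})["answer"])
```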
Thank you very much in advance.

Out of interest, I tried this with my agent and it gave me this output:

This is not LangChain, but it is very similar. It is not just a simple back-and-forth with the API; this agent has access to tools if required (it’s actually using OpenAI functions).

The key thing here is that this is GPT 3.5.

I think something is wrong with your LangChain setup, not GPT 3.5.
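For comparison, here is a minimal sketch (not my actual agent, and no LangChain; it uses the legacy openai Python SDK style) that simply sends the full message history to the Chat Completions API. With the prior turns included, GPT 3.5 handles the follow-up without complaining about missing context:

```python
import openai  # legacy openai<1.0 SDK style, shown for brevity

# The full conversation so far is sent verbatim on every call, so the model
# sees the context directly; no question-rephrasing step is involved.
messages = [
    {"role": "user", "content": "We are going to start a list of numbers and "
                                "you will continue. Let's start at 5. What's next?"},
    {"role": "assistant", "content": "The next number after 5 is 6."},
    {"role": "user", "content": "and after that which one is next?"},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response["choices"][0]["message"]["content"])
```

If that works while your chain does not, the issue is in how the chain condenses the question before retrieval, not in the model’s ability to retain context.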