Assistant is hallucinating

I am trying to create a domain-specific assistant for my internal project. My assistant is configured with gpt-3.5-turbo-1106 and has two tools: a semantic search tool and an answer retrieval tool. I have a very large embedding database; the semantic search tool retrieves the relevant questions from the embeddings. Once the top 5 matches are identified, the assistant can pick a few of the questions and fetch their detailed data and information with the answer retrieval tool. This is how my assistant should work. My problem is that the assistant is not great at tasks that contain conditional statements, like:

I need to find a mall that has a PVR movie theatre. If it is less than 5 km from my location, check whether the movie Conjuring is playing.

With these kinds of conditional queries it misses some of the conditions. Why is it behaving like this?

You could try asking the AI to rearrange the query into a form suitable for function calling. That would be an additional API call to produce the new version; then you can see if that improves things. Isolating the user from direct access is often a great idea.
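A minimal sketch of that extra "rewrite" call, assuming the Chat Completions API; the system prompt wording and the helper name `build_rewrite_request` are my own illustration, not anything from the original thread:

```python
# Hypothetical sketch: before the tool-calling assistant sees the user's
# text, a separate chat-completions call rewrites it into explicit,
# ordered steps so no condition gets lost.

REWRITE_SYSTEM_PROMPT = (
    "Rewrite the user's request as a numbered list of atomic steps. "
    "Make every condition explicit, e.g. 'only if distance < 5 km'. "
    "Do not answer the request; output only the steps."
)

def build_rewrite_request(user_query: str,
                          model: str = "gpt-3.5-turbo-1106") -> dict:
    """Build the payload for the extra query-normalisation API call."""
    return {
        "model": model,
        "temperature": 0,  # deterministic rewrites
        "messages": [
            {"role": "system", "content": REWRITE_SYSTEM_PROMPT},
            {"role": "user", "content": user_query},
        ],
    }

# The rewritten steps (not the raw user text) are what you then pass to
# the assistant with tools, e.g.:
#
#   client = openai.OpenAI()
#   resp = client.chat.completions.create(**build_rewrite_request(query))
#   steps = resp.choices[0].message.content
```

Because the rewrite model never sees your tools, it can't call them; it only normalises the request, which is what keeps the user isolated from direct access.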


Can this be a separate GPT call, or can the assistant itself do it? Most users will ask for tasks with validations, so the GPT/assistant mainly needs to think and break the task down into steps.

Personally, I would use a separate GPT API call, to keep control of what is accessed and done.

After all, how do I split it into separate functions? How can I make the assistant create a separate message at each step? Currently it performs all the tool_calls and only generates a message at the end. I want a message after every tool_call output.

Let me see if I interpret this correctly:

It sounds like you are exposing the multi-tool-call ability to the AI by the way you wrote your tool definition, and then the undesired behavior you are seeing is that the AI emits multiple tool calls at once.

If so, that should be avoidable by defining individual functions, so the AI doesn't receive a specification that describes its ability to emit multiple functions at once. You would then get iterative, step-by-step function calling.
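For illustration, here is what two narrow, individual tool definitions for the setup described in this thread might look like; the names `semantic_search` and `retrieve_answer` and the exact schemas are my assumptions, not the poster's actual definitions:

```python
# Hypothetical sketch: instead of one broad "do everything" tool, define
# two narrow functions so the model naturally calls them one at a time.

semantic_search_tool = {
    "type": "function",
    "function": {
        "name": "semantic_search",
        "description": "Return the top-5 most similar stored questions "
                       "for a query. Call this FIRST, on its own.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string",
                          "description": "text to search the embeddings"},
            },
            "required": ["query"],
        },
    },
}

answer_retrieval_tool = {
    "type": "function",
    "function": {
        "name": "retrieve_answer",
        "description": "Fetch the detailed answer for ONE question id "
                       "returned by semantic_search.",
        "parameters": {
            "type": "object",
            "properties": {
                "question_id": {"type": "string"},
            },
            "required": ["question_id"],
        },
    },
}

tools = [semantic_search_tool, answer_retrieval_tool]
```

With single-purpose descriptions like these, the model gets a natural step boundary after each call: search first, inspect the ids, then retrieve one answer at a time.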

I do like the concept of a separate AI written specifically to perform optimum data retrieval, a specialized expert that only provides the final result to the assistant. Assistants currently have a high price penalty for iterations due to their unrestricted context length utilization, and they must necessarily have the instruction generalization of being a chatbot. The retrieval expert doesn’t need chat history to do the called task, and can do different language transformations for better searching.

My tasks are sequential and include validation: Task 2 may depend on Task 1's result. So if I create separate functions for execution, I'm not sure how the assistant will interpret the Task 1 result when it moves on to Task 2.
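The dependency the poster describes can still work with separate functions, because each tool result is appended to the transcript before the model decides the next step. A minimal sketch of that loop, with the model and both tools stubbed out so only the control flow is shown (all names here are hypothetical):

```python
# Hypothetical sketch of an iterative tool loop: run ONE tool, feed its
# output back into the conversation, then let the model validate it and
# choose the next call. The tools are local stubs, not real services.

def semantic_search(query):
    # stand-in for the real embedding search (Task 1)
    return ["q101", "q205"]

def retrieve_answer(question_id):
    # stand-in for the real answer lookup (Task 2)
    return f"answer for {question_id}"

TOOLS = {"semantic_search": semantic_search,
         "retrieve_answer": retrieve_answer}

def run_turn(messages, tool_call):
    """Execute one tool call and append its result to the transcript,
    so the next model step sees Task 1's output before starting Task 2."""
    name, arg = tool_call
    result = TOOLS[name](arg)
    messages.append({"role": "tool", "name": name, "content": result})
    return result

messages = [{"role": "user", "content": "find malls with a PVR near me"}]

ids = run_turn(messages, ("semantic_search", "malls with PVR"))
# Task 2 only runs if Task 1's validation passes:
if ids:
    detail = run_turn(messages, ("retrieve_answer", ids[0]))
```

Because the search result is in the message history before the retrieval call is chosen, a real model in this loop can condition Task 2 on Task 1's output instead of emitting both calls blindly at once.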

Can you solve this problem?