Creating a vehicle sales assistant using the Assistants API

Hello, we have been trying to create a vehicle sales assistant using the OpenAI Assistants API. The unique part of our assistant is that we also change the left-side screen according to the context of the conversation. For example, if someone asks “Can you give me more details about the XYZ vehicle?”, the assistant should emit a function call “show_vehicle_details” with XYZ as the parameter. This function changes the screen to show the XYZ vehicle.
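
Roughly, the tool we register with the assistant looks like this (a sketch; the function and parameter names are our own):

```python
# Sketch of the tool definition we register with the assistant.
# "show_vehicle_details" and "vehicle_name" are our own names.
show_vehicle_details_tool = {
    "type": "function",
    "function": {
        "name": "show_vehicle_details",
        "description": "Switch the left-side screen to the detail view of a single vehicle.",
        "parameters": {
            "type": "object",
            "properties": {
                "vehicle_name": {
                    "type": "string",
                    "description": "Name of the vehicle to display, e.g. 'XYZ'",
                }
            },
            "required": ["vehicle_name"],
        },
    },
}
```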

We would like to give this experience to our B2C customers, but sometimes the assistant hallucinates and doesn’t call the function at the right time, or calls some other function, which breaks the flow.

Can someone help? How can we achieve this experience?

regards

Can you give a sample conversation flow for when the assistant hallucinates?

It will hallucinate once in a while no matter what you do, because the Assistants API is still in beta: it is not production-ready yet and it changes over time. You either have to accept that, use another agentic framework, or maybe even use the LLM APIs directly.

Thanks for responding.
In the journey we have a comparison screen that shows a comparison between two vehicles (let’s say X and Y). The screen is displayed via the function call “compare_vehicles”.
Now if I then ask “Which one is more eco-friendly?”, the system calls “show_vehicle_details” twice, once for X and once for Y.
Since the last call is for “Y”, it always shows the screen for vehicle “Y”, which is not correct.
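
Our handler currently applies every screen-changing tool call the moment it arrives, which is why the last one wins. A simplified sketch (ui.show_vehicle and ui.show_comparison are stand-ins for our screen code):

```python
import json

# Simplified version of our current handler: each screen-changing tool
# call is applied immediately, so with parallel calls the last one wins.
def handle_tool_calls(tool_calls, ui):
    outputs = []
    for call in tool_calls:
        args = json.loads(call.function.arguments)
        if call.function.name == "show_vehicle_details":
            ui.show_vehicle(args["vehicle_name"])  # screen flips here
            outputs.append({"tool_call_id": call.id, "output": "shown"})
        elif call.function.name == "compare_vehicles":
            ui.show_comparison(args["vehicle_a"], args["vehicle_b"])
            outputs.append({"tool_call_id": call.id, "output": "shown"})
    return outputs
```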

Thanks. I think one of your problems is that you are not passing the result of the tool call back to the API for a summary. Check this sample flow:

“show_vehicle_details” changes the screen. So what if the Audi Q3 is more eco-friendly? Since the RAV4 call comes later, it will always show the RAV4 on the screen.
I would like to control the screens according to the conversation context.

That’s the problem: it should not. The next API call (the summary) should be the one that decides whether to change the screen. Tool calls should stay in the background.

user inquiry
tool_call invoked
submit tool output to summary API
summary API output will decide which screen to show
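
Here is a minimal sketch of that flow with the OpenAI Python SDK. The key idea: your data tools return data only; a separate, hypothetical set_screen tool owns the UI, so the model calls it after it has seen the data (get_vehicle_data and ui are placeholders):

```python
import json
from openai import OpenAI

client = OpenAI()

def get_vehicle_data(name):
    # Placeholder: look up real specs from your catalogue.
    return {"name": name, "fuel_economy_mpg": 34}

def run_turn(thread_id, assistant_id, ui):
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread_id, assistant_id=assistant_id
    )
    while run.status == "requires_action":
        outputs = []
        for call in run.required_action.submit_tool_outputs.tool_calls:
            args = json.loads(call.function.arguments)
            if call.function.name == "show_vehicle_details":
                # Return data only -- do NOT touch the screen here.
                outputs.append({
                    "tool_call_id": call.id,
                    "output": json.dumps(get_vehicle_data(args["vehicle_name"])),
                })
            elif call.function.name == "set_screen":
                # Hypothetical UI-only tool: the model calls it after it
                # has seen the tool data, so it matches the summary.
                ui.show(args["screen"], args.get("vehicle_name"))
                outputs.append({"tool_call_id": call.id, "output": "ok"})
        run = client.beta.threads.runs.submit_tool_outputs_and_poll(
            thread_id=thread_id, run_id=run.id, tool_outputs=outputs
        )
    # The latest assistant message is the summary to show in the chat.
    return client.beta.threads.messages.list(thread_id=thread_id, limit=1)
```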

Thanks.
How will the summary API decide which screen to show?
It would be great if you could give an example.

Thanks Tony. What other LLM APIs are available to fulfill this scenario?

What I meant is that there is the option to use lower-level APIs (compared to the Assistants API), like OpenAI Chat Completions. Lower level = more control (but more coding).
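
With Chat Completions, you pass the tool result back into the message list yourself, and a second call produces the summary. A minimal sketch (the model name and the hard-coded tool output are placeholders):

```python
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "show_vehicle_details",
        "description": "Switch the left-side screen to a vehicle's detail view.",
        "parameters": {
            "type": "object",
            "properties": {"vehicle_name": {"type": "string"}},
            "required": ["vehicle_name"],
        },
    },
}]

messages = [
    {"role": "system", "content": "You are a vehicle sales assistant."},
    {"role": "user", "content": "Can you give me more details about the XYZ?"},
]
resp = client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools  # placeholder model name
)
msg = resp.choices[0].message
if msg.tool_calls:
    messages.append(msg)  # keep the assistant turn in the history
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps({"vehicle": args["vehicle_name"], "mpg": 34}),
        })
    # Second call: the model writes the summary with the tool data in context.
    final = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(final.choices[0].message.content)
```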

Thanks Tony,
We initially tried to use the Chat Completions API, but unfortunately when the conversation gets lengthy, OpenAI forgets the context and starts hallucinating.
We even tried prompt chaining, but it didn’t work very well.

In our sales process, we have the steps below:
1.) Greet the customer
2.) Collect customer preferences
3.) Show relevant vehicles based on those preferences
4.) Configure the vehicle by selecting a colour
5.) Book a test-drive appointment.

In this whole journey we also have to change the screens according to the context of the conversation; a sketch of the mapping we have in mind is below.
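
Every step has its own screen, and the conversation state should drive which one is shown (all names below are ours):

```python
from enum import Enum, auto

class Step(Enum):
    GREET = auto()
    COLLECT_PREFERENCES = auto()
    SHOW_VEHICLES = auto()
    CONFIGURE_VEHICLE = auto()
    BOOK_TEST_DRIVE = auto()

# Screen shown on the left for each step of the journey.
STEP_SCREEN = {
    Step.GREET: "welcome",
    Step.COLLECT_PREFERENCES: "preferences_form",
    Step.SHOW_VEHICLES: "vehicle_list",
    Step.CONFIGURE_VEHICLE: "configurator",
    Step.BOOK_TEST_DRIVE: "appointment_calendar",
}
```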

Any suggestions on how we can achieve this?

regards

when the conversation gets lengthy, OpenAI forgets the context and starts hallucinating

The Assistants API is no better here. In fact, it is worse: with Chat Completions you fully control the context, while with the Assistants API you have very little control.
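
For instance, with Chat Completions you decide exactly what goes into every request. A crude sliding-window sketch (the message budget is arbitrary; a real app might summarize the dropped turns instead of discarding them):

```python
def build_context(system_prompt, history, max_messages=20):
    # Keep the system prompt plus only the most recent turns.
    recent = history[-max_messages:]
    return [{"role": "system", "content": system_prompt}] + recent
```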

Without seeing some of the chats and the code, I cannot guess what the problem could be. The best thing is for you to identify a specific problem and ask a question about it.

Well, in our case it’s the other way around.
The Assistants API performs better than Chat Completions, but not perfectly.
I probably won’t be able to share the code here… but if there is a way to connect personally, I can share it.
Please let me know if that works.