Assistant Imagines Variables

Hello, I need help with my assistant implementation.

I want the assistant to be able to consume data from our website, where we store user information. So I coded a function that returns that specific info, and the only thing the assistant has to do is call it with the right account_id. The problem is that this requires the message to specifically contain the account_id. If the user does not specify the id in the message, the assistant consumes data from a random account instead of asking the user for the correct id.

What happens now:

User: I want to know my account balance
Assistant: getBalances_function({"account_id": 625}) # Note that 625 is not the user's account
Assistant: Your account balance is …

What I want:

User: I want to know my account balance
Assistant: I need your account_id
User: It is 65432
Assistant: getBalances_function({"account_id": 65432})
Assistant: Your account balance is …

Does anybody know how to fix this?

  1. You can solve this with a prompt inclusion:
  • "Always ensure that the balance is fetched and the function is called only after confirming the user's account_id."
  2. From a security standpoint this is still weak, as other users' balances can be accessed with random ids.

Hope this helps!


Thanks for answering.
I didn't actually copy all of the backend code; there is an authentication phase before this, so the user can only get his own account(s)' information.
About the prompt topic: I did actually specify in the assistant's instructions that it needs to ask for the account_id if it's not provided, but it still calls the function even when the user didn't specify one, and invents a number for the id.

That's some hallucination. On giving it more thought, how about:

  1. Asking for the account_id as the first dialogue turn.

  2. Writing another validation function to check that the id exists, and maybe even confirming via OTP that it belongs to said user. If this is already handled by auth, making step 1 compulsory should solve it for now.
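A minimal sketch of that validation idea in Python. The function and field names here are my own illustration (the original poster's backend code is not shown), and `get_balances` is a stub standing in for the real lookup:

```python
def get_balances(account_id: int) -> dict:
    # Stub standing in for the real backend lookup.
    return {"account_id": account_id, "balance": 100.0}

def handle_get_balances(tool_args: dict, authenticated_account_id: int) -> dict:
    """Reject tool calls whose account_id was invented by the model.

    Returning an error payload lets the assistant recover by asking
    the user, instead of reporting a random account's balance.
    """
    requested = tool_args.get("account_id")
    if requested is None:
        return {"error": "account_id missing - ask the user for it"}
    if requested != authenticated_account_id:
        return {"error": "account_id does not match the authenticated user"}
    return get_balances(requested)
```

The key point is that the check lives server-side, so even a hallucinated id never reaches the real lookup.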

This has my interest; do keep the thread posted with your eventual solution. Meanwhile, GPT-5 should be smarter with fewer hallucinations, at least that is the rumour 🙂

I'm adjusting the instructions as you say, and I'm seeing some progress in the playground. Now, most of the time it actually asks for the account before calling the function.
I'm using GPT-3.5 for budget reasons; I haven't tried the assistant with GPT-4 yet. Instead of upgrading the model, I would ask the devs if there is a way to load custom instructions for the functions. I think it would be a great addition and would also make the model cheaper to use.


The JSON schema specification for a function that you provide to the model is the "custom instructions". The main description can be quite extensive and multi-line.

A description directly on a property is also helpful; something like "This data is required, and the function cannot be used unless the user has directly provided this information in one of their user chat messages" can directly curtail function calls that lack the data needed to properly complete the call.

Once the AI has decided to invoke a function call with the first token it emits, it can't backpedal and say "oops, data missing". An idea I just came up with would be an optional final "error" boolean property in the spec: "Only set this error condition to True if the function call you just wrote has problems with the data not matching direct user input."
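Put together, such a function spec might look like the sketch below. The exact descriptions are illustrative wording, and the trailing "error" property is the experimental idea just mentioned, not an established pattern:

```python
# Hypothetical function spec: a strong main description, a restrictive
# description on the required property, and an optional trailing "error"
# boolean the model can set when the data was not user-provided.
get_balances_spec = {
    "name": "getBalances_function",
    "description": (
        "Returns the balances for one account. Never call this function "
        "with an invented account_id; if the user has not stated their "
        "account_id in a chat message, ask for it first."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "account_id": {
                "type": "integer",
                "description": (
                    "This data is required, and the function cannot be used "
                    "unless the user has directly provided this information "
                    "in one of their user chat messages."
                ),
            },
            "error": {
                "type": "boolean",
                "description": (
                    "Only set this error condition to True if the function "
                    "call you just wrote has problems with the data not "
                    "matching direct user input."
                ),
            },
        },
        "required": ["account_id"],
    },
}
```

Your backend can then treat any call arriving with `"error": true` as "ask the user again" rather than performing the lookup.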

(The internal functions of Assistants could also use a better ability to customize the instructions that OpenAI places.)

I think I've found the problem. The assistant is being used on WhatsApp, and in order to have the user's information and be able to chat with more custom replies, I added "metadata" to the message that gets sent to the assistant. So instead of just being "Hey, I want to know my account's information", it is "Hey, I want to know my account's information {'metadata': {'wa_id': 5423426675, 'name': 'Guido Bruera'}}".
It seems that this metadata is what is confusing the assistant. I've tried in the playground with both cases (with and without metadata) and it works fine when you don't add it. The problem is that I really need this information, so I was wondering if there is a way for the assistant to use/access it without it being explicitly in the message.

Interesting suggestions @guido.bruera, I'll be sure to ponder this a bit more. Custom instructions per function call would open a gateway to many possibilities, I agree.


I managed to solve this. If anyone is having the same problem, consider removing that extra information from the prompt and adding it to the assistant's additional instructions, like this:

    run = client.beta.threads.runs.create(
        thread_id=thread.id,
        assistant_id=assistant.id,
        additional_instructions=f"The user name is {name}, his phone number/wa_id is {wa_id}"
    )
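For anyone adapting this, a minimal sketch of the same pattern with the instruction-building pulled into a helper. The helper name and variable names are my own illustration, not from the thread:

```python
def build_additional_instructions(name: str, wa_id: int) -> str:
    # Keep the WhatsApp metadata out of the user message itself and
    # inject it per-run instead, so it cannot confuse the model into
    # treating wa_id as an account_id.
    return f"The user name is {name}, his phone number/wa_id is {wa_id}"

# Usage inside your message handler (assumes the standard openai client):
# run = client.beta.threads.runs.create(
#     thread_id=thread.id,
#     assistant_id=assistant.id,
#     additional_instructions=build_additional_instructions(name, wa_id),
# )
```

This keeps the user's message exactly as they typed it, while the assistant still sees the context on every run.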