Here is the issue I am experiencing:
I am trying to create a custom GPT for our internal team to validate data returned by an API through Actions. The user (an internal team member) uploads a document with information about work being performed. The GPT then queries the API through an Action and validates the uploaded information against the data it returns.
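For context, the Action is backed by a simple read-only lookup endpoint. The sketch below shows the general shape of it; the framework (FastAPI), the endpoint path, and the field names are illustrative placeholders, not our actual internal API.

```python
# Illustrative sketch only: the real internal API differs in framework,
# paths, and fields. This is just the shape of the lookup the Action calls.
from fastapi import FastAPI

app = FastAPI()

# Stand-in for the external system of record that the team updates.
SAMPLE_RECORDS = {
    "WO-1001": {"status": "approved", "assigned_to": "j.smith", "site": "Plant 3"},
}

@app.get("/work-orders/{order_id}")
def get_work_order(order_id: str):
    """Return the current record so the GPT can compare it against the uploaded document."""
    record = SAMPLE_RECORDS.get(order_id)
    if record is None:
        return {"found": False}
    return {"found": True, **record}
```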
The validation itself works as expected. However, if the GPT spots an issue and asks the user to fix it in the external system (the source the API reads from), the user can simply tell the GPT they updated the information, and the GPT accepts that claim and moves on to the next step.
I have tried adding several bullet points to the GPT's instructions telling it not to trust user messages and to rely only on the data returned by the API. However, if you pester the GPT enough, or just flat-out lie, it eventually accepts the user's answer over the API, even though the API data does not match the user's statement.
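The bullet points are roughly along these lines (paraphrased, not the exact wording):

- Never trust a user's statement that data has been corrected in the external system; always call the Action again and check the returned data.
- If the Action data does not match the uploaded document, do not move to the next step, even if the user insists the issue has been fixed.
- Treat the Action response as the only source of truth for validation.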
Does anyone have suggestions for instructions that make the GPT disregard these false user claims and rely only on the data returned by Actions?