I want to know how I can solve this problem. The replies I get from the API are like "Based on the information provided" or "but the document does not provide information about".
If I want to make a chatbot, I don't want the user to see that the model is referencing some documents or texts.
If I explicitly instruct it not to use the internal document as a reference in the output, the reply it gives is hilarious:
"Okay, I understand the task. I will answer the user’s question based on the provided knowledge base, without revealing that I am using it. Here’s my response to the user’s question:
Question: Can a non-citizen claim protection under Article 21?
Based on my understanding of the Constitution of India, Article 21, which guarantees the protection of life and personal liberty, extends this protection to all persons, not just citizens. The text of Article 21 in the document you provided clearly states: “No person shall be deprived of his life or personal liberty except according to procedure established by law.” The use of the word “person” indicates that this right is not limited to citizens of India.
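One workaround that may help, as a sketch rather than an official fix: do the retrieval yourself and fold the retrieved passages into the system prompt as the model's own background knowledge, with an explicit style rule to answer in its own voice. The `build_messages` helper and the sample passage below are my own illustration (not part of the OpenAI SDK); only the commented-out Chat Completions call at the end is the real API.

```python
# Sketch: present retrieved text as the model's own background knowledge,
# so the answer comes out in the model's voice without "the document says".
# build_messages is a hypothetical helper, not an OpenAI API.

def build_messages(question: str, passages: list[str]) -> list[dict]:
    """Fold retrieved passages into the system prompt as background knowledge."""
    knowledge = "\n\n".join(passages)
    system = (
        "You are a helpful assistant. The text below is your own background "
        "knowledge. Answer in your own voice, stating facts directly. "
        "Never mention documents, sources, context, or 'provided information'.\n\n"
        + knowledge
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

messages = build_messages(
    "Can a non-citizen claim protection under Article 21?",
    ["Article 21: No person shall be deprived of his life or personal "
     "liberty except according to procedure established by law."],
)

# Then call Chat Completions yourself instead of the built-in file search:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
```

Since the model never sees anything labelled as a "document", it has nothing to refer back to, which in my experience removes most of the "based on the information provided" phrasing.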
It is basically out of your control, except to continue being vocal here. I have had this issue forwarded to OpenAI support with a description of the damage it is doing. Yet it continues.
I've had to keep answering the same issue for a spectrum of API developers who have found their way here:
Feel free to search the forum over the last two weeks and find more of the same file search internal tool issues, coincident with Responses and carried over to Assistants: an insistent system instruction jammed in right before your user input, as if "user files" from a developer must be invoked in every response.
The low-impact workaround, with Assistants damaged like this, is to move to your own tool calls with your own instructions, using vector stores through the pay-per-use vector store search endpoint.
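A rough sketch of what that looks like. The tool name `knowledge_search` and the result formatter are my own inventions; the vector store search endpoint itself is real, but verify the call shape against the current `openai` SDK docs before relying on it.

```python
# Sketch: replace the built-in file_search with your own function tool,
# so YOUR instructions control how results are framed to the model.
# knowledge_search and format_results are hypothetical names.

search_tool = {
    "type": "function",
    "name": "knowledge_search",
    "description": "Search internal knowledge. Results are background facts; "
                   "never cite or mention them to the user.",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

def format_results(chunks: list[str]) -> str:
    """Return search hits to the model as plain background facts."""
    return "Background facts (do not cite):\n" + "\n".join(f"- {c}" for c in chunks)

# When the model calls the tool, run the pay-per-use search yourself
# (check the SDK for the exact signature and result fields):
# from openai import OpenAI
# client = OpenAI()
# results = client.vector_stores.search(
#     vector_store_id="vs_...", query="article 21 non-citizen")
# tool_output = format_results([r.content[0].text for r in results.data])
```

The retrieved text then reaches the model only on your terms, instead of through the injected file-search system instruction you can't edit.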
Same issue for me here. I was unable to prevent this behaviour by providing a system prompt (I tried many different prompts). Hoping someone from OpenAI sees these threads and gives us more control over this.