Usually, even if you use File Search in the Assistants API or Knowledge in GPTs, it returns only file names as annotations. That's not enough for people to validate the answer.
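To show what I mean, here's a minimal sketch of reading those annotations back with the OpenAI Python SDK (the thread ID is just a placeholder); all you can recover from a `file_citation` annotation is a `file_id`, which resolves to a file name, not the cited passage:

```python
from openai import OpenAI

client = OpenAI()

# Fetch the assistant's latest reply from an existing thread
# ("thread_abc123" is a placeholder for your own thread).
messages = client.beta.threads.messages.list(thread_id="thread_abc123", limit=1)

for block in messages.data[0].content:
    if block.type != "text":
        continue
    for annotation in block.text.annotations:
        if annotation.type == "file_citation":
            # The annotation carries a file_id, not the cited passage itself,
            # so the best you can get here is the file name.
            cited_file = client.files.retrieve(annotation.file_citation.file_id)
            print(annotation.text, "->", cited_file.filename)
```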
To get the actual reference content, you can ask OpenAI to include it in the answer. However, I found that adding this to the instructions alone is not enough; you should put it at the end of each query. For example, you can append the following text to your query to include the actual reference content:
How long is the probation period?
Include references after the answer in this format: """
# References
**[Section Title Here](https://section-link-here.com/path)**
> Exact content body of the references
"""
This way, you get a more reliable answer. I'd love to hear if you have more tips to make annotations work better!
According to the needle-in-a-haystack papers, this approach might break down if you're dealing with significantly longer outputs.
But also according to them, putting your schema instructions at the very top of your system instructions should work just as well. I'm not convinced of that, but it's something to keep in mind.
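For comparison, that alternative placement would look roughly like this when creating the assistant (the name, model choice, and task description are just placeholders, not part of the original suggestion):

```python
from openai import OpenAI

client = OpenAI()

# Alternative placement: the reference format goes at the very top of the
# assistant's system instructions instead of at the end of each query.
assistant = client.beta.assistants.create(
    name="HR policy assistant",  # placeholder name
    model="gpt-4o",
    tools=[{"type": "file_search"}],
    instructions=(
        "Include references after the answer in this format:\n"
        "# References\n"
        "**[Section Title Here](https://section-link-here.com/path)**\n"
        "> Exact content body of the references\n\n"
        "You answer questions about the employee handbook."  # placeholder task
    ),
)
```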
Thank you for your kind reply. I agree that the issue may be resolved in the near future. However, from an industry perspective, we need something that works right now, and I hope this approach can remove some of the current obstacles.
Did you make any progress in developing a way to get complete references? If so, how did you go about it? We are doing something similar, and it would be interesting to compare approaches.