If I want an index with continuous writes and deletes, is that possible under the current API? From the docs, the only option is to create a new file with the updated state via the files endpoint and delete the old one.
I think it would be very convenient to have update methods on the file endpoint to address this use case. Would it be possible for OpenAI to implement this?
That’s great feedback. It’s on our backlog, but not one of the top priorities. I agree it’d be very useful.
What’s your use case?
For example, searching over chat logs within a session, or an index over data the user is continuously updating.
Thanks, these are valuable use cases.
I’d like to add my support and encouragement for this feature. In our use case we have a large FAQ that’s updated constantly through a wiki interface, and it feels very wasteful to re-process the entire knowledge base just to add or update a couple of documents. As it stands, we have to choose between using far too many tokens or working with stale data.
@robertskmiles Without a timeline it’s unclear when the feature will be implemented. If you want a (potentially better) solution in the meantime, it may be worth pairing GPT-3 with your own retrieval model, which gives you full control over the search API. DM me and we can go over some of the implementation details.
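To make the "own retrieval model" route concrete, here's a minimal sketch of an in-memory index that supports continuous upserts and deletes; the top hits could then be passed to GPT-3 as context for completion. Everything here (the `TinyIndex` class, the whitespace tokenization, the raw-count cosine scoring) is an illustrative assumption, not part of any OpenAI API — a real deployment would likely use a proper search library or embedding-based retrieval.

```python
import math
from collections import Counter

class TinyIndex:
    """Illustrative in-memory retrieval index supporting continuous
    adds, updates, and deletes. Not any official API; a sketch only."""

    def __init__(self):
        self.docs = {}  # doc_id -> term counts

    def upsert(self, doc_id, text):
        # Adding and updating are the same operation: replace the entry.
        self.docs[doc_id] = Counter(text.lower().split())

    def delete(self, doc_id):
        self.docs.pop(doc_id, None)

    def search(self, query, k=3):
        q = Counter(query.lower().split())

        def score(counts):
            # Cosine similarity between raw term-count vectors.
            dot = sum(q[t] * counts[t] for t in q)
            norm = (math.sqrt(sum(v * v for v in q.values()))
                    * math.sqrt(sum(v * v for v in counts.values())))
            return dot / norm if norm else 0.0

        ranked = sorted(self.docs.items(),
                        key=lambda kv: score(kv[1]), reverse=True)
        return [doc_id for doc_id, _ in ranked[:k]]

index = TinyIndex()
index.upsert("faq-1", "how do I reset my password")
index.upsert("faq-2", "billing and invoices")
index.upsert("faq-1", "how to reset or change a forgotten password")  # update in place
index.delete("faq-2")  # deletion is immediate, no re-upload of the corpus
print(index.search("reset password"))
```

Updates and deletes here touch only the affected document, which is exactly what the Files API currently can't do without re-uploading the whole file.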