Thanks for making assistants. However, before I go building 50 of them, I would like to know from an OpenAI employee if at all possible:
1. How long can you commit to supporting Assistants?
2. What is the privacy policy on data, interactions, and files uploaded to Assistants?
The companies I represent are understandably leery about building out assistants if there is any reasonable chance they will not be supported and just dropped at some future time, or that their data is not secure.
This is an immature field (technically, not emotionally… uh, maybe both?), and basing products around it with livelihoods on the line is inadvisable right now. It's not just OpenAI; we are in the learning phase with a completely new way to make software.
Things will change a lot over the coming months, as they have been doing. This isn't a matter of coming to terms with an advancement in software; these are the first steps into a new paradigm. This is unknown territory; tread with care.
Inconsistency is the big one, and I feel it will be with us for a while. Thankfully it's getting much better all the time.
I have an API assistant that can do some pretty wild stuff, but about 2/3 of the time the response is: "I'm sorry, Dave, I'm afraid I can't help you with that." The rest of the time it performs miracles.
You (I) can't release products with those kinds of things happening. As you say, if I spent days tweaking it for more reliability, there might be a subtle change the very next day on the OpenAI end that makes my approach no longer work.
Josh
I empathize. Privacy is tricky, and that is for OpenAI to advise on.
As for building: let's build. It's true that everything built will be RACEd*, but that doesn't mean you shouldn't build; it means you should engineer in a modular way that makes updates to your platform simpler, not quite like, but conceptually as if, you could swap out a component in your PC.
*RACE 10/10/10: Real-time Antiquation of Current Ecosystems, the 10/10/10 rule. Whatever you build will be obsolete in 10 months, will take a tenth of the time for someone else to build, with a tenth of the resources. If you take this "rule" into consideration in your design, you will be more likely to adapt instead of "aw shucks"-ing it every time something new comes out.
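One way to read the "engineer in a modular way" advice is to keep a thin seam between your application and whichever model vendor you use today. A minimal Python sketch under that assumption; every class and function name here is made up for illustration, not any real SDK:

```python
from abc import ABC, abstractmethod

class CompletionBackend(ABC):
    """Hypothetical seam between app code and the model provider of the month."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HomegrownBackend(CompletionBackend):
    """Stand-in for whatever you hand-rolled yourself."""
    def complete(self, prompt: str) -> str:
        return f"[homegrown] {prompt}"

class VendorBackend(CompletionBackend):
    """Stand-in for the vendor feature that replaces it next month."""
    def complete(self, prompt: str) -> str:
        return f"[vendor] {prompt}"

def answer(backend: CompletionBackend, prompt: str) -> str:
    # Application code only touches the seam, so swapping the backend
    # when the ecosystem moves is a one-line change at the call site.
    return backend.complete(prompt)
```

When the RACE rule fires, you replace one subclass instead of rewriting the product.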
The great thing is that most of what I'm working on is just the logical next step; we all seem to be working on the same things. My new approach is that once I have a proof of concept up and running and understand the issue completely, I just wait a week or two (honestly…) and someone with better skills will have built a superior version of exactly the same thing. The beauty is that by having implemented it yourself, you understand things deeply, so using whatever tool floats to the top is just like using your own tool, but nicer.
Function calling, for example. I managed to get system messages working that convinced GPT-4 to use functions on my end and pass arguments. It would return a keyword to trigger web searching, document analysis, file operations, code cleanup, etc.
About 2 weeks after figuring out a nice approach and working through the issues, when it was working nicely, the function calling updates were released. They worked identically in principle, and the implementation was great. I just ripped out my stuff and pointed it at OpenAI's function calling. Not only was it easy to do; quality jumped dramatically.
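The homegrown keyword approach described above can be sketched roughly like this. Everything here (the `CALL` prefix, the tool names, the JSON argument format) is a hypothetical reconstruction for illustration, not the actual prompts or dispatch code used:

```python
import json

# Hypothetical dispatch table. The system prompt would instruct the model to
# reply with a line like:  CALL web_search {"query": "..."}  when it wants a tool.
TOOLS = {
    "web_search": lambda args: f"searched for {args['query']}",
    "read_file": lambda args: f"read {args['path']}",
}

def dispatch(model_reply: str):
    """Parse a keyword-style tool request out of the model's reply, if any."""
    if not model_reply.startswith("CALL "):
        return None  # ordinary answer, no tool requested
    _, name, raw_args = model_reply.split(" ", 2)
    if name not in TOOLS:
        return None  # unknown keyword; treat as plain text
    return TOOLS[name](json.loads(raw_args))
```

The official function calling API does the same job in principle, but the model emits structured tool calls instead of fragile keyword strings, which is where the quality jump came from.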
It's been pretty much like that ever since. I was hankering for a unified embedding space for my little models, and 2 days later a great one that showed some insight into how it all works was released. The Assistants runs are very similar to something I have been doing for ages, but so nice!
Everything in the playground/API is supposed to be private/my IP.
(There is also the fact that AI, as currently developed, is only really good for X, and X is the set of problems we all want it to solve, so naturally there will be some overlap there, granted.)
I would very much like an OpenAI employee to verify that the Playground is not trained on and not looked at by OpenAI, i.e. that it is private, and that we own the IP of the assistants we create there.
Thx genuinely for the suggestion, but this is my feeling: I don't want to spend 10 hours reading legalese I could not understand anyway, which they can change whenever they like.
I am simply asking an OpenAI employee to confirm here in writing, on the OpenAI forum, that the Playground is not trained on and not looked at by OpenAI, i.e. that it is private, and that we own the IP of the assistants we create there. That would support their customers, of which I am one and have been since the private demo.
Is it really too much to ask of a developer to read the accompanying documentation?
I don't know. Maybe? But why in this case? This is all really basic and standard stuff. If there are any specific questions that should be discussed, I am sure there will be lots of interest.
I am not a lawyer, and I don't want to spend 10 hours reading legalese I could not understand anyway.
Also, their legal statements are in no way binding on them; they can change them anytime they like.
Again, I am simply asking an OpenAI employee to confirm that here, in writing, on the OpenAI forum.
That touchpoint from a real employee will put me at ease and, however vain, is at least some indication that 1) Assistants aren't going anywhere anytime soon and 2) our data is safe/private.
That yes, this is their corporate intention and vision, at least for the foreseeable future.
Also, so far, their lack of confirmation on this is unfortunately telling.
Perhaps it is not safe to build our apps on Assistants?
They won't respond, but their policies make it clear:
As of March 1, 2023, data sent to the OpenAI API will not be used to train or improve OpenAI models (unless you explicitly opt in). One advantage to opting in is that the models may get better at your use case over time.
The Playground is a GUI for the API and is included. They do retain your data for ~30 days, but only for abuse prevention. You can also request that your data be deleted immediately.
Before I leave, I'll drop this here from the FAQ:
Do you store the data that is passed into the API?
As of March 1st, 2023, we retain your API data for 30 days but no longer use your data sent via the API to improve our models. Learn more in our data usage policy. Some endpoints offer zero retention.
Ronald, I do genuinely thank you for your research and comprehensive response.
But, again, those policies can be changed at any time, and that is not really what I am looking for.
And given the employee changes we see at OpenAI at the top level… it is not too strange for a customer to ask for a further commitment to their vision here.
I would really like an OpenAI employee to confirm here that:
1) the Assistants feature is not going to be deprecated suddenly, like certain models already have been, wasting time and work;
2) the Assistants feature, like the API and Playground policies you quoted (which I already knew, but I do thank you), is still going to be private, and my client's IP is protected and not going to be used or looked at in any way.
OpenAI's not answering here is being taken as this answer: policies aside, in actuality, no, the Assistants are not promised to be around for any length of time (so develop on them at your own considerable risk), and no, your data is not safe. Or at least, no OpenAI employee can come on and reassure us that this corporate vision is still "the vision" and will continue, much as we see top-level vision and employee changes.
I think that data retention stuff exists because of enterprise customers. For a lot of reasons I think you can count on it being true.
The API just changed dramatically, so be ready for stuff like that, but the concept of runs and steps is probably something that will be supported for a long time. Total guess, but it seems to be widely accepted as the right way to do things given the current state of LLMs.
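For anyone unfamiliar with the runs-and-steps pattern, it reduces to polling a run until it reaches a terminal status. A rough sketch with a stand-in client, not the real Assistants SDK; every name here (`get_run`, the status strings, `FakeClient`) is a hypothetical stand-in:

```python
import time

# Statuses we treat as final; real APIs define their own terminal set.
TERMINAL = {"completed", "failed", "cancelled", "expired"}

def wait_for_run(client, thread_id, run_id, interval=0.01, timeout=5.0):
    """Poll a run until it reaches a terminal status, then return it."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = client.get_run(thread_id, run_id)
        if run["status"] in TERMINAL:
            return run
        time.sleep(interval)
    raise TimeoutError(f"run {run_id} did not finish within {timeout}s")

class FakeClient:
    """Stand-in client that flips a run to 'completed' after a few polls,
    purely so the loop above can be exercised without any network."""
    def __init__(self):
        self.calls = 0

    def get_run(self, thread_id, run_id):
        self.calls += 1
        status = "completed" if self.calls >= 3 else "in_progress"
        return {"id": run_id, "status": status}
```

The shape of this loop (create work, poll status, act on the terminal state) is what seems likely to outlive any particular API revision.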
You won't hear from an OpenAI employee. Not talking trash, just how it is.