Thanks for making assistants. However, before I go building 50 of them, I would like to know from an OpenAI employee, if at all possible:
1. how long can you commit to supporting assistants?
The companies I represent are understandably leery about building out assistants if there is any reasonable chance they will not be supported and will just be dropped at some future time, or that their data is not secure.
OpenAI, kindly advise!
This is an immature field (technically, not emotionally… uh, maybe both?), and basing products on it with livelihoods on the line is inadvisable right now. It’s not just OpenAI; we are all in the learning phase with a completely new way to make software.
Things will change a lot over the coming months, as they have been doing. This isn’t just coming to terms with an advancement in software; it’s the first steps of a new paradigm. This is unknown territory, so tread with care.
I agree and appreciate your sentiment in general.
As I tell my friends, never buy the first version of anything… Especially if Microsoft put it out.
But I’ve been burned twice by tailoring my prompts to the current models, and they do not behave the same or better after upgrades.
And I cannot stop building.
Thus, just trying to get some confirmation, however vain.
Also my clients need to know about the privacy thing.
The inconsistency thing is the big one, it will be with us for a while I feel. Thankfully it’s getting much better all the time.
I have an api assistant that can do some pretty wild stuff, but about 2/3 of the time the response is: “I’m sorry, Dave, I’m afraid I can’t help you with that.” - the rest of the time it performs miracles.
You (I) can’t release products with those kinds of things happening. As you say, if I spent days tweaking it for more reliability, there might be a subtle change on the OpenAI end the very next day that makes my approach no longer work.
I empathize. The privacy question is tricky, and that is for OpenAI to advise on.
As for building: let’s build. It’s true that everything built will be RACEd*, but that doesn’t mean you shouldn’t build; it means you engineer in a modular way to make updates to your platform simpler, not quite like, but conceptually as if, you could swap out a component in your PC.
*RACE 10/10/10: Real-time Antiquation of Current Ecosystems, the 10/10/10 rule. Whatever you build will be obsolete in 10 months, will take a tenth of the time to be built by someone else, and will be built with a tenth of the resources. If you take this ‘rule’ into consideration in your design, then you will be more likely to adapt instead of ‘aw shucks’-ing it every time something new comes out.
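In practice, “engineer in a modular way” can be as simple as coding your app against a thin interface instead of a vendor SDK. A minimal sketch of the idea in Python; the `ChatProvider` interface and `EchoProvider` stand-in are my own illustration, not anything from a real SDK:

```python
from abc import ABC, abstractmethod

class ChatProvider(ABC):
    """Hypothetical minimal interface your app codes against, so
    swapping model providers touches only one adapter class."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class EchoProvider(ChatProvider):
    """Stand-in implementation; a real adapter would wrap an SDK call."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def answer(provider: ChatProvider, question: str) -> str:
    # App logic depends only on the interface, never on a vendor SDK.
    return provider.complete(question)

print(answer(EchoProvider(), "hello"))  # → echo: hello
```

When the ecosystem RACEs past whatever you wired in today, you write one new adapter and the rest of the platform never notices.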
“I’m sorry, Dave, I’m afraid I can’t help you with that.” - did you build HAL 9000?
Absolutely. This stuff feels like 1/100/100.
Great thing is that most of what I’m working on is just the logical next step; we all seem to be working on the same things. My new approach is that once I have a proof of concept up and running and understand the issue completely, I just wait a week or two (honestly…) and someone with better skills will have built a superior version of exactly the same thing. The beauty is that by having implemented it yourself, you understand things deeply, so using whatever tool floats to the top is just like using your own tool, only nicer.
Function calling, for example. I managed to get system messages working that convinced GPT-4 to use functions on my end and pass arguments. It would return a keyword to trigger web searching, document analysis, file operations, code cleanup, etc.
About two weeks after I figured out a nice approach and worked through the issues, just as it was working nicely, the function calling updates were released. They worked identically in principle, and the implementation was great. I just ripped out my stuff and pointed it at OpenAI’s function calling. Not only was it easy to do, the quality jumped dramatically.
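For anyone still on the keyword-trigger approach, the shape of the native version looks roughly like this. The schema follows the published function-calling format as I understand it (check the current API reference for the exact fields); the `web_search` tool, its stub body, and the hand-built tool call are my own illustration rather than a live API response:

```python
import json

# Tool schema in the style of OpenAI's native function calling
# (illustrative; verify against the current API reference).
tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def web_search(query: str) -> str:
    # Stub for illustration; a real version would hit a search API.
    return f"results for: {query}"

LOCAL_FUNCTIONS = {"web_search": web_search}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to local code, as the old
    keyword-trigger approach did by hand."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return LOCAL_FUNCTIONS[name](**args)

# Simulated tool call, shaped like what the model returns.
call = {"function": {"name": "web_search",
                     "arguments": '{"query": "latest updates"}'}}
print(dispatch(call))  # → results for: latest updates
```

The migration really is mostly “rip out the keyword parser, keep the dispatch table”: the local functions stay the same, and the model now emits structured arguments instead of magic keywords.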
It’s been pretty much like that ever since. I was hankering for a unified embedding space for my little models, and two days later a great one that showed some insight into how that all works was released. The assistants runs are very similar to something I have been doing for ages, but so much nicer!
Everything in the playground/API is supposed to be private/my IP.
(There is also the fact that AI, as currently developed, is only really good for X, and X is exactly the set of problems we all want it to solve, so naturally there will be some overlap, granted.)
I would very much like some OpenAI employee to verify that playground is not trained on, not looked at by OpenAI / ie it is private, and we own the IP of our assistants we create there.
Gold in this statement. Thank you for this…
A great place to start looking for answers is the policy section of the OpenAI platform.
A must read for responsible developers and product managers.
Thanks genuinely for the suggestion, but this is my feeling: I don’t want to spend 10 hours reading legalese I could not understand anyway, and which they can change whenever they like.
I am simply asking an OpenAI employee to confirm here in writing, on the OpenAI forum, that the playground is not trained on or looked at by OpenAI (i.e., it is private) and that we own the IP of the assistants we create there. That would support their customers, of which I am one and have been since the private demo.
Not a big ask
Why not just upload the docs one by one and chat with them? Or create a summary with regards to the specific questions?
It’s really not wise to get legal advice from a hallucinating chatbot.
Is it really too much to ask of a developer to read the accompanying documentation?
I don’t know. Maybe? But why in this case? This is all really basic and standard stuff. If there are any specific questions that should be discussed then I am sure there will be lots of interest.
I should add that this is a good question:
how long can you commit to supporting assistants?
But it’s separate from those already answered in the policy section.
As i said:
I am not a lawyer. I don’t want to spend 10 hours reading legalese I could not understand anyway, and which they can change whenever they like.
Also, their legal statements are in no way binding on them; they can change them at any time.
I am simply asking an OpenAI employee to confirm here in writing, on the OpenAI forum, that the playground is not trained on or looked at by OpenAI (i.e., it is private) and that we own the IP of the assistants we create there. That would support their customers, of which I am one and have been since the private demo.
That touchpoint from a real employee would put me at ease and, however vain, would at least be some indication that 1) assistants aren’t going anywhere anytime soon and 2) our data is safe/private.
That yes, this is their corporate vision, at least for the foreseeable future.
Also, so far, their lack of confirmation on this is also unfortunately telling.
Perhaps it is not safe to build our apps on Assistants?
OpenAI, please set me straight.
They won’t respond, but their policies make it clear:
As of March 1, 2023, data sent to the OpenAI API will not be used to train or improve OpenAI models (unless you explicitly opt in). One advantage to opting in is that the models may get better at your use case over time.
The playground is a GUI for the API and is covered by the same policy. They do retain your data for ~30 days, but only for abuse prevention. You can also request that your data be deleted immediately.
They maintain a “trust portal” for deeper insights here:
Before I leave, I’ll drop this here from the FAQ
Do you store the data that is passed into the API?
As of March 1st, 2023, we retain your API data for 30 days but no longer use your data sent via the API to improve our models. Learn more in our data usage policy. Some endpoints offer zero retention.
Ronald, I do genuinely thank you for your research and comprehensive response.
But, again, those policies can be changed at any time, and this is not really what I am looking for.
And given the employee changes we see at OpenAI at the top level… it is not too strange for a customer to ask for a further commitment to their vision here.
I would really like an OpenAI employee to confirm here
1) that the assistants feature is not going to be deprecated suddenly, like certain models already have been, wasting time and work.
2) that the assistants feature, like the API and Playground you quoted (which I already knew, but thank you), is still going to be private, and my clients’ IP is protected and not going to be used or looked at in any way.
OpenAI’s silence here is being taken as this answer: policies aside, in actuality, no, the assistants are not promised to be around for any length of time (so develop on them at your own considerable risk), and no, your data is not safe. Or at least no OpenAI employee can come on and reassure us that this corporate vision is still “the vision” and will continue, especially as we see top-level vision and employee changes.
I think that data retention stuff exists because of enterprise customers. For a lot of reasons I think you can count on it being true.
The API just changed dramatically, so be ready for things like that, but the concept of runs and steps is probably something that will be supported for a long time. A total guess, but it seems to be widely accepted as the right way to do things given the current state of LLMs.
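The runs-and-steps pattern itself is just “kick off a run, poll until it hits a terminal status,” which survives SDK churn. A conceptual sketch; `FakeRuns` is a stand-in I invented so the loop is self-contained, not a real client, and the status names mirror the ones the Assistants API documents:

```python
import time

class FakeRuns:
    """Stand-in for a runs endpoint: each retrieve() advances the
    status, mimicking queued -> in_progress -> completed."""

    def __init__(self):
        self._statuses = iter(["queued", "in_progress", "completed"])

    def retrieve(self, run_id: str) -> dict:
        return {"id": run_id, "status": next(self._statuses)}

def wait_for_run(runs, run_id: str, poll_seconds: float = 0.0) -> dict:
    # The core pattern: poll until the run reaches a terminal status.
    terminal = {"completed", "failed", "cancelled", "expired"}
    while True:
        run = runs.retrieve(run_id)
        if run["status"] in terminal:
            return run
        time.sleep(poll_seconds)

print(wait_for_run(FakeRuns(), "run_123")["status"])  # → completed
```

Whatever the endpoints get renamed to next, a polling loop like this is the part of your code that should survive, which is exactly the modularity argument from earlier in the thread.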
You won’t hear from an OpenAI employee. Not talking trash, just how it is.