Is there a way for others to access my assistant?

I am writing a scientific paper, and I would like to make the assistant I have created available to others, including the file I have attached to it. I know I can access it using my API key.

If I give the assistant ID to another person, would they also be able to use my assistant as I have set it up? I want to include this ID in the paper so that others can replicate my findings.

Assistants would need to be hosted somewhere and run as an application; an assistant ID does not link to a persistent object that people can access outside of your organisation.

@Foxalabs So do I understand correctly that we would need to build a whole infrastructure and front end for others to use the assistant? Do you have any recommendations on what to use?
In any case, I will give it a try and create my own.

That’s the main use case for an API. Usually the rest of the architecture already exists, and you are just plugging AI in to supplement and enhance it.

If you want something simpler, look at GPTs?

There are several low-code/no-code platforms that you can explore. I think the idea is to separate the back end from the front end and create experiences for your users.

Even though you have a scientific-niche solution, you can build an experience for your users.

Now, if you want a straightforward solution without complete control over the pieces you put together, then a GPT is the way, at least for now.

Thanks for the reply. I am looking for something simple; for this paper I am not interested in creating an interface. When I access my assistant it is simple: I just need the ID. I was wondering if I could also just give the ID in my paper, but it seems not. GPTs are not available to me yet.

If I create an assistant, could other people use it with their own OpenAI API key or ChatGPT+ account?

Thank you, @CinematicDev. I’m developing a “free” app intended for educational use, and I just need to pass the GPT usage expenses on to the end users. Implementing a user management and billing system solely for this purpose would be not only an overly burdensome task for me but also inconvenient for the users.

I hope the Assistants API can support the “bring-your-own-api-key” model, similar to the current Chat/Completion APIs. Alternatively, perhaps an Assistant can be made available through the GPT store for paying ChatGPT users?

If you want to distribute it as an EXECUTABLE for Windows, you could do so using the next version of the Smart-AI-Package Robot.

The trick is that the user can take your executable and use their own API key.
The user simply writes their API key into a text file and places that file next to the executable.
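That key-file pattern can be sketched in a few lines. This is a minimal sketch, assuming the file is named `apikey.txt` and sits in the same directory as the program; the function name and filename are my own choices, not anything from the Smart-AI-Package Robot:

```python
from pathlib import Path

def load_api_key(directory, filename="apikey.txt"):
    """Read the user's own API key from a plain-text file placed
    next to the executable. The filename is an assumption; any
    plain-text file the user can edit works the same way."""
    key = (Path(directory) / filename).read_text(encoding="utf-8").strip()
    if not key:
        raise ValueError(f"{filename} is empty; paste your API key into it")
    return key
```

The returned key would then be passed to whatever client the program uses, so the usage costs land on the end user's own account.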
Also, my experience with the new Assistants is that they are not really reliable for technical use, while they seem to work fine for chat-like conversations.
Trying to teach them programming rules failed; even with just 60 KB of “rules” the outcome was not satisfying.

You can instead use the Chat Completions API and build several separate agents, each with its own history (kept in an array) that you feed to the Chat Completions API on every call. Sort keywords before sending them to the AI using the SPR commands. In my tests this possibly works better and more reliably than current GPTs.
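The “own history” agents described above can be sketched roughly as follows. The class and parameter names are mine, not from the post; the actual network call is injected as a callable so the sketch stays self-contained — in real use it would wrap `client.chat.completions.create(...)` from the official `openai` package:

```python
class Agent:
    """A specialist agent that keeps its own conversation history
    in an array and replays the whole array on every request."""

    def __init__(self, system_prompt, complete):
        # `complete` is any callable taking the message list and
        # returning the reply text; in real use it would wrap a
        # Chat Completions call made with the user's own API key.
        self.history = [{"role": "system", "content": system_prompt}]
        self.complete = complete

    def ask(self, user_text):
        self.history.append({"role": "user", "content": user_text})
        reply = self.complete(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

Because each agent owns its history array, you can run several of them side by side without their conversations bleeding into one another.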

@theo.gottwald Currently, an Assistant can only be accessed via an API key from the same organization, so a user’s own API key won’t work no matter the distribution mechanism.

I agree that the Assistants API is still in its infancy, but I like the technical direction it represents, namely a complete LLM runtime hosted by OpenAI.

Your findings sound interesting.
Suggestion: make a publication, even just a preprint.
I am publishing one now, using my findings on the latest OpenAI APIs.

The whole GPTs thing goes a little in the direction of fine-tuning models, but they tried to achieve it without your having to really fine-tune anything. I assume they thought the large 128k-token context window might be enough for that.

As I said, in my tests this works very well as long as you are doing chat-like conversation, but it seems unreliable when you need precise rules to be followed.

For example, I have been modeling the script language I use for the Smart Package Robot, and the results were not satisfying.

My idea goes in a different direction. If you know the chat model, you know you can transfer an array of dialogues to it.
Within such a dialogue you can let it learn defined, not-too-complex topics, and you can also shape the output rather easily by defining the answers you send in the chat array.
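That dialogue-array idea amounts to few-shot prompting, and it can be sketched as a small message builder. The function name and argument shapes are my assumptions; each worked example becomes a user/assistant turn pair placed ahead of the real question:

```python
def few_shot_messages(system_prompt, examples, question):
    """Build a chat array that 'teaches' a defined topic by example:
    every (input, expected_output) pair becomes one user/assistant
    turn, so the model sees the desired output format before it
    answers the real question at the end."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": question})
    return messages
```

The resulting array is what you would pass as `messages` to a Chat Completions call.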

That way, you can make not just one assistant: you can take some expert knowledge, break it into defined pieces, and make an expert for each of these pieces, each with the full 128k context window.

Finally, there must be one instance that coordinates all the results and puts them together. That is my idea for solving problems which, in my opinion, currently cannot be done very well with GPTs, because they seem a bit weak at delivering rule-based, defined results.
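A minimal sketch of that coordinating instance, with every expert and the final combiner injected as plain callables (all names here are mine, not from the post; in real use each expert would be its own Chat Completions conversation, and `combine` would be one more call that merges the partial answers):

```python
def coordinate(experts, question, combine):
    """Ask every expert the same question, then hand all of the
    collected partial answers to one coordinating instance that
    merges them into a single result."""
    partials = {name: ask(question) for name, ask in experts.items()}
    briefing = "\n".join(f"{name}: {answer}" for name, answer in partials.items())
    return combine(briefing)
```

The design choice is that each expert stays within its own 128k-token window, and only the short partial answers compete for space in the coordinator's prompt.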

I have not yet tested this, so I cannot say more; maybe we will make a video about it on YouTube when my tests are done.

#GPT fine-tuning #chatmodel #contextwindow

interkanect.com lets you create your own assistant and attach files when you sign up for an account. You can then give your interkanect page URL to anyone you want to have access.