Sharing my bot with my organization

I’ve created a custom GPT with the purpose of using it as a knowledge base for employees.

I’ve run into some roadblocks, and I’m hoping that this community can help.

  1. It appears that these custom GPTs are only available to those who have the paid version of ChatGPT. Is there any way to make this work without every employee having to pay for GPT-4?

  2. Let’s say I simply use my current login on all of the organization’s computers so employees can access the bot. One problem I see is that the chat history would be visible to everyone, and I wouldn’t want employees to see what other employees were asking. Is there a way to keep chat histories from being saved?

  3. Although I’ve instructed my bot to restrict itself to the data I’ve provided, I’ve found a few ways to break this rule. For example, if I simply tell it to make an exception and answer a question it previously refused, it ignores its instructions. How do I train the bot further?

Welcome to the community!

Yeah, as you note, it’s not the best idea to let everyone use your login; it’s probably a good way to get banned. Additionally, your account is limited to 40 messages per 3 hours, so with multiple users I imagine the thing will be unusable.

Probably the best way to make a custom GPT available to non-ChatGPT Plus subscribers is to use the API platform to create an assistant instead: https://platform.openai.com/docs/assistants/overview?context=with-streaming - assistants were created for exactly this scenario. It would require you to set up your own access controls and user interface, however. You can get started playing with assistants here: https://platform.openai.com/playground?mode=assistant
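If you go that route, here’s a minimal sketch of what the setup might look like with the Python SDK, assuming the beta Assistants API with the retrieval tool. The file name, instructions, and model are placeholders you’d swap for your own:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a knowledge-base document for the assistant to search
# ("employee_handbook.pdf" is a placeholder for your own files)
kb_file = client.files.create(
    file=open("employee_handbook.pdf", "rb"),
    purpose="assistants",
)

# Create the assistant with restrictive instructions and the retrieval tool
assistant = client.beta.assistants.create(
    name="Company Knowledge Base",
    instructions=(
        "Answer only from the attached documents. "
        "If the answer is not in them, say you don't know."
    ),
    model="gpt-4-turbo-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[kb_file.id],
)

# Each employee conversation gets its own thread, so histories stay separate
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is our PTO policy?",
)
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
```

From there you’d poll the run until it completes, read the assistant’s reply from the thread, and put your own login layer in front of it so each employee only ever sees their own threads.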

While there are ways to fortify your bot, it’s always going to be an arms race. There are several threads about safeguarding against instruction set leaks, such as here: Basic safeguard against instruction set leaks - the effort involved in restricting prompts is pretty much the same. If you go with assistants, you can use the moderations endpoint to catch egregious prompts (https://platform.openai.com/docs/guides/moderation), and you could use embeddings to catch off-topic prompts. There are a lot of options, but none of them are easy or foolproof.
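As a rough sketch of both ideas, you could screen each prompt before it ever reaches the assistant: first reject anything the moderations endpoint flags, then reject anything whose embedding looks unrelated to your knowledge-base topic. The topic description and similarity threshold below are arbitrary placeholders you’d tune against real prompts, and the embedding model is just one option:

```python
from openai import OpenAI

client = OpenAI()

TOPIC = "Questions about company policies, benefits, and internal procedures."
THRESHOLD = 0.3  # arbitrary cutoff; tune against real employee prompts


def embed(text: str) -> list[float]:
    """Return the embedding vector for a piece of text."""
    return client.embeddings.create(
        model="text-embedding-3-small",
        input=text,
    ).data[0].embedding


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)


TOPIC_EMBEDDING = embed(TOPIC)


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be passed on to the assistant."""
    # 1. Reject anything the moderation endpoint flags
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        return False
    # 2. Reject prompts that look unrelated to the knowledge-base topic
    return cosine(embed(prompt), TOPIC_EMBEDDING) >= THRESHOLD
```

Anything that fails the screen never reaches the assistant, which filters casual misuse, though as noted it won’t stop a determined user.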