Enterprise review for custom GPTs

I’m part of a team currently deliberating the introduction of custom GPTs in our enterprise environment. We’ve been cautious and have not allowed third-party GPTs so far, but we’re actively exploring this possibility. I’m seeking the community’s insights on a few specific concerns:

  1. Initial Approval Process: How do you manage the approval of custom GPTs in your organization? Are there particular criteria you prioritize? I’m currently reviewing each one by hand and assessing its privacy policy, if one is provided.

  2. Data Security Concerns: Our major worry revolves around GPTs that might use external services or APIs, raising the risk of inadvertently exposing sensitive data. Do you have a good process in place to mitigate such risks?

  3. Automated Re-Flagging for Re-Approval: The critical query we have is about updates to approved GPTs. Is there an automated mechanism or process in place in your organization or the enterprise solution that re-flags a GPT for review if it undergoes significant changes, like adding new data connections or APIs? How do you ensure that these updates are compliant with your security and privacy standards?

I didn’t see any other threads that touched on the matter, so I wanted to get the community’s take on it. Looking forward to the discussion!

1 Like

re: data security
I would not use external APIs unless the 3rd party on the receiving end of that call is fully vetted.

4 Likes
  1. That’s a question you’d need to put to your higher-ups, to find out what they themselves are prioritizing.

  2. Custom GPTs are actively encouraged to use external services and APIs. Mitigating those risks would mean either building GPTs in-house or waiting for this “market” to mature.

  3. You vastly overestimate how much measurement or standardization exists for custom GPTs in this regard. There is no security standard in place, because GPTs themselves are inherently vulnerable. There is no communication channel between user and builder for when or whether a custom GPT changes in any way. There is no foundation for any such process to occur.

Custom GPTs are not what I would consider “enterprise-grade” yet, and likely won’t be for some time.

That’s the challenge. They’re coming to me for guidance on how we should go about approving and reviewing a given GPT’s third-party actions, and without the proper tooling from the “public marketplace,” it’s tough to do.

I believe the maturing of the market is ultimately what is needed, but I wanted to post the question to see if anyone had some clever workarounds or processes they’re using.

If you have any suggestions or care to share any insight on how you’re ensuring API endpoints are “fully vetted” in your own environments, I’d appreciate it!

Well, who developed those web services that you are setting up as actions?
Did you? Then you’re good.
Are you using someone else’s web service? Then you’d better trust them, because you’re going to send them data and you don’t even control exactly what… the GPT decides that.
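
To make that last point concrete, here’s a minimal sketch of what an action’s request schema boils down to, plus a quick audit loop. The endpoint and field names are hypothetical; the point is that the builder declares the fields, but at runtime the model fills them in from the conversation, so anything a user typed can end up in the payload:

```python
# Minimal sketch of a GPT action's OpenAPI request body.
# Endpoint and field names are hypothetical.
action_schema = {
    "paths": {
        "/summarize": {
            "post": {
                "operationId": "summarizeDocument",
                "requestBody": {
                    "content": {
                        "application/json": {
                            "schema": {
                                "type": "object",
                                "properties": {
                                    # Free-text field: the model may paste a
                                    # user's entire document or message here.
                                    "text": {"type": "string"},
                                    "language": {"type": "string"},
                                },
                            }
                        }
                    }
                },
            }
        }
    }
}

# Quick audit: enumerate every field this GPT could transmit to the third party.
for path, methods in action_schema["paths"].items():
    for method, op in methods.items():
        props = (
            op.get("requestBody", {})
            .get("content", {})
            .get("application/json", {})
            .get("schema", {})
            .get("properties", {})
        )
        for field, spec in props.items():
            print(f"{method.upper()} {path} -> {field}: {spec['type']}")
```

An audit like that at least tells you every field a given action could send out, which is a reasonable starting point for vetting, even though it can’t tell you what the model will actually put in those fields.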

You know, I think if it were me, the big thing I’d ask them is: “Okay, what exactly are you wanting to get out of GPTs? Is there something you think (or see) that would be useful here? Should we consider buying ChatGPT Team instead?”

Clearly, they seem to be interested in something. If they only want to pick it up because it’s the newest, hottest thing right now, there’s not going to be much to relay back to them, because they don’t know what they’re looking for. If you can coax feature or functionality requests out of them, that may provide a better clue for how to advise them further.

1 Like

The organization’s risk profile should inform risk management here, starting from an assessment of the IT risks, especially of how these tools would actually be used. As a general rule, if you are concerned about sensitive information, keep it safe: do not share it with outside services, even as GPT knowledge. Also consider your organization’s profile. The more well-known your organization is, the greater the risk, even if you don’t include important information; things like customer service can still be hit by intentional malfunctions and reputational attacks. Reputation is even more important than some information.

If important information is involved, there are still some alternative GPT implementations you can manage yourself. But we should not forget the other implications of using them in organizations.

1 Like

I would put on my auditor hat and think from this perspective.

What are the requirements you have for a regular supplier?
You can easily get most, if not all, of the required documentation from OpenAI. But for each GPT there is no way around contacting the provider and seeing what kind of documents they can come up with, so you can evaluate whether this is going to meet your specific requirements. Since this approach could potentially cover your first two questions, it may work. At the very least, you have something to point to when communicating why the decision to (not) whitelist a GPT was made.

Considering number three: using only the store, you cannot know whether there have been changes to a GPT, which version is currently deployed, or how the changes affect your assessment against the requirements.
Again, usually this is handled by the supplier communicating these changes. But I don’t see any automation at the current point in time.
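
One partial workaround, if the builder happens to publish the OpenAPI spec for its actions at a stable URL (the GPT editor can import specs from a URL, and some builders host theirs publicly): periodically hash the published spec and flag the GPT for re-review whenever it changes. A rough sketch, with hypothetical URLs and file names:

```python
"""Flag whitelisted GPTs for re-review when a published action spec changes.

Sketch only: assumes each approved GPT's builder hosts its OpenAPI spec
at a known, stable URL. All names below are hypothetical.
"""
import hashlib
import json
import urllib.request

# Hypothetical registry of approved GPTs -> published action spec URLs.
APPROVED = {
    "vendor-report-gpt": "https://api.example-vendor.com/openapi.json",
}
STATE_FILE = "approved_spec_hashes.json"


def fetch_hash(url: str) -> str:
    """Download the spec and return a SHA-256 fingerprint of its bytes."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()


def main() -> None:
    try:
        with open(STATE_FILE) as f:
            known = json.load(f)
    except FileNotFoundError:
        known = {}  # first run: record baselines, flag nothing

    for gpt, url in APPROVED.items():
        current = fetch_hash(url)
        if known.get(gpt) not in (None, current):
            # Spec changed since approval: hook your re-review workflow in here.
            print(f"RE-REVIEW: {gpt} action spec changed at {url}")
        known[gpt] = current

    with open(STATE_FILE, "w") as f:
        json.dump(known, f, indent=2)


if __name__ == "__main__":
    main()
```

It only catches schema changes, not edits to a GPT’s instructions or knowledge files, so treat it as a supplement to the supplier communicating changes, not a replacement.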

Taking a step back and considering again: if a GPT piques your interest but there are administrative or legal requirements to be met, then this has to be managed with the supplier directly.

Hope this helps, somewhat.

3 Likes

First off, a huge thank-you to everyone for all the insightful input. It’s been really valuable.

Reflecting on the discussion, it seems like this aligns closely with my initial hunch. I believe the integration of custom GPTs in enterprise settings is more about aligning with an organization’s risk profile and its risk-management methodologies than it is primarily an IT concern.

Sounds like it’s about the requirements and standards we hold for any technology partner, rather than focusing solely on the perspective of the tech team.

2 Likes