ChatGPT Team is insecure for businesses and organizations without some form of Invitation Control

ChatGPT Team, a self-serve subscription plan designed for organizations and businesses wishing to adopt ChatGPT for use among their teams! - OpenAI Support Bot

We are a smaller company with slightly fewer than 100 members. We have been trying to get in touch with OpenAI about Enterprise since September but have not received any response. Since October, we have been reimbursing employees’ ChatGPT Plus subscriptions. We have been huge proponents and users of OpenAI’s services at our organization.

When the ChatGPT Team subscription launched, we felt seen. Finally, we could get our employees into a safe space where conversations were not used for training, and Custom GPTs could be private to our organization and actually be useful because they could contain company information that couldn’t be extracted by malicious actors who found the links.

This was the Enterprise experience we were waiting for. Due to our size, this is the only Enterprise option that we have, and I imagine we are not the only organization in this class.

After paying for the first 40 seats at the annual rate, we discovered that all members can invite an arbitrary number of new members. Not only that, but any member can obligate the Team to pay for additional seats, pro-rated, and neither Admins nor Owners can prevent this. These costs do not appear to be easily refundable.

I’m sorry for the strength of my language, but this is completely absurd.

There are two major security and billing issues with this:

  • Malicious actors are provided an unauthorized access vector to Custom GPT knowledge via every member.
  • Malicious or non-malicious actors can generate unauthorized billing costs by inviting new members, even outside the range of baseline seats.

These vulnerabilities are incredibly easy to abuse and impossible to protect against. Adding new members can even be accomplished via CSV.

Because ChatGPT Team is the only option available for our organization, there needs to be some form of invitation control. This design choice is so unexpected for a company like OpenAI. No organization our size can ethically use this product without this feature.

One hundred members is too large a group to be able to fully trust every member with security and billing repercussions.

I’ll repeat: ChatGPT Team needs some form, any form, of invitation control.

16 Likes

As a category moderator who reads all posts and was working to get an answer for the ChatGPT Team invite question, I was equally surprised by the response.

In reading your post, two thoughts come to mind:

  • As many have noted, OpenAI has customers willing to pay more for additional products like ChatGPT Team, each with different configurations. In your case, the ability to restrict invitations to just the owner of the ChatGPT Team account would be a welcome option or additional plan.
  • The owner of a ChatGPT Team account should have the ability to set a hard limit on the number of seats and adjust it as needed. That way, the current design could still allow any user to invite others, but an invite would fail if accepting it would push the workspace past the hard limit (a rough sketch of this check follows below). Not a perfect solution, but one that might mitigate the problem and be an acceptable change for OpenAI to make to ChatGPT Team.
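
To make the seat-limit idea concrete, here is a minimal sketch of the kind of check involved. Everything in it, from the `Workspace` class to the field names, is a hypothetical illustration of the logic, not anything OpenAI has said about their implementation.

```python
from dataclasses import dataclass

@dataclass
class Workspace:
    # Hypothetical workspace model: a hard seat cap set by the owner,
    # plus the members and invites that already count against it.
    seat_limit: int
    active_members: int = 0
    pending_invites: int = 0

    def send_invite(self, email: str) -> None:
        # Fail at invite time so members cannot queue up more invites
        # than the workspace has budgeted seats for.
        if self.active_members + self.pending_invites + 1 > self.seat_limit:
            raise PermissionError(
                f"Seat limit of {self.seat_limit} reached; cannot invite {email}"
            )
        self.pending_invites += 1

    def accept_invite(self, email: str) -> None:
        # Re-check at acceptance time in case the limit was lowered
        # after the invite went out.
        if self.active_members + 1 > self.seat_limit:
            raise PermissionError(f"Seat limit of {self.seat_limit} reached")
        self.pending_invites -= 1
        self.active_members += 1
```

Under a rule like this, any member could still send invites, but none of them could push billing past the cap the owner configured.
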
5 Likes

What would solve this is an option in the Manage Workspace settings to control whether the Member role can invite new members.

It’s that simple. This feature would make us happy customers excited to purchase the remaining seats for each member of our organization instead of seeking a refund.
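
For illustration only, a permission check along these lines might look like the sketch below. The setting name `members_can_invite` and the role names are assumptions made up for the example, not an actual OpenAI setting or API.

```python
from enum import Enum

class Role(Enum):
    OWNER = "owner"
    ADMIN = "admin"
    MEMBER = "member"

# Hypothetical toggle an Owner or Admin could flip in the
# Manage Workspace settings: may the Member role send invites?
workspace_settings = {"members_can_invite": False}

def may_invite(role: Role) -> bool:
    # Owners and Admins can always invite; Members only when the
    # workspace setting explicitly allows it.
    if role in (Role.OWNER, Role.ADMIN):
        return True
    return workspace_settings["members_can_invite"]

assert may_invite(Role.ADMIN) is True
assert may_invite(Role.MEMBER) is False  # blocked while the toggle is off
```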

7 Likes

First, I personally do not disagree with you; to me, it makes sense to have a user tier that cannot add new members.

But this does not appear to be a completely unheard-of permission system, though complaints like yours about it are not unheard of either.

At the end of the day, the best information we have is that this is a deliberate decision by OpenAI—it’s not a bug, it’s a feature.

If this type of permission structure doesn’t work for you, that’s understandable. Not every product is going to perfectly fit the needs of every person or company.

You raise two issues:

  1. Malicious actors are provided an unauthorized access vector to Custom GPT knowledge via every member.

Presumably, this is not a new vulnerability. Every member already has access to the Custom GPT knowledge, right? So there are no new vectors here.

  2. Malicious or non-malicious actors can generate unauthorized billing costs by inviting new members, even outside the range of baseline seats.

This is true. But my understanding of the thinking is that, because everyone in your organization is an entity known to you, you have other recourses available to you.

If they’re a malicious employee, they can be fired and sued by you. If someone accidentally adds 100 seats, it’s maybe an expensive learning experience.

At some point you need to be responsible for who you trust. If you own a restaurant you need to trust your chef isn’t going to poison the guests. If you own a party planning company, you need to trust your employee isn’t going to order 100,000 custom invitations instead of 100. There are all sorts of ways employees can cost businesses lots of money intentionally or not. Early in my own career I made a huge blunder that cost the company I worked for a lot of money.

Could OpenAI restructure the product so that you don’t need to trust your employees and coworkers at all, or so the potential damages are much less? Probably. Should they? I don’t know.

You wrote,

“Because ChatGPT Team is the only option available for our organization, there needs to be some form of invitation control.”

That’s not true. You wrote just a bit earlier that,

“Since October, we have been reimbursing employees’ ChatGPT Plus subscriptions.”

So that’s at least one other option, though I understand why you may not like it.

At the end of the day, though, this is the product. I encourage you to reach out to https://help.openai.com with your concerns; maybe they’ll change course and modify the permission structure.

You’ve already gotten one direct response from an OpenAI representative stating the permission structure is by design.

Honestly, while I do absolutely see the risks inherent in the current system, I do think the likelihood of some type of catastrophic accident or abuse is vanishingly small.

But I do hope you find a resolution that sits well with you. And, again, I also wish they had a permission structure more in line with what you’re proposing.

1 Like

For the Team plan, adding a permissions layer would make it simple to avoid these losses, so why not?

People should be responsible for whom they trust, but foreseeable losses should be avoided.

The Enterprise plan already has the ability to control invitation permissions; it would be very easy for OpenAI to apply this permission layer to the Team plan.

6 Likes

@elm, thank you for your kind and well-thought-out reply. Although you personally do not disagree with the solution, I see a lot of reasons why this shouldn’t simply be accepted as working as intended, whether it is by design or not.

I respectfully find issue with some of your points and would like to address them:

While it’s true that such permission structures exist on other platforms, the unique context and use cases of ChatGPT Team necessitate a more nuanced approach to invitation controls. Solutions that work for one platform may not be suitable for another, especially when considering the sensitive nature of Custom GPT knowledge and the privacy requirement when providing company-related context.

The issue is not just about existing members’ access to Custom GPT knowledge but the ease with which new, possibly unauthorized, members can be added, thus expanding the risk surface unnecessarily. Limiting invitation rights significantly mitigates this risk. There is a huge difference between a few accounts and all accounts being able to incur unwanted costs, or to invite numerous new malicious users, if compromised.

While internal recourse is possible, prevention is always preferable to remediation. The costs, both financial and reputational, of addressing misuse after the fact can far outweigh the cost of simple preemptive control measures. OpenAI can easily prevent this risk for customers by providing a feature that already exists, and it makes perfect sense to include it with the tier of service that is marketed for organizations and businesses of up to 150 people.

Trust within an organization is essential, but so is providing the tools and controls to manage that trust effectively. Trust does not negate the need for the most basic of safeguards against accidental or intentional misuse.

Reimbursing ChatGPT Plus subscriptions is a stopgap, not a solution. It lacks the control and integration capabilities that make ChatGPT Team appealing for organizational use. As companies catch up to the AI revolution and develop compliance and safety policies, as we are, we find ourselves in a situation where neither the less expensive, unmanageable option nor the manageable, more expensive option fulfills our basic security and compliance needs.

Because of this small design choice, a class of companies cannot use any of the subscription platform services provided by OpenAI. Think about that for a second.

With one change to the permissions, companies of every size could be enthusiastic clients, as we would like to be.

Without it, companies with 30 to 150 employees cannot, and by basic security requirements should not, buy seats for all of their employees, even if they would like to.

I think we are trying to help OpenAI here. There is another thread where an organization was requesting a refund because of the lack of this capability. If we can acknowledge that this is a deal-breaker for a class of companies and that they cannot bring their business to OpenAI, we can help OpenAI increase their market share and customer satisfaction.

It starts by identifying this as a potential oversight, which OpenAI support has already acknowledged in our communications with them. They were very responsive to our request for support, and I will keep this thread updated.

@elm, please forgive my disagreements; I hope that I have responded respectfully. I am having trouble seeing the value in minimizing the source of the problem while not disagreeing with the proposed solution, but I believe you are doing your best to provide balance and perspective, and for that I am grateful.

I do appreciate your hopes and wishes for us, as they align with what is necessary for us to become the OpenAI-powered company we aspire to be. We would very much like to continue being strong proponents and avid users of their services, as we have been nearly since the beginning.

2 Likes

I understand where you’re coming from, I really do.

My only purpose was to point out that the system is operating exactly as intended.

This isn’t an oversight, it’s a design choice.

Maybe they’ll change their minds, but it’s not as though they were not aware of the ramifications of that choice when they made it.

If you’re talking about the chat interface at help.openai.com, you may have been talking to a bot that tends to be overly agreeable.

The whole point of this “invitation control” is to avoid the potential risk in the first place and avoid the need for the “fire and sue your employee” kind of trouble.

Maybe this “trust or sue” philosophy works for OpenAI, but please consider that you are providing a service to billions of users and millions of companies.

3 Likes

There should be a gate where admins approve those who have been invited before they join the workspace.
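
As one sketch of how such a gate could work (the flow and names below are hypothetical, not a description of an existing feature): invites would land in a pending queue and would only become billable seats once an Admin or Owner admits them.

```python
# Hypothetical approval gate: invited users sit in a pending queue and
# only become billable members after an Admin or Owner approves them.
pending_invites: dict[str, str] = {}   # email -> who sent the invite
members: set[str] = set()

def invite(email: str, invited_by: str) -> None:
    pending_invites[email] = invited_by   # no seat is billed yet

def approve(email: str, approver_role: str) -> None:
    if approver_role not in ("owner", "admin"):
        raise PermissionError("Only Owners or Admins can admit invited users")
    pending_invites.pop(email)            # KeyError if no such invite exists
    members.add(email)                    # the seat becomes billable only now
```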

3 Likes

I would just like to add another voice here: I’m a director of ops for a ~150-person company and am eager to adopt ChatGPT Team. However, the inability to restrict new billable and external users from joining our ‘secure’ team workspace is a financial and security threat to our business.
I put ‘secure’ in quotes because the team workspace is intended to be secure; however, if any user can add anyone (accidentally or otherwise), that renders the workspace insecure.

2 Likes

Just tried ChatGPT Team and immediately noticed this security threat. If one of my team members’ accounts got hacked, the attacker could run up an enormous bill for the team owner by inviting users via CSV. The lack of invitation and billing controls is absurd. At least allow us to set a hard seat limit.
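
For what it’s worth, a hard seat limit would also blunt the CSV scenario: the whole batch could be rejected if it would push the workspace past the cap. A rough, purely hypothetical sketch (the limit value and the function are made up for illustration):

```python
import csv
import io

SEAT_LIMIT = 40        # hypothetical hard cap set by the workspace owner
current_members = 38   # hypothetical current headcount

def validate_csv_invites(csv_text: str) -> list[str]:
    # Reject the entire batch if it would exceed the cap, so a compromised
    # account cannot run up the bill through a single bulk upload.
    # Assumes the uploaded CSV has an "email" header column.
    emails = [row["email"] for row in csv.DictReader(io.StringIO(csv_text))]
    if current_members + len(emails) > SEAT_LIMIT:
        raise ValueError(
            f"Batch of {len(emails)} invites would exceed the seat limit of {SEAT_LIMIT}"
        )
    return emails
```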

3 Likes

+1 on this one.

We just decided to go for the Team license, but this is a major issue. Members should not be able to invite new members at all. It is also unfortunate that there is no “Management” account that does not require a license.

In our case we just want to manage the members, not use ChatGPT with the owner account. I really do not understand why there is no alignment with obvious business practices in terms of permissions and management. We are paying the price you are asking for and are happy with the return on it, but these issues kill any interest in using this at a large scale in our company.

2 Likes

Thank you to everyone who has chimed in on this. I had a feeling we weren’t the only ones facing this problem, and I was surprised that this turned out to be the first thread in the forums where there is some consensus that it is an issue.

I’ll provide an update on our situation: This has significantly dampened the excitement for the rollout of something we have eagerly anticipated for a long time. After some uncomfortable internal discussions, we determined that we cannot revert to reimbursing Plus subscriptions, as they don’t provide the level of privacy we require. For our security and compliance policies, we need to ensure that no one in our company is using ChatGPT in a way that allows conversations to be used for training.

However, we cannot extend Team subscriptions to everyone due to the large number of employees we have, which makes it cumbersome to monitor the member list (it would span 4 pages in the management tab) for unauthorized members. Additionally, it’s not budget-secure: if we allocate our full budget, we have no buffer for unforeseen or malicious billing incidents, which the current invitation and admission behavior makes possible.

Our discussions with OpenAI Support have provided optimism. I’d like to give a huge shout-out to them for their responsiveness and for making us feel heard. Although it’s clear that the human on the other end is using some version of GPT to draft their final responses, we felt our concerns were acknowledged. They have recognized the potential issue and are discussing internally what actions to take. We are optimistic that they will commit to a resolution soon.

Until a solution is implemented, we feel it best to stick with the seats we’ve already purchased, which is slightly less than half of what we initially budgeted for. We’ve gone through the tedious process of identifying who uses ChatGPT Plus the most with company information and onboarding them with detailed instructions on usage. However, no one feels completely comfortable fully utilizing internal Custom GPTs, as there is a risk of knowledge and instruction extraction if an unauthorized user gains access to the workspace through an unauthorized invitation.

As mentioned before, this has significantly dampened our enthusiasm for something we were very much looking forward to. Additionally, it has created a lot of extra work, primarily for me (see this thread). We’ve had a positive experience communicating with OpenAI Support and remain optimistic, but the situation is far from the satisfactory outcome I believe OpenAI would prefer. It’s safe to say that OpenAI’s reputation within our organization has been slightly tarnished, though the damage is not beyond repair. This situation is fixable.

ChatGPT Team needs enhanced security features, including invitation controls, to meet the needs of organizations and companies as advertised. Simply migrating the Enterprise feature that prevents the ‘Member’ role from inviting new users would be the best solution, in this writer’s opinion.

We are feeling optimistic.

Thank you for reading :pray:

6 Likes