Hi, I tested the new ChatGPT Team account thoroughly over the last couple of days, in particular the role handling.
Most importantly, I’m missing a ‘read only’ or ‘use only’ role. I’m really concerned about the current role handling, because if everybody is allowed to add new members, each new member automatically adds $25 to my billing.
Also, if everybody is allowed to create new GPTs, there’s no way to keep a clean and structured team space.
I guess this is meant very much as a business tool, with the expectation that members have some external reason not to add countless others to the team, such as the risk of getting fired.
Creating a public team presents some huge challenges if any member can upload a CSV file of ~1,000 email addresses and invite them all to the team; at $25 a seat, that’s on the order of $25,000 added to the bill.
Even if you were Johnny-on-the-spot and deleted them all, along with the offender, I imagine it would be a huge stressor just knowing the potential was there. Tagging @cass to ensure he sees this.
I imagine they are viewing it through a very specific, idealistic, if perhaps a bit naive, lens of “why would a team member ever do anything that could hurt the team?”
Honestly, I think it’s delightfully cute in its innocence. Exactly the kind of thing a researcher who’s all about the science would do.
Since there are only a few users on this forum who have access to the ChatGPT Team, could you please provide some details about it and include screenshots? I’m sure there are others like me who are eager for this information.
I agree with the previous viewpoint. It is important to have a more secure team that is not disrupted by the actions of one individual. I fully support the proposal and hope it can be addressed effectively.
"Hi there! Regarding workspace management, I believe that limiting the ability to invite new members and create GPTs to only owners or admins could enhance security and maintain order.
I share everyone’s concerns here. There are many use cases where you do not want members to be able to add unauthorized users.
I reached out to OpenAI’s billing support about it, and this is what they said (see below). It sounds like reaching out to them about this will help build internal awareness and could lead to them changing it.
" Thank you for reaching out to OpenAI support. We acknowledge your concerns regarding the responsibility given to members to add new members, and we understand why this may be concerning to you.
Currently, members are permitted to invite others as part of their workspace functionality.
In the event that an unauthorized member is added, please be aware that we are available to assist and will not tolerate any deceitful behavior.
We appreciate your feedback and we have taken note of your comments and will consider them in our internal review process. Your experience and satisfaction with our services are very important to us. Your input is invaluable in helping us enhance our service and make our products more user-friendly for everyone.
Should you have any more comments, suggestions, or other concerns, please feel free to contact us. We’re here to ensure your experience with ChatGPT Team plan is as effective and enjoyable as possible."
Again, ChatGPT Team is a business product, not a personal one.
The idea is that a ChatGPT Team account would exist in an organization where there would presumably be the possibility of adverse consequences for someone on a team who invited others they should not invite.
Business teams with a large number of college interns, or with numerous clients who themselves have many team members, are two real-world scenarios I’m aware of at the moment where this unauthorized-member and potential runaway-billing concern is slamming the brakes on use of the platform. Since you can’t exactly fire these kinds of offenders, I’m not sure how the threat of consequences would apply here.
If you don’t trust the interns, don’t give them a ChatGPT Team account.
Any well-run business with a large number of college interns likely has the resources necessary to immediately identify and remove improperly added accounts and the account that added them.
If you’re a large enough company that you’re giving a large number of college interns ChatGPT Team accounts, ChatGPT Enterprise might be more appropriate for you.
Regarding clients:
Clients probably shouldn’t be in your ChatGPT Team workspace. I can think of no instance where that would be appropriate and make sense. It’s like giving clients access to your internal emails.
Look, I’m not disagreeing that finer granularity of permissions would be nice, and that a bigger difference between what an admin and a member can do makes sense and would be in line with standard security practice.
I’m just pointing out that most of the people complaining about this aren’t exactly the types of people for whom this product is intended.
The use case in both these instances is training. Right now, that’s happening in a very resource-intensive way that takes weeks to accomplish because everything has to be done as in-person one-on-ones, which doesn’t scale very well. CustomGPTs have already been built that shorten this training time to a few days and that scale incredibly well because almost no human intervention is needed anymore.
In both cases, the whole point is to give the Team resources to these users to gain these incredible benefits. With the current subscription regime, that benefit has to be weighed against the risk of any member inadvertently or intentionally adding members (which could be innocent enough) and then causing a billing nightmare for the workspace owner.