Beta Features Rollout & Career Opportunities Question

Hey OpenAI dev community! Hopefully all is well with everyone!
So, I have two big questions I figured I could handle in one post:

  1. Does anyone know roughly how long it takes for Plus users to get all the features rolled out, or how it’s typically decided who gets which beta feature? I’m assuming there are way too many users for anyone to decide this manually, right? I have access to plugins and Code Interpreter, but I’d really like to explore the custom instructions feature after having consistent issues programming with ChatGPT lately. I’m one of the power users who has been using GPT-4 to help me program all kinds of stuff. Ever since the June update, though, as others have already mentioned elsewhere, its behavior has definitely changed, and its drop in programming performance has had a genuine impact on how I’m able to use it. It doesn’t handle chain-of-thought reasoning as well as it used to. I was hoping I could use custom instructions to work around some of those pain points, but I don’t have access yet, and so far GPT is still the tool that works best for me here. I haven’t tried Copilot yet, but it isn’t really set up for iterative prompting the way GPT is.

  2. How difficult is it to get hired at OpenAI? That might be a silly question, I know, but I’ve been trying to break into the tech industry for a while now as a recent grad. I’ve been a self-taught programmer since I was a kid, I’m now well into my 20s, and I graduated with a degree in Applied Linguistics hoping to specialize in computational linguistics or something in the AI/NLP industry. I’ve also taught myself an extensive set of cybersecurity skills over the years, another field I’m passionate about. I’m very confident in my programming and IT skills and in my linguistics knowledge, but it seems everywhere is looking for either years of professional experience (including, for some reason, pretty much every entry-level job) or a Master’s degree now. OpenAI wouldn’t hire anyone based on proficiency demos, skill demonstrations, or a background like that, would they?

  1. See if you can’t enable custom instructions right now:

“This feature will be available in beta starting with the Plus plan today” doesn’t describe it as being limited to only a few users.

  2. The job positions advertised are for pretty high-tier professionals. You can see the required background:

I suspect that’s an investor-friendly list; many others may be recruited, contracted, or outsourced for the company’s other, less prestigious roles.


Thanks for letting me know about the job stuff! Yeah, I kind of assumed as much, but it was worth a shot.
As for the beta problem: strangely enough, the option to enable that beta feature doesn’t pop up on my desktop, which I’d already checked before posting this. After logging in through my phone, however, the option does appear under beta features for some reason. On desktop I’ve closed and opened new windows, logged off and on again, etc., but it never showed up. Still doesn’t, in fact, which I find rather unusual.

You can try a private tab and log in with that. It could be that a cached version of the JavaScript and other client-stored info hasn’t been updated.

The feature basically just copies and pastes the same text onto every question you write.
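
A minimal sketch of what that amounts to, assuming the feature simply prepends your saved instructions to each request (the message structure mirrors the chat API; the instruction text and helper function here are made up for illustration):

```python
# Hypothetical illustration: custom instructions behave roughly like
# prepending the same saved text to every message you send.

CUSTOM_INSTRUCTIONS = (
    "I am a programmer. Prefer complete, runnable Python examples "
    "and explain your reasoning step by step."
)

def build_messages(user_prompt: str, history: list[dict]) -> list[dict]:
    """Assemble a chat request with the instructions always included."""
    return (
        [{"role": "system", "content": CUSTOM_INSTRUCTIONS}]
        + history
        + [{"role": "user", "content": user_prompt}]
    )

print(build_messages("Why does my regex not match newlines?", []))
```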

You can also make sure you enable (or make sure you block) the tracking, trialing, and experimenting OpenAI does on its lab-rat users via Statsig and feature gates.


Good to be aware of, thank you.
Do you know if the custom instructions count toward the 4,096-token input limit?
Tbh I’m still torn about the lab-rat stuff they’re doing. This is the first time I’ve ever interacted with a tool that I actually feel gets better if the right people see what I’m doing with it. Granted, the irony here is the honest-to-god performance dip in the recent update. I’m glad folks published a research paper proving we weren’t crazy, but for a second I was about to share some of my own before-and-after conversations to prove the issue is legitimate.
Do we know why OpenAI isn’t being fully honest or transparent about the issue? I feel like if they actually told us what was going on we’d be a lot more understanding, especially in the dev community. Right now it just feels like we say “hey, the tool’s acting weird, what’s going on?” and their response is simply “no it isn’t,” when a lot of us who use it often enough have the receipts to prove it (and so does OpenAI themselves, lol).
If they want us to stay for a while, they’ve got to be careful, because GitHub Copilot just announced its own chatbot integrated into the VS Code IDE for enterprise/business beta users. ChatGPT Plus is still a subscription well spent, but that Copilot Chat is looking quite nice. If it’s released to individual users at some point in the near future, I and a lot of GPT programmer power users are going to flock there pretty quickly, especially if OpenAI refuses to acknowledge and address GPT-4’s significant dip in accurate, reliable code generation.

I can answer the first point with technical information.

First, as a ChatGPT Plus user, you get to enjoy GPT-4’s larger context length. However, how that length is actually used isn’t directly exposed to you or completely within your control.

The insertion of custom instructions must consume tokens; they are presented as part of the input to the AI, but at least they are under your control. With proper backend management (unlike you typing such specific instructions into every prompt), the feature might reduce the amount of chat history consumed: since we know the instructions will be given again, they may not need to be stored as something you just said in every single chat history entry.
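
As a rough way to gauge that cost, you can count the tokens in your own instruction text with the tiktoken library (cl100k_base is the encoding GPT-4 models use; the sample instruction text is a placeholder, and the exact overhead ChatGPT wraps around it is not public):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4 models

instructions = (
    "What would you like ChatGPT to know about you? I write Python.\n"
    "How would you like ChatGPT to respond? Show full code, no apologies."
)

# Every request that includes these instructions pays this cost again.
print(f"custom instructions: {len(enc.encode(instructions))} tokens")
```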

With ChatGPT, that 8,192-token context length has a bite taken out of it for the area where your answers are formed, specifically 1,536 tokens at last distinct measurement. Take away another 13 tokens just for the overhead of a single question, more tokens for the message telling ChatGPT what it is, more tokens for each activated plugin, and of course your chat history, and you have a smaller input budget than you might expect.
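
Putting those figures together as back-of-the-envelope arithmetic (the system-message and per-plugin sizes below are assumptions for illustration, not measured values):

```python
# Rough input budget for ChatGPT with GPT-4, using the figures above.

CONTEXT_LENGTH  = 8192   # GPT-4 context window
RESERVED_OUTPUT = 1536   # area where answers are formed
PER_MESSAGE     = 13     # overhead of a single question
SYSTEM_MESSAGE  = 100    # assumed: the "what ChatGPT is" preamble
PER_PLUGIN      = 300    # assumed: tokens per activated plugin spec

def input_budget(plugins: int, history_tokens: int) -> int:
    """Tokens left for your actual prompt after the fixed costs."""
    return (CONTEXT_LENGTH - RESERVED_OUTPUT - PER_MESSAGE
            - SYSTEM_MESSAGE - plugins * PER_PLUGIN - history_tokens)

print(input_budget(plugins=2, history_tokens=3000))  # -> 2943
```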

The ChatGPT interface doesn’t count actual tokens; it limits the amount of input you can provide by characters (and one can actually exceed the real limit and freeze up a conversation with lots of miscounted Japanese, for example). It also doesn’t consider how much of your old conversation must be displaced by such a large input. So by “input,” meaning user input, you really are limited only by the web interface, and you’re simply not warned about the loss of coherent memory when you overwhelm the AI with more input.
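
You can see the mismatch between characters and tokens directly; Japanese text, for instance, often costs more tokens per character than English (again using tiktoken; the sample sentences are arbitrary):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "english": "The quick brown fox jumps over the lazy dog.",
    "japanese": "素早い茶色の狐がのろまな犬を飛び越える。",
}

# A character-based limit treats these the same; the tokenizer does not.
for name, text in samples.items():
    print(f"{name}: {len(text)} chars -> {len(enc.encode(text))} tokens")
```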

“Getting better” means different things. If the AI doesn’t generate hate speech and embarrassing screenshots, or let users take over developer apps for their own purposes, that’s one definition of better. If declining to answer the bulk of questions 10% of the time means a quarter of the tokens and lower compute costs, that’s another definition of better.


Ohh, now this is interesting information!
I mean, I’d learned about the loss of coherent memory over time through experimentation alone, but now the “why” makes a lot more sense. Are you suggesting there might have been a recent reconfiguration of how ChatGPT handles its context memory within conversation threads, then?
I’ve been doing my best to see how much technical information ChatGPT can regurgitate about itself, and just how technical its help can get. I was working with some files and bigger data, trying to see what its limits were for processing a certain amount of data at one time. Once I learned about its tokens, I tried to find an easier translation mechanism, so to speak, to condense my input so it carried more information within the same token limits and character input limits (and, in Code Interpreter’s case, within the size of the files as well). Once you understand how it processes and “looks” at data through tokens, in theory you should be able to modify or “compress” data so the input carries the fewest tokens for whatever you’re trying to ask. Using Code Interpreter, you can just upload a .txt that asks a question or holds more data than the web interface’s character limit allows; as long as it doesn’t exceed the maximum number of tokens it can process at once, you should be fine.
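
As a small sketch of that “compression” idea, here’s one way to compare the token cost of the same payload before and after minifying whitespace (a crude stand-in for whatever transformation you’d actually use, with made-up sample data):

```python
import json
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

data = {"rows": [{"id": i, "value": i * i} for i in range(50)]}

pretty   = json.dumps(data, indent=4)               # human-readable
minified = json.dumps(data, separators=(",", ":"))  # token-lean

for label, text in [("pretty", pretty), ("minified", minified)]:
    print(f"{label}: {len(text)} chars, {len(enc.encode(text))} tokens")
```
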
And yes, everyone has a different definition of “better”; I’m not going to argue that. In this case, the beta features, their limitations, and their functionality are what I’m calling “better”. They genuinely create some legitimate QoL improvements and fascinating scenarios, as you’ve already described. Plus, that’s essentially what beta testing is, right? The people using and testing the beta features and their wild possibilities hand information about their usage back to the provider so the features can be refined and (ideally) improved. “Improving” can also mean different things here, but the point is that any improvement to these features is good.
I still have my own opinions; I disagree with several things OpenAI does and is attempting to do, while agreeing with others. Honestly, though, there’s no data I’m worried about in anything I’ve prompted so far, so the trade-off for greater improvements is fine with me in this case. I have more autonomy over my own data here than with Google, I’ll say that much, lol. Besides, why worry about that now, when this thing was already built on public internet data? It has likely already scooped up my angsty emo Reddit posts from 2007, and trust me, I’d rather leave those in the depths of internet hell than care about how OpenAI is processing my bullshit Python code that I could just change myself anyway.

The Custom Instructions are currently available to users from the UK and US only. That may be the reason why you are not seeing it in the interface.

I often feel like exciting beta features reach me late. I’d purchase access if that were possible, but there’s no option. I’d like access to those betas if it’s not too much trouble; I’m ready for them, quirks and all…

Though I’m two PhDs away from the OpenAI career opportunities, if you’re ever looking to cooperate with a junior cloud admin (Azure) in Finland, please keep me in mind. :smile: