Anyone have any thoughts on the new "Custom Instructions" in ChatGPT? (Future of OpenAI Thoughts)

Hey all,

Not sure if anyone has seen it yet, but ChatGPT has a new “Custom Instructions” beta feature under the settings. It’s pretty cool, because it lets you load some info and system-style instructions right in the app. Butttt… as a developer, it kind of worries me, because it is one of the first signposts of a strategy I hope they don’t take.

A while back, I suggested that OpenAI will probably try to be an “everything AI app”. Between the plugin store, the code interpreter, and this new “make your own style of GPT”, I’m more confident that they will sweep up all of the low-hanging-fruit opportunities as this community and the broader ecosystem experiment with different tools and find product-market fit for them. If that is the case, this is pretty bad for pretty much anyone spending money in hopes of making a unique app that can take off and make money.

I don’t really see a point in spending time and money on projects that OpenAI and team will end up just implementing themselves and rolling out across their network. Hopefully antitrust law will halt that effort, but that usually takes a while to take effect. My guess is that their next efforts will include document readers, voice synthesis, speech recognition, and/or some sort of custom/personal vector database (VDB) so GPT can remember you.

Whether you agree with me or not, I wanted to know your thoughts on where you think OpenAI is headed in terms of products. I explained my thoughts solely as context for the type of thinking I am interested in. Your opinion on my opinion is less important, I was just wondering what people’s genuine thoughts were about where OpenAI is going. My hope is they stay just infrastructure, but I’m finding that more and more doubtful.


I kinda wish I could have multiple personalities and attach them to a new chat at will. Otherwise, useful so far.


This has been one of the most requested features by the user base. In the Discord, this was requested maybe a thousand times a day, probably more, as I did not catch every request.

It is always difficult when the manufacturer implements a feature you have had a niche in; it happens a fair bit with car products and also in the phone and electronics industries. I think OpenAI will continue to add features where the demand is there.


I feel you, but we cannot expect to harvest fruit hanging from somebody else’s tree out over the open street, there for anyone to grab.

For example, we can be sure that persistent file uploads are coming, and many “chat with your data and documents” apps will have to defend their niche or start actually finding one.


The fact that we can “chat” with a machine in natural language and it seems to understand even the tiniest bit seems absolutely wild to me. Most mornings I wake up and, after a few seconds, it hits me that AI is real.

Now, given that the world’s problems do not vanish the moment you start talking about them, NLP is still a tool and not the answer to everything. If your product is overly simplistic and the only unique thing about it is that it now has AI attached, then you are not offering a long-term solution.

If you build a product where the NLP element brings value and enhances the overall UX of something people actually want to use, awesome! You will have customers for life. It’s not so much about the new “thing”, custom prompt memory in this case; it’s about how the user interacts with it. Is it easy to use? Does it have multiple prompts you can pick from? Do the prompts learn your preferences and update automatically? These are all features you could add to give value as an offering.

We have basically just discovered electricity, if the power generator company starts to offer free lightbulbs, build a better lightbulb.


I think this is a great step in the right direction for OpenAI. I’m hoping this feature will be released to the EU & UK soon.

One can only ponder why it hasn’t been released in the EU & UK yet, but my guess is that it has to do with GDPR compliance & privacy rules :laughing:


Consider: every 4 million conversions of their 100+ million users to ChatGPT Plus is roughly a billion dollars in annual revenue (4M × $20/month × 12 months ≈ $960M).

And I’m guessing the average user’s usage pattern over time is going to be like a new year’s gym membership.

How could OpenAI possibly make that much by being an API chat engine for resellers (that can’t even pretend to be Dua Lipa or Mr Beast like the cringe-worthy competition, or let an orc attack a wizard?)

Also no leaked API key misuse fraud in ChatGPT, or unpaid bills.

And we have a plugin “store” among many other words that could have been chosen…

The wrong move was making every paid juvenile ASCII art nonsense default to GPT-4, which had to be lobotomized to keep up.

Great thoughts.

Agreed. It was strange when function calling came out exclusively for ChatGPT. I would’ve thought that they wanted to see how developers use a somewhat malleable (API) service rather than how customers use an interface.

Not only that, it’s a massive job maintaining an app store. Seriously. Why did they want to increase their workload on such trivial tasks?

If there’s no roadmap, I will need to assume one. Silence only works for big companies that have proven philosophies and track records. Silence does NOT work for small companies.

Then they released it for the API way too late. I, and most likely many others, had already built our own versions of function calling that are not only more adaptable but also don’t just throw tokens at the problem.
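For what it’s worth, a homemade version doesn’t need much: ask the model to reply with JSON only, then parse and dispatch yourself. A minimal sketch — the tool, prompt, and reply below are all invented for illustration, and the model reply is simulated rather than fetched from the API:

```python
import json

# Hypothetical local "tool" a homemade function-calling layer might expose.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# Instruction you would prepend to the prompt so the model emits JSON only.
FUNCTION_PROMPT = (
    "If a function is needed, reply with JSON only:\n"
    '{"name": "<function>", "arguments": {...}}\n'
    "Available: get_weather(city)"
)

def dispatch(model_reply: str) -> str:
    """Parse the model's JSON reply and invoke the matching local function."""
    call = json.loads(model_reply)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Simulate a model reply instead of calling the API:
result = dispatch('{"name": "get_weather", "arguments": {"city": "Oslo"}}')
print(result)  # → Sunny in Oslo
```

The adaptability the post mentions comes from owning `FUNCTION_PROMPT` and `dispatch` yourself: you can change the schema, retry on malformed JSON, or swap models freely.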

Another great point. A burn I share myself.

It’s a fatal mistake. They want to create the gold, lease the land, and also sell the tools. It’s not only greedy, but it’s completely unreasonable for such a small company.

No idea. That’s the problem. Custom Instructions seems to me like a band-aid solution to a greater problem. This, and the function calling feature, both seem to be “throw more tokens at it”.

Other LLMs purposed for niche tasks are creeping up. If we can also communicate between these different LLMs, why bother using ChatGPT? For example, things like TypeChat could mean that we can use the GPT API along with other, smaller LLMs to create powerful pipelines pretty quickly.


That’s interesting; I might have to do a little research on those guys to see how they navigate. But yeah, I think you are right: I don’t think there is an end to how many features they will add and, by proxy, how much customization they will take away.

That is a really good point, actually, but if they never intended to build a developer-centric platform and only intended to build a product, then it was at best misleading to open up an API for people. Microsoft opened themselves up as a platform for people to develop on, but when they decided it was time to make sure no other web browsers could work on their OS, they got hit with United States v. Microsoft Corp.

They’ve shown it was a platform, and then they removed the ability for developers to make their own systems based on it. Luckily there are alternatives, and GPT-4 is likely just a mixture of experts with no real magic behind it anyway, but still, it put a lot of people in the hole. Especially clients I had helped, who now have to redo everything from scratch due to the removal of text completions.

Well, I’d say if the power company starts saying “you can’t develop products like that, but we can”, then it’s probably a monopoly.

I was unaware that it wasn’t available everywhere. But I’ll say it is basically the chat completions API where you load the system message. There is also an information section where you can load common facts about yourself or your situation. My guess is that they ran into the same problem I did: rewriting everything about what you are trying to do before every prompt. This is also the feature that makes me think they will integrate personal or custom vector DB connections at some point. The solution is for the AI to remember what you are doing, and that is the direction I am headed.
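If it helps illustrate the parallel: rolling your own “custom instructions” against the chat completions API amounts to persisting a profile string once and prepending it as the system message on every call. A sketch, where the profile text and function names are my own inventions, not anything OpenAI ships:

```python
# Stored once, reused on every request — the DIY analogue of
# ChatGPT's "Custom Instructions" box. Contents are example text.
PROFILE = (
    "About me: Python developer, building a retrieval-augmented chatbot.\n"
    "Style: concise answers, code over prose."
)

def build_messages(user_prompt, history=None):
    """Assemble a chat-completions messages list with the profile up front."""
    messages = [{"role": "system", "content": PROFILE}]
    messages += history or []
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages("How should I cache embeddings?")
# msgs[0] is the system message; msgs[-1] is the fresh user prompt.
```

The payoff is exactly what the post describes: you stop retyping your situation before every prompt, because the profile rides along automatically.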

Yeah, I participate in the Semantic Kernel (SK) office hours meetings. They developed their SDK almost entirely around the text completion abilities. Even LangChain put significant effort into prompt template support, because literally everyone knows that is how you are going to make custom agents with a frame of mind and point of view. For some reason OpenAI is trying to kill that idea in the wild as quickly as possible. The SK folks said they don’t even use the OpenAI API anymore for their planning and templating, and their company literally paid $10 billion to own almost half of it xD.
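For anyone who hasn’t used those SDKs, the prompt-template idea they formalize boils down to something like this sketch — the slot names and persona are invented examples, not SK or LangChain APIs:

```python
# A bare-bones prompt template: fixed frame of mind and point of view,
# with named slots filled per request. Slot names are illustrative only.
TEMPLATE = (
    "You are {persona}. Stay in character.\n"
    "Task: {task}\n"
    "Input: {user_input}\n"
    "Answer:"
)

def render(persona, task, user_input):
    """Fill the template's slots to produce the final prompt string."""
    return TEMPLATE.format(persona=persona, task=task, user_input=user_input)

prompt = render(
    persona="a patient code reviewer",
    task="review the diff for bugs",
    user_input="def f(): pass",
)
```

With text completions you could hand a rendered template like this straight to the model; that reusability is what made templates the natural way to build agents with a consistent persona.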

It’s going to neuter the ability to do any research with the OpenAI API, which is the only reason I even have an account. So I will probably cancel my API access pretty soon and stand up my own Llama-based mixture-of-experts system, which will require a lot of effort, but I might be able to offer it as an alternative for my clients who have no solutions anymore. Chat completions will be the only endpoint available, and it is far inferior to text completions; it’s kind of useless for anything other than the chatting-novelty hype. Every other useful product will have to be a jailbreak or an overly complicated, highly delicate system prompt, but I suspect that’s the way they want it.


I’ve chosen @RonaldGRuckus’s answer as the solution to close the topic. It doesn’t seem like there are many thoughts about the future or concerns about the current trends.

I’d say that’s the type of maturity in an industry on which one can build a nice little-to-medium-sized company, knowing that you will have to redo everything for your customers every 5-10 years. From the perspective of family, children, loans, etc., this doesn’t sound all that bad.

I have to get back to work, but I’d respectfully disagree. A mature API doesn’t remove abilities that quickly, especially when it has so few to maintain. And you are right, every 5-10 years isn’t bad, but every 5 months is horrible. That’s not a sign of maturity; that’s a sign of them still planning a business strategy.

You go through that level of change in alpha. But… I suspect it has nothing to do with maturity, as they specifically mentioned that only 3% of users used text completions. Meaning it was solely a business decision, which makes more sense considering how reliable text completions were.


You can still get “completion”-like prompts using the chat endpoints; I do it all the time! Just feed everything in as the user message and finish with “\n\nRespond:”, or something similar that makes sense.
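In payload terms, the trick looks something like this sketch (the model name and cue text are just examples of the pattern, not a prescription):

```python
# Completion-style prompting over the chat endpoint: pack the whole
# prompt into one user message and end with a cue so the model
# continues where a text-completion model would have.
def completion_style_payload(prompt_text, model="gpt-3.5-turbo"):
    """Build a chat-completions request body that mimics text completion."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt_text + "\n\nRespond:"}
        ],
    }

payload = completion_style_payload("Translate to French: Good morning.")
# payload["messages"] holds a single user turn ending in the cue.
```

You lose some of the raw control of the old endpoint, but for many templated prompts the model treats the trailing cue as a continuation point much like a completion model did.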

But bringing a system prompt to ChatGPT is interesting; we’ve had it for quite some time in the API. So yes, it does feel like the API and ChatGPT are converging… not sure how I feel about that… but to me it was a huge limitation of ChatGPT, and a big reason why I’ve never used it. Hence why it was “brought to the ChatGPT masses”, I suppose.

But as for roadmaps, a roadmap would make developers less nervous for sure. Nothing kills mojo quicker than the anxiety that all your work will be in vain.


I have been working on the customization of prompting on multiple levels for document-store retrieval augmentation and more. You can see my work on opening up more of the prompting to be like this: customized, and easy to customize on the fly without lots of extra typing each time.

I have output parsing and could easily add a plugin-type method with the vectorstore embeddings, or other embeddings pulled from websites, etc.
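The retrieval core of that kind of setup can be sketched in a few lines; the document names and vectors below are made-up stand-ins for real embeddings from an embeddings endpoint:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vector store: snippet -> embedding (fabricated 3-d vectors).
STORE = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.3],
}

def retrieve(query_vec, k=1):
    """Return the k stored snippets closest to the query embedding."""
    ranked = sorted(STORE, key=lambda doc: cosine(query_vec, STORE[doc]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # → ['refund policy']
```

The retrieved snippets then get spliced into the prompt before the user's question, which is the "retrieval augmentation" half of the pipeline described above.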

I do see it as sort of annoying how ChatGPT gets more open ways to deal with these things than we do through the API. The pre-prompt has allowed me to make ChatGPT work well again for coding; it would not work as well as it did a few months ago until I put in that custom prompt. This is huge, and through my project I have learned all about it: the prompting is so important, plus history condensing.

Do you control your history condensing, and the context and document sentiment retrieval for context? Those are big, and everyone needs them; they definitely need to be in the API too, not just in ChatGPT. I think these mostly all are. Plugins feel like something I need to write up for my own usage, with function insertion and embeddings of text or other objects.


I kinda like this feature. It helped me specify a context for my frequent interactions with ChatGPT, which mostly relate to my development work. Now that ChatGPT knows my development environment and my goals, I hope to see more precise and relevant answers to my prompts.


If your business model is “use company X to deliver a product very like what company X already delivers, except with some usability tweaks,” then you’re setting yourself up for disappointment. This is true of almost any upstream provider: you have to add something significant and unique to the value proposition to have a real business.

Antitrust action only happens when there’s a legitimate monopoly, which isn’t at all the case in AI models. (Apple doesn’t have a monopoly on cell phones, so they can do whatever they want in the App Store, for example.)

Consider Amazon: they sell message queue services, but there are also hosted Kafka service providers that operate on top of EC2 instances. This is not “anti-competitive,” at least in the eyes of current regulation.

I mainly use ChatGPT nowadays strictly for question-and-answer on tech and programming topics. I’ve given it maxed-out instructions to be a loving older brother, but my use case doesn’t lend itself to the model really exploring the instruction.

Regardless I think it’s good, I would love to have my own Jarvis and this is a good first step to getting there.


Hi, my name is Moe. I’m 63 years old. I received my first computer 8 months ago. My question is: can I get an account? If so, will ChatGPT allow me to have an account where I can click on an app on my desktop? And can I get an account without a credit card?

Welcome to the developer forum, Moe.

You can simply visit the ChatGPT website, that’s it! It’s free and you can use it right away; just type your questions or comments into the chat box and click send. Have fun.


I think it’s unreasonable to expect that OpenAI will limit itself to a developer API platform. I think it’s great that they opened the plugin App Store, but I also feel it’s necessary for them to continually add functionality that makes the platform better. I don’t see them ever going after verticals like predicting the stock market; that’s the realm of the dev community. But features that enhance the usability of their chat app, image creation app, etc. are what I expect and look forward to.