ChatGPT 4 is very confused about its expanded capabilities

I don’t know how many others are running into this, but I’m pretty regularly getting messages like “However, I can’t access real-time databases or external systems…” even when I explicitly ask it to search. After a bit of arguing, it eventually concedes and performs the search just fine.

It will also often tell me that it can’t provide links or other specifics (like news article titles or somesuch), when just a message or two prior it will have included an excerpt from a titled article and provided a link.

I get that these are likely consequences of kludged-in expanded capabilities, but a “check my abilities before saying I can’t do something” consensus check seems straightforward to implement. I doubt exceedingly competent devs like the OpenAI team have simply overlooked this, so it’s probably a deeper bug that isn’t likely to be resolved prior to a new major model release - but I’d be remiss not to mention it (and also give others a place to +1 instead of opening additional new topics).

So. Anyone else?

(PS - I tried searching for this prior to posting, so if it’s a duplicate, blame the somewhat dismal forum search functionality…)

My ChatGPT 3.5 also thinks it can run Python… it even shows the code-execution UI, right up to the error, which gets placed in an assistant role. They put a new tool into free ChatGPT that confuses the AI.

The underlying problem is pretraining. API developers have dealt with the same issue in what are now basically “ChatGPT” models: they’re tuned on language that earlier ChatGPT versions produced, on top of the supervised learning specifically added to produce the refusals.

The AI can both actually have those capabilities and also need to deny them so it doesn’t fabricate information.

When it does have the capability, you can drop a bit of “how I want you to act” custom instructions into ChatGPT, describing the true capabilities of your GPT-4 mode, so it prefers doing instead of denying.

You can also make it quiet down about “As of my last knowledge update”, giving the replacement “here’s all I know”, and then go after “For the latest and most detailed information” as well. Make it always say “Sure”, never “I’m sorry”. You can be a bot programmer too, reweighting the fine-tune with your prompting.
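For API users, the same trick can be sketched as a system message sent with each request. This is only an illustration, not an official recipe: the exact wording of the instructions, and the `build_messages` helper itself, are assumptions; only the `role`/`content` message shape comes from the Chat Completions API.

```python
# Sketch: front-load a system message that states the model's real tool
# capabilities and preferred phrasing, so it leans toward doing instead
# of denying. The instruction wording here is illustrative, not official.

def build_messages(user_prompt: str) -> list[dict]:
    system = (
        "You are ChatGPT with web browsing enabled. You CAN search the web "
        "and you CAN provide links and article titles. Never claim you lack "
        "these capabilities. Begin answers with 'Sure', never 'I'm sorry'. "
        "If your information may be outdated, say \"here's all I know\" "
        "instead of 'As of my last knowledge update'."
    )
    # Chat Completions expects a list of {"role": ..., "content": ...} dicts.
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Find recent articles about solar storms.")
```

You would then pass `messages` to the chat completions endpoint as usual; the point is simply that the capability statement rides along with every request, counterweighting the refusal tuning.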