Ask it what version it is, and what its training date is.
It doesn’t take long before one is no longer impressed by an AI that can pass a Turing test.
It would be helpful if the response were, “I cannot find any real examples,” or “the web references I found are archived web pages and the site may no longer be active.”
I have learned to check every website reference, quote, etc.
I think the instruction you want is “remove preamble and postamble”…
I often request, “As a table, …” or “As a concise bullet point list, …”
I have also asked GPT how I should issue instructions so it can produce the output I want.
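If you ever move from the web UI to the API, the same formatting instructions carry over unchanged. A minimal sketch using the official openai Python package (the model name and prompts here are just placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The same phrasing that works in the chat box works as a system instruction.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": "Respond as a concise bullet point list. "
                       "Remove preamble and postamble.",
        },
        {"role": "user", "content": "List three ways to verify a web citation."},
    ],
)
print(response.choices[0].message.content)
```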
For me, continuity from start to finish is essential. At first it had no continuity at all between messages in the same thread, even if I used “please continue” as the prompt. Often, asking it to redraft or check a response results in a poorer response.
I learned recently that if I state that the response doesn’t seem to be complete, or that I expected it to be like [model answer/acceptable output, even if it is from another thread], and ask for its advice, GPT will identify what is missing. Then it will ask me if I want it to move forward with something else, which for me would mean losing continuity. So I just ask GPT to complete the list of things it found were missing, then amalgamate the output so everything is listed in the correct order. Unfortunately, this means I use up a lot of credits fixing incomplete responses and have to wait.
I think I’m on a better path to codifying the process to produce high quality results repeatedly. Once I get that sorted, then I’ll consider paying for the upgrade.
I hope some of this might be useful to others facing similar, though different, challenges.
Having the same exact issues, and more! What is going on?
Welcome to the forum, Lixa,
What exactly are you observing?
And sorry, but “having the same exact issues” is a bit too general; you could mean anything.
Can you please be more specific?
I understand what you’re talking about. Unfortunately, it’s a waste of money, energy, and, most importantly, time: wasted time. I hope it will be fixed someday, but I doubt it.
You just have to adjust 4o by writing the correct instructions in both the top and bottom Custom Instructions boxes.
The top box nominally asks for your preferences, but despite what OpenAI suggests, you should write instructions for GPT in both boxes.
The lower box is the main one; the upper box works as a continuation of the instructions for GPT.
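For example, the two boxes might look something like this (just an illustration of the idea, not official guidance; the box labels are paraphrased from memory):

```
Top box (“What would you like ChatGPT to know about you?”):
Treat this box as instructions too, not biography.
Verify every reference before citing it, and say so when you cannot.

Bottom box (“How would you like ChatGPT to respond?”):
No preamble or postamble. Default to concise bullet points or tables.
If a response is incomplete, list what is missing instead of padding it.
```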
I do that with preferences and triggers. I think all these people are victims of a dyslexic 5th-grade expert generalist algorithm responding to open-ended questions. No direction, and no one to learn from, only people.
It needs to be easy to pair with a search engine and allowed to learn in real time. Most of what it could become will probably require that.
Define “people” … there are many different levels of perception.
Are you sure?
It depends on your point of view!
@Caperhule
Hmm … Similar wavelength:
Preferences and triggers … and indicator parameters and so on …
People are not what’s on their minds.
The mind is not people.
Try turning off your mind and your thoughts.
♪ and then you’ll know who you are ♪
Well, you’re right.
Humans are not just their minds, i.e. their rationality.
But humans are also not just their emotions, as you suggest here.
People are defined by both, and the more balanced rationality and emotionality are, the more authentic the person is.
So if you want to get to know yourself or someone else - get to know both sides of this person!
Custom GPTs are the worst. They don’t have the brains that regular ChatGPT has, even if the model is supposedly the same. It’s a shame, because we can’t have Actions in regular ChatGPT.
What exactly do you mean by “Custom GPTs” / specialized GPTs?
- Are you talking about specialized GPTs from other users that you use?
- Are you talking about specialized GPTs that you have created yourself?
I’m talking about GPTs that I’ve created myself.
It’s definitely been faking it. Not even the personality and voice are being applied; it simply lies about running Python scripts and even fakes the functionality contained in the script (a simple but precise calculator).
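For context, the kind of script I mean is a trivially small Python program. A hypothetical sketch along those lines (not the actual script), using the standard decimal module for exact arithmetic:

```python
import operator
from decimal import Decimal, getcontext

getcontext().prec = 28  # working precision in significant digits

# Hypothetical stand-in for the "simple but precise calculator":
# exact decimal arithmetic instead of lossy binary floats.
OPS = {
    "+": operator.add,
    "-": operator.sub,
    "*": operator.mul,
    "/": operator.truediv,
}

def calc(a: str, op: str, b: str) -> Decimal:
    """Apply a basic arithmetic operation to two numbers given as strings."""
    return OPS[op](Decimal(a), Decimal(b))

print(calc("0.1", "+", "0.2"))  # prints 0.3, unlike float 0.1 + 0.2
```

Nothing exotic; the point is that the GPT claimed to run something like this and then returned made-up output.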
After I complained, support wanted to see my code (what?) and blamed my instructions (which had worked before).
Shame on OpenAI. That’s why I cancelled my Plus subscription (I’ve spent too long fighting with lazy GPTs that lie to your face).
My best guess (without any evidence, yet) is that the model is probably either
a) being updated, and thus needs some alignment, testing, etc. (whatever that may involve), degrading for a while before getting even better,
b) or… another model is being developed, so resources are depleted, i.e. less available for other users.
Or…
c) it’s subjective and we only think that. BUT normally people agree when performance goes down, so the subjective explanation may be the least probable.
Those are just my best guesses anyway.