Experiencing Decreased Performance with ChatGPT-4

> Doesn’t follow instructions

It is not an instruct model. ChatGPT-4 is tuned for chat, so it won’t always follow instructions with the literal precision you’d get from an instruct-tuned model.

> picked up its own sentences to sample

I’m not really sure of the context for this complaint, as I don’t know what the original prompt was, but often when you ask it to do something it cannot do, it will just provide you examples of what doing that thing would look like. This is not new behavior; it’s been that way from the jump.

> doesn’t understand its output

It does not have the capacity to reference its own output while it is still generating that output. This isn’t something that can be accomplished (at least not reliably) in a single prompt. If you want it to reflect, have it do so after it has completed its response.
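
For example, here’s a minimal sketch of that two-pass approach using the OpenAI Python SDK. The model name, prompts, and variable names are illustrative, not from the original thread:

```python
# Minimal sketch: "reflect after completion" as two separate API calls.
# Prompts and model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pass 1: let the model produce its full response first.
draft = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize the attached report."}],
).choices[0].message.content

# Pass 2: feed the completed output back in and ask for a critique/revision.
review = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Summarize the attached report."},
        {"role": "assistant", "content": draft},
        {"role": "user", "content": "Review your summary above. List any errors or omissions, then revise it."},
    ],
).choices[0].message.content

print(review)
```

Because the draft is finished before the second call, the model can actually read what it wrote rather than trying to critique text it hasn’t generated yet.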

> and just hallucinating its way out.

Yep, that’s what tends to happen when you try to force a square peg into a round hole.

If you push the model too far outside its wheelhouse, it is bound to go wonky. This is not new, and it should be unsurprising to anyone experienced in working with LLMs.

I’m sorry, I’m just not seeing the basis for a real complaint here.
