Builder doesnt follow instructions?

So im creating an article builder. I have about 10 different parameters in my instructions. It never follows all of them. Sometimes it follows a few, sometimes most, sometimes none of them.

Maybe it will fix one then stop doing a previous parameter no matter how many times ive re-instructed it never executes properly.

This is not because some of my instructions are impossible or incompatible with the other instructions. It’s done all of them at various times.

This is incredibly frustrating, is anyone else experiencing this?

2 Likes

Ten instructions is a lot of instructions to expect the model to follow in a single pass. I would go so far as to say it’s too many—the model (and almost all humans for that matter) cannot spread its attention that much while remaining focused on the overall task.

I would suggest you revise your expectations and work towards guiding your GPT towards iteratively building up to what you want over multiple steps rather than demanding it do it all in one go.

1 Like

So if it had 10 steps how would you break that up with the model to do it in a form of continuity?

GPT 4 tops out at about 5 instructions in my testing but even that depends on the tasks.

Without knowing your exact instructions I can’t say exactly, but my initial thought is that you might benefit from having the model write in a manner more like humans—in drafts.

Start with your most broad instructions, have the model write a draft adhering to those two or three. Then, with each revision, instruct it to modify the draft in such a way to incorporate the other requirements.

Again, without details I can’t give any better, more specific advice.

1 Like

similar to what @elmstedt suggests you need to break your problem apart. The drafts idea is a good one… think of your instructions in layers and ideally group related instructions. It’s going to take probably 3 - 4 model calls but you’ll get more reliable results. I typically try to avoid asking the model to do more than 2 things in a single shot.

I am experiencing the same issue. I had 9 instructions and have arrived it to 9 after so much of testing, changing them more than 30 times. I use this GPT almost everyday for productivity. This was saving me a lot of time. Now all of a sudden it just throws error immediately. I had 3 different bots with three versions of instructions. All 3 are not working now. I can understand your frustration and I’m not happy too.

1 Like

Give headings to your instructions and do lists instead of paragraphs.

I have dozens of instructions in some of my GPTs: things like each endpoint of my API and the params for each and the results expected, plus general and “security” instructions. Of course, a lot of them are already duplicated in the function schema, so not sure they are needed.

They all seem to be followed, but they are all laid out in a well structured way. I think that is the key.

1 Like

I have already given clear headings and my GPTs were working perfectly fine and I have used them more than 300 times. And there were no change in the instructions as well and all of a sudden they are refusing to give the response the way it was working before.

1 Like

I have had the same issue. I have tried the following options:

  1. paragraphs and headers

  2. headers and lists

  3. introducing instructions in itterations.

There is a point where the GPT decides to wipe out earlier instructions and replace them with vague general instructions you would get from google or bing.

At some point the GPT jsut wipes out key details of the instructions and it is like I am reading the abstract from a junior high textbook

This of course destroys the whole operation.

I am struggling a little with more complex prompting structures, it seems like you have nailed it. Would you have a generic structure you could post in here for us to review?