HELP! Any Tips on Prompts to Make the New 4o and 4 LESS AWFUL?

Try following these 5 points:

  1. Be Specific: Clearly state what you want.
  2. Use Examples: Provide examples of desired responses.
  3. Break Down Queries: Split complex questions into simpler parts.
  4. Iterate: Refine your instructions based on the output.
  5. Use Constraints: Specify word limits or focus areas.

Example:

Initial Request:

  • “Summarize the economic impact of remote work in 200 words, focusing on productivity, company costs, and employee satisfaction.”

Follow-Up Clarification:

  • “Expand on how remote work reduces company costs, especially regarding office space and utilities.”

Refinement:

  • “Compare the productivity of remote work vs. in-office work in tech and finance sectors.”
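If you run this iterate-and-refine loop through the API rather than the ChatGPT UI, each follow-up is simply another user turn in the same conversation, so the model keeps the earlier context. A minimal Python sketch, assuming the official `openai` client; the model name and the three prompts are just the ones from this example, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The three example prompts above, sent as successive user turns so each
# refinement builds on the previous answer.
prompts = [
    "Summarize the economic impact of remote work in 200 words, focusing on "
    "productivity, company costs, and employee satisfaction.",
    "Expand on how remote work reduces company costs, especially regarding "
    "office space and utilities.",
    "Compare the productivity of remote work vs. in-office work in tech and "
    "finance sectors.",
]

messages = []
for prompt in prompts:
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(answer, "\n---")
```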

I hope my response helps you!

1 Like

Here’s a slightly contrarian take: we found shorter prompts actually work better. Specifically, TypeScript-like definitions help quite a bit and get higher accuracy/reliability for most models in our test cases.

I think this ends up working because this way of writing prompts is more compressed than JSON Schema.
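For a sense of what “more compressed” means: the same three fields written as formal JSON Schema (an illustrative equivalent I put together, not something from our actual test cases) come out noticeably more verbose than the TypeScript-like version in the example below:

```json
{
  "type": "object",
  "properties": {
    "order_status": {
      "type": "string",
      "enum": ["ORDERED", "SHIPPED", "DELIVERED", "CANCELLED"]
    },
    "tracking_number": { "type": ["string", "null"] },
    "estimated_arrival_date": { "type": ["string", "null"] }
  },
  "required": ["order_status", "tracking_number", "estimated_arrival_date"]
}
```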

Here’s a quick example I did with some chain of thought to extract some data out of an order email:

Given the email below:

###
from: {{email.from_address}}
Email Subject: {{email.subject}}
Email Body: {{email.body}}
###

Answer in JSON using this schema:
{
  order_status: 'ORDERED' or 'SHIPPED' or 'DELIVERED' or 'CANCELLED',
  tracking_number: string or null,
  estimated_arrival_date: string or null,
}

Before you output the JSON, please explain your reasoning step-by-step. Here is an example of how to do this:
'If we think step by step we can see that ...
therefore the output JSON is:
{
   ... the json schema ...
}'

This gave the output:

If we think step by step we can see that:

1. The email subject clearly indicates that the order has shipped: "Your Amazon.com order of 'Wood Square Dowel Rods...' has shipped!". Thus, the `order_status` should be 'SHIPPED'.
2. The tracking link provided in the email body contains a query parameter to track the package: `www.amazon.com/gp/your-account/ship-track?ie=UTF8&orderId123`. However, no explicit tracking number is given, only the order number. Since a specific tracking number is not extracted, the `tracking_number` should be `null`.
3. The email body mentions an expected arrival date: "your package will arrive: Thursday, April 4". We can extract this date as `Thursday, April 4`. For clarity and consistency, we will store the date in a standard format, such as "YYYY-MM-DD". Assuming the current year is 2023, the `estimated_arrival_date` can be represented as '2023-04-04'.

Therefore, the output JSON is:
```json
{
  "order_status": "SHIPPED",
  "tracking_number": null,
  "estimated_arrival_date": "2023-04-04"
}
```

Which we then parsed into:

{
  order_status: "SHIPPED",
  tracking_number: null,
  estimated_arrival_date: "2023-04-04"
}
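In case it’s useful, here’s roughly how that can be wired up end to end: fill the template, call the model, then pull the JSON out of the tail of the chain-of-thought reply. A minimal Python sketch, assuming the official `openai` client; the prompt is abridged from the one above, the function parameters stand in for the {{email.*}} placeholders, and the model name is just illustrative:

```python
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Abridged version of the prompt above; the schema braces are doubled so that
# str.format() leaves them intact.
PROMPT_TEMPLATE = """Given the email below:

###
from: {from_address}
Email Subject: {subject}
Email Body: {body}
###

Answer in JSON using this schema:
{{
  order_status: 'ORDERED' or 'SHIPPED' or 'DELIVERED' or 'CANCELLED',
  tracking_number: string or null,
  estimated_arrival_date: string or null,
}}

Before you output the JSON, please explain your reasoning step-by-step."""


def extract_order_info(from_address: str, subject: str, body: str) -> dict:
    prompt = PROMPT_TEMPLATE.format(from_address=from_address, subject=subject, body=body)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content

    # The reasoning comes first and the JSON object comes last, so take the
    # outermost braces and parse them. A pragmatic heuristic that is fine for
    # this flat schema; a stricter parser could look for the fenced block.
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError(f"No JSON object found in model output:\n{text}")
    return json.loads(text[start:end + 1])
```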
1 Like

Oh wow.

I thought I’d forgotten how to prompt; I never knew I’m not alone in it.

Thanks for your post. I hope OpenAI will do something soon.

2 Likes

Your answer is a textbook example of what’s wrong with the new GPT-4o: the user asks a question, and then GPT-4o answers with something that is completely irrelevant but tries to sell it as if it were relevant. Like you did! Spot on, mate!

1 Like

Yes, I agree! Since April I’ve noticed lower performance in output and, on some occasions, slow response times.

1 Like

Oh wow! That’s crazy. You’d think, with something that technologically advanced, they would have planned for better infrastructure and customer support.

That’s interesting. I guess I’ll have to experiment with different types of prompting.

1 Like

Consider that until the public release of ChatGPT as a commercial service, OpenAI was primarily a research outfit. It’s not all too surprising if you consider that it is likely still a bunch of researchers, not a business as we’re used to thinking of it.
Their customer support is likely outsourced to a third-party call center or something like that.

1 Like

Don’t you think using the verb “Answer” is a bit unclear? It already knows it’s going to answer you, so you haven’t added any information.

I would use the verb “Extract” or “Parse”, as in: “From the content below, extract order_status, tracking_number, and estimated_arrival_date into the following JSON layout”… The LLM will pay a lot of attention to the main verb in your sentence, so make it the most accurate word in the English language.

1 Like

It’s possible! We’ve found empirically that that specific word doesn’t seem to cause any issues. I think this may be because the word “answer” is similar enough to the word “extract”?

Also, I don’t think the word “answer” is implied, as technically the model could reply with a question or a validation of what you’re asking it to do:

e.g. “Answer with “ok” if you understand me”.

Yes, the LLM will always understand “Answer with JSON” as long as you’ve given it a task to perform and it knows what information you want. What I was saying is that “Answer with JSON” is not a description of a task to perform. The LLM is figuring out what you meant because it’s guessing, not because you told it. I’m just saying you get better results with LLMs when you tell them what you want them to do rather than expecting them to guess.

2 Likes

I got my favourite GPT working again by asking ChatGPT to analyse the prompt and tell me why 4o might struggle with it.

I got a long, detailed response but it mostly came down to this:

"It appears that GPT-4o, like other versions of GPT-4, is designed to handle tasks efficiently, which might occasionally result in the model prioritising brevity or skipping steps.

To ensure thorough adherence to the steps, the prompt should emphasise explicit and direct instructions, reducing the cognitive load on the model and minimising the chance of it skipping steps in favour of efficiency."

I then had it rewrite the prompt for me based on the analysis.

Mostly it wrote a more concise prompt. It ditched a lot of the detail that I had originally added specifically to get GPT-4 to follow the instructions properly.

That overhaul helped a lot, the GPT started following the step-by-step instructions again.

But it still wasn’t perfect - I had to test and tweak a lot, and I also had to examine some areas where 4o was insisting on following a particular format, which meant it was giving overwhelming responses (too much information in a single response, in two different lists).

By examining what it was doing, and adding in steps to my prompt to get it to do the same thing, but broken into separate steps, I got good results.

So my main findings were - a shorter, simpler, more concise prompt, and leaning into the behaviour it seemed to want to follow.

Nothing earth shattering there, but so far my GPT is now back to following the step-by-step instructions consistently and giving much more useful output again.

So I figured I’d share, in case that gives anyone else hope that their GPT is salvageable!

3 Likes

That’s spot on; I thought it was only me and that I was crazy.

1 Like

I’ve been facing some brutal difficulties. I hopped to Claude for some stuff, but to be frank I just want ChatGPT to work. All sorts of repeated copy and frustrating responses. Just not an ideal situation by any means. I had GPTs working with liquid-gold knowledge, but they all act a bit funny now. Hoping this gets fixed soon. I’ve been reverting to the human way of writing stuff recently…fun sometimes, but a major pain in the tail most of the time.

2 Likes

Give the bot information to work with: “You are a subject matter expert with 25 years of experience in sales & customer service. You are a renowned professor at a reputable business school. You have been tasked with consulting for a smaller company that is losing customers & doing poorly in sales. Please advise the client on how to improve their business processes, how to improve sales & win/make new, long-term friends. Provide support in managing their people & training, with a focus on customer excellence. Share the value of being a humble, serving leader, instead of being a self-serving one.”

A little something like the above.

instead of “my business is failing, sales are off, customers are not happy, and the owner is a jerk!”

{yes, I know sometimes…we’d love to voice it…just for fun!:sunglasses::yum::smiling_face_with_three_hearts:}
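If you’re hitting the API directly, this sort of persona and task framing usually goes in the system message so it sticks for the whole conversation. A minimal Python sketch, with the persona condensed from the example above; the model name and the sample question are just illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Persona and task framing, condensed from the example prompt above.
SYSTEM_PROMPT = (
    "You are a subject matter expert with 25 years of experience in sales and "
    "customer service, and a renowned professor at a reputable business school. "
    "You are consulting for a smaller company that is losing customers and doing "
    "poorly in sales. Advise them on improving their business processes, winning "
    "long-term customers, and training their people for customer excellence, "
    "emphasising humble, servant leadership over self-serving leadership."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Where should we start to stop losing customers?"},
    ],
)
print(response.choices[0].message.content)
```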

2 Likes

Just adding in a bit of general info: when you do general inquiries, do them in a temporary session. This allows the chat to learn more about your type of general knowledge/prompt style, and you don’t feed the general knowledge & questions into the chat’s database, which contributes to the “dumbing down” of the bot’s intelligence. Which is unfortunately what & how many do: they use 3.5, 4 & 4o to conduct their general & direct inquiries, as well as important/business-related inquiries/prompts, and this lowers the performance of ChatGPT, instead of using temporary chats & deleting that stuff from the history/database. Many such inquiries could/should be handled through Google.

2 Likes

I saw someone recently say on Threads that no matter what they are asking, even if it’s a quick cooking question, they start the prompt with “Act as a _____”. I know this is prompting 101 and seems like it’s from the old days, but I have actually seen a huge difference from including this before anything I say. Just tell it to act as an expert in whatever category you’re asking about, and it’s a lot more consistent.

2 Likes

Model behavior is what I call “wobbly”, and getting it back on target is a skill worth having. We are all still early, and I count it all joy to have the drive to work AI and get it to do tricks for me. The majority of the public won’t. That makes us the future of the skilled workforce :muscle:

1 Like

If you’re looking for tips on making prompts in ChatGPT-4 more effective and less frustrating, I highly recommend trying out Ultra AI. I’ve been using it for around six months, and while there’s a small learning curve, it’s incredibly powerful and unlike any other prompting tool I’ve used.

Ultra AI really shines with features that can address the main challenges of ChatGPT-4, such as:

  1. Multi-Step Prompts: It lets you create seamless, chained prompts that flow logically from one step to the next, solving the issue of constantly having to restart or rephrase your prompts.

  2. Organized Prompt Library: You can save and categorize prompts, making it much easier to handle recurring tasks without needing to start from scratch.

  3. File and YouTube Integration: Ultra AI allows you to upload files or interact with YouTube content directly within ChatGPT, which can really enhance your productivity, especially if you’re working with longer documents or videos.

It’s also worth noting that as OpenAI updates GPT, minor bugs might appear. However, every time I’ve reached out to Ultra AI’s support (three times so far), they’ve been incredibly responsive, resolving every issue quickly and efficiently.

And just to clarify—I’m not affiliated with them in any way, I’m just a fanboy sharing what’s worked for me! If you’re curious, you can find more info here:

Hope this helps smooth out your ChatGPT-4 experience! …of course, I used AI to write this response in an AI forum. Doh!

This is a very good way to look at it. While GPT encourages conversational prompts, it is by nature a program and thus performs better with better parameters.