How can I refine my custom GPT’s instructions to improve natural flow?

I’ve built a custom GPT that processes Excel files and generates leadership-ready status summaries for each sales opportunity.

It’s not working well. I want the model to read comment history and output concise summaries like this:

“The customer is open to doing an onsite survey and the Fastenal rep is very motivated. Kevin K will coordinate with Mike to lock survey dates, and James aims to confirm early next week.”

What I’m getting is:

“Early notes indicate kolton Poe (KADS) flagged a former On-Site account that lost business to Motion, but Motion is likely on their way out. It’s a high-consumption customer once spending $15K on the product in six months currently buying the other product and XYZ through Amazon at $3.90/bottle and $7.50/wipes. Kolton is pushing them toward dilution to highlight savings, storage, and ease of use, and expects approval for a site survey today (10/1/25).. Recent activity shows need to schedule survey, Kevin to get with Mike on a few dates, James will communicate back and lock down.. Current status, the customer is open to us coming onsite. The Distribution rep is very motivated. James will find dates today that our sales rep is available for a site survey. Aiming for early next week for a site survey.. James will coordinate the site survey timing.”

What are the best practices for refining instructions like these so the GPT:

  • Synthesizes multiple comment rows into a single, natural paragraph,

  • Avoids over-templating, and

  • Maintains consistent, leadership-ready tone across dozens of rows?

Any insights on prompt patterns, temperature settings, or system-instruction tweaks would be greatly appreciated!


Welcome to the community, @John_Grubbs!

Have you tried a one-shot or two-shot prompt (i.e., including examples in your instructions)?

Maybe 2 or 3 sample comment rows, followed by the exact summary you want back.

Can you share your custom instructions/prompt?
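If you end up calling the model through the API (instead of, or alongside, the GPT builder), a minimal one-shot sketch might look like the following. The model name, example comment rows, and temperature value are placeholders and assumptions, not a known-good configuration:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical comment rows standing in for one opportunity's history (placeholder data).
EXAMPLE_COMMENTS = """\
10/01: Customer open to an onsite survey; distribution rep very motivated.
10/01: Kevin to get with Mike on a few survey dates.
10/02: James will communicate back and lock down timing, aiming for early next week."""

# The style of summary you want the model to imitate.
EXAMPLE_SUMMARY = (
    "The customer is open to an onsite survey and the rep is very motivated. "
    "Kevin will coordinate with Mike to lock survey dates, and James aims to "
    "confirm early next week."
)

SYSTEM_PROMPT = (
    "You write leadership-ready status summaries for sales opportunities. "
    "Synthesize all comment rows into one short, natural paragraph. "
    "Do not copy comment text verbatim, do not list prices or dates unless essential, "
    "and do not use headings or bullet points."
)

def summarize(comment_rows: str) -> str:
    """Summarize one opportunity's comment history using a one-shot example."""
    response = client.chat.completions.create(
        model="gpt-4o",      # substitute whatever model backs your GPT
        temperature=0.3,     # a lower temperature tends to keep tone consistent across rows
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            # One-shot example: sample comments, then the summary you want.
            {"role": "user", "content": EXAMPLE_COMMENTS},
            {"role": "assistant", "content": EXAMPLE_SUMMARY},
            # The real rows to summarize.
            {"role": "user", "content": comment_rows},
        ],
    )
    return response.choices[0].message.content
```

The same idea carries over to a custom GPT’s instructions: paste 2–3 sample comment rows followed by the exact paragraph you’d want back, and tell it to match that style rather than describing the style abstractly.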


Agreed - that approach has helped me a lot. I was getting extremely verbose explanations from the LLM until I gave it a sample input along with the type of narrative I was after. After that, the output was much more in line with what I wanted.
