Reducing filler, fluff, and meta in responses?

I’m using gpt-3.5-turbo-1106 and the chat.completions.create endpoint to write short programming how-to’s. It’s doing an amazing job for the most part.

I realize that writing quality is somewhat subjective. With that in mind, I’d like to eliminate this kind of content from responses.

EDIT: Here’s a final result; you can see how the header and footer don’t really make sense in context. Concatenating strings | The Forkful Programming Cookbook

“fluff” conclusions

(Present in nearly every response)

These examples demonstrate different ways to convert a date into a string using Bash. You can choose the method that best suits your requirements.

By using these methods, you can effectively concatenate strings in Bash for various scripting and programming tasks.

By following these examples, you can easily obtain the current date in your Elixir programs.

Meta: references to the prompt

(Occasionally present)

I hope this meets your requirements. If you need any further help, feel free to ask!

Sure, I can help with that. Here is the article:

Title: Getting the Current Date in Elixir


I believe that gpt-4-* might do better. But I’d (of course) like to stay with the less expensive gpt-3.5-* if I can. Especially since 2021 data is fine for this use case.

I also haven’t tried the JSON-response format. Would it enable more specific requests that could eliminate the fluff?

You can probably just put something like this in the prompt: “Provide only the code and no explanatory text.”
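A minimal sketch of what that could look like as a system message. The exact wording and the payload shape are illustrative (this just builds the dict you’d pass to `client.chat.completions.create(**payload)`; the actual API call is left out):

```python
def build_request(task: str) -> dict:
    """Build a chat.completions.create payload with a no-fluff instruction.

    The system-prompt wording here is just one guess at what might work;
    you may need to iterate on it.
    """
    return {
        "model": "gpt-3.5-turbo-1106",
        "messages": [
            {
                "role": "system",
                "content": (
                    "You write short programming how-to articles. "
                    "Output only the article body: no greetings, no closing "
                    "summary, and no references to these instructions."
                ),
            },
            {"role": "user", "content": task},
        ],
    }

payload = build_request("Concatenating strings in Bash")
```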

Doesn’t answer your question, but here’s an opinion:

This is purely subjective, but I like what you call meta fluff, especially for generation.

I like to try to elicit a maximally positive reaction to the prompt, allow the model to waffle a bit before getting to the meat of it, and then extract the actual product through other means (e.g., asking the model to include delimiters, or using a weaker model to transcribe the content).
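The delimiter approach might look something like this. The delimiter strings are hypothetical; you’d use whatever markers you asked the model to emit:

```python
import re

def extract_between(text: str,
                    start: str = "<<<ARTICLE>>>",
                    end: str = "<<<END>>>") -> str:
    """Pull the delimited product out of a chatty response.

    Falls back to the whole response if the delimiters are missing,
    since the model won't always comply.
    """
    match = re.search(re.escape(start) + r"(.*?)" + re.escape(end),
                      text, re.DOTALL)
    return match.group(1).strip() if match else text.strip()

clean = extract_between(
    "Sure! <<<ARTICLE>>>Body here<<<END>>> Hope this helps!"
)
```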

Papers about Emotional Tuning

[2307.11760] Large Language Models Understand and Can be Enhanced by Emotional Stimuli
[2312.11111] The Good, The Bad, and Why: Unveiling Emotions in Generative AI

This and similar work is coming out of the same circle of authors, but anecdotal experience seems to validate it.


Thanks, that’s very interesting.

So basically, you do a two-step generation process. Here’s an example of a page I’ve got with a meta header and footer that really don’t work well:

Yeah! You already got the HR in there. If that’s a consistent pattern, just trim off the front and the back and you’re good, right?
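That trim could be a one-liner. This sketch assumes the body sits between two `---` horizontal rules, as on the linked page; as noted below, that pattern isn’t guaranteed to be stable:

```python
def trim_meta(article: str) -> str:
    """Keep only the middle section when the model wraps the body
    in '---' horizontal rules; otherwise return the text unchanged.
    """
    parts = article.split("\n---\n")
    return parts[1].strip() if len(parts) >= 3 else article.strip()

body = trim_meta("Sure, here is the article:\n---\nBody.\n---\nHope this helps!")
```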

That’s true, but it changes it up :stuck_out_tongue: It doesn’t always format it like that. Like here: Printing debug output | The Forkful Programming Cookbook

So I’m going to try asking for a JSON response with the parts of the article broken out—intro, examples, conclusion. And maybe get more predictable responses.
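Assembling the article from such a JSON response could look like this. The keys `intro`, `examples`, and `conclusion` are the hypothetical parts mentioned above (with JSON mode you’d also pass `response_format={"type": "json_object"}` and mention JSON in the prompt):

```python
import json

def assemble_article(raw_json: str) -> str:
    """Join the parts of a JSON-mode response into one article,
    skipping any part the model left empty or omitted.
    """
    parts = json.loads(raw_json)
    return "\n\n".join(
        parts[key]
        for key in ("intro", "examples", "conclusion")
        if parts.get(key)
    )

article = assemble_article('{"intro": "Intro.", "examples": "Code.", "conclusion": ""}')
```

One upside of this shape: if the conclusion is always fluff, you can simply drop that key before assembling.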

Mmh. You might benefit from better prompting - perhaps you could instruct the model on how you want your articles structured.

This isn’t super easy, though; sometimes it takes a while to get it right, and you might still have outliers.


The issue is likely that you are using the chat completions endpoint and chat models like to chat :slight_smile:

You can take a look at gpt-3.5-turbo-instruct. That model is specifically tuned for executing instructions.
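Note that the instruct model uses the legacy completions endpoint (`client.completions.create`, a single `prompt` string) rather than chat messages. A sketch of the payload, with an illustrative prompt:

```python
def build_instruct_request(task: str) -> dict:
    """Build a legacy completions payload for gpt-3.5-turbo-instruct.

    The prompt wording and max_tokens value are illustrative;
    pass the result as client.completions.create(**payload).
    """
    return {
        "model": "gpt-3.5-turbo-instruct",
        "prompt": (
            "Write a short programming how-to article. "
            "Do not add greetings or a closing summary.\n\n"
            f"Task: {task}\n\nArticle:"
        ),
        "max_tokens": 800,
    }

payload = build_instruct_request("Getting the current date in Elixir")
```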


Thanks! Yes, I just learned about the purpose of the instruct models. This seems to make perfect sense for my use case: The model `text-ada-001` has been deprecated - #5 by _j


You could also send the first response back to the model with a prompt that requests it simply summarize the facts or strip out non-essential statements. A second pass might further improve the quality of the result as well.
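A sketch of what that cleanup pass could look like. The model name and system-prompt wording are illustrative; this only builds the payload for the second `chat.completions.create` call:

```python
def build_cleanup_request(first_draft: str) -> dict:
    """Second pass: ask the model to strip the fluff from an earlier draft.

    The instruction wording is one guess at what might work; you would
    tune it against your own outputs.
    """
    return {
        "model": "gpt-3.5-turbo-1106",
        "messages": [
            {
                "role": "system",
                "content": (
                    "Remove greetings, closing summaries, and any sentence "
                    "that refers to the request itself. Return the remaining "
                    "text unchanged."
                ),
            },
            {"role": "user", "content": first_draft},
        ],
    }

payload = build_cleanup_request("Sure, I can help with that. Here is the article: ...")
```

The trade-off is a second round of tokens per article, so it only pays off if prompting alone can’t get you there.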