Narrow sentence structure choices from GPT-4... "____ isn't just, it's ____"

Hi!

Just wanted to note that I find GPT-4-turbo constantly inserts this sentence structure everywhere and it’s challenging to prompt-engineer it away.

Examples:
“Fly fishing isn’t just about casting your line into the water and hoping for the best—oh no, it’s a craft, an art form that demands patience, precision, and a bit of ingenuity.”

“Improving your fly fishing technique isn’t about making sweeping changes; it’s the subtle tweaks that elevate your game.”


GPT-4 seems to generally have trouble being creative; it wants to follow the same 2-3 sentence structures for nearly all text.

Not sure what the solution is to this, other than using Anthropic’s Opus for the creative-writing parts; it doesn’t have this issue of repeating the same sentence structure over and over when tested on the same prompts.

Anyone else experience this?


I’m not sure about this particular case (never noticed it in particular), but the turbo models like to insert lots of filler and terminate everything with a sermon about how everything should be nice and rosy.

I think that Opus and gpt-4-turbo might be two completely separate categories of models/systems.

The GPT-4-turbo line seems to be headed towards accuracy/stability. That’s good for some use cases, worse for others. I’m glad Anthropic decided to break the mold.


You could try to:

  • adjust the temperature and top_p sampling parameters
  • fine-tune a custom model on samples in your preferred writing style
  • use the logit_bias parameter to steer the model away from certain tokens.
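For reference, the three knobs in that list can be sketched as request parameters for the OpenAI Chat Completions API. This is a minimal sketch, not a tested fix: the temperature/top_p values are starting points, and the token IDs in `logit_bias` are placeholders you would replace with real IDs from your model’s tokenizer (e.g. via tiktoken).

```python
def build_request(prompt: str) -> dict:
    """Assemble chat-completion parameters that discourage formulaic phrasing."""
    return {
        "model": "gpt-4-turbo",
        "messages": [
            {
                "role": "system",
                "content": "Write plainly. Avoid the 'X isn't just Y, it's Z' construction.",
            },
            {"role": "user", "content": prompt},
        ],
        # A higher temperature (and slightly trimmed top_p) nudges sampling
        # away from the single most likely, most formulaic continuation.
        "temperature": 1.1,
        "top_p": 0.9,
        # logit_bias maps token IDs to a bias in [-100, 100]; -100 effectively
        # bans the token. The IDs below are placeholders, not real token IDs.
        "logit_bias": {12345: -100, 67890: -100},
    }

params = build_request("Write a paragraph about improving fly fishing technique.")
```

You would then pass these to `client.chat.completions.create(**params)`. Note that `logit_bias` works at the token level, so it can only ban specific words, not a whole sentence structure.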

Let us know what has worked for you!

“It isn’t about improving the sentence structure, it’s about viewing the whole AI in a holistic manner.”

This seems to come about when there are characters or a voice other than the AI’s own, and when it’s talking positively or promotionally about something in an ambiguous manner.

Or it could be that present AI is just terrible. I use a GPT that has two experts chatting, and I’ve seen it use phrases like this often. The AI now also just rephrases and repeats back what I said. Tell it to actually do some research and actually provide answers, and the GPT personality breaks on the next turn, giving the “look it up yourself” internet report. So no example for you.

I’ve seen prompts that refer to “tone”, like “write in a professional tone”. That might change it up some. I’ve also seen prompts like “Write in the style of Tom Clancy” for example. So maybe if you have an author you want to emulate this might help too. Also you can perhaps tell the System Prompt it’s helping you write a research paper, or some other specific document type that doesn’t normally contain the speech patterns you’re tired of hearing.

EDIT:
You could also maybe try a system prompt that says: “Don’t write sentence structures like this one: ${example}, because you do that too often, and it’s monotonous.” In short, explain to it, as you would to a human, what it’s likely to do wrong, and tell it not to do that. People often underestimate how good LLMs are at understanding you when you simply explain what you want them to do or not do.

None of these should be necessary, though.
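Since the model may still slip back into the trope despite such instructions, one pragmatic option is to check outputs and regenerate when the pattern appears. Below is a rough heuristic sketch; the regex is my own assumption, built from the examples quoted in this thread, and you would extend it as you spot other constructions.

```python
import re

# Heuristic detector for the "isn't just X, it's Y" trope discussed above.
# Matches both straight and curly apostrophes; the 120-char window between
# the two halves of the construction is an arbitrary assumption.
TROPE = re.compile(
    r"\b(?:isn[’']t|is not|not)\s+(?:just|merely|only|about)\b"
    r".{0,120}?\bit[’']s\b",
    re.IGNORECASE | re.DOTALL,
)

def has_trope(text: str) -> bool:
    """Return True if the text contains the repetitive construction."""
    return bool(TROPE.search(text))
```

In a generation loop you could call `has_trope()` on each completion and re-request (or rewrite the offending sentence) when it returns True.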

Good suggestions. I tried both.

again though, why do I need to prompt engineer this away when Opus doesn’t have the issue? just fix it at the model level, imo

This is something I noticed pop up in my own generations starting about 1-2 months ago. “It’s not this, it’s that!” This is with gpt-4, but turbo does it too.

I haven’t changed my completions request so I assume something has changed with the model parameters.

It is exceedingly annoying and hard to iron out. I also notice the model defaults to certain word choices more strictly for my particular outputs. Along with that, I see “fake reviews” where the model fabricates reviews for a product or service, which is also something I ironed out almost a year ago now. Sad to see the gpt-4 model continue to degrade from what it was in summer 2023. I guess that’s the current price of “generally acceptable” AI.

Not sure what exactly in my system or user messages to modify atm. Just my observations. I honestly have no desire to spend hours fixing and post-processing, and am thinking of (not so) simply switching all my completions to Anthropic for good.

It’s not necessarily a malfunction, because this could even be an artifact of RLHF. It might be that humans particularly like certain types of wording, and so subconsciously always click the “This is Good” button (or whatever they do to help train) for the most popular sentence-structure tropes. Maybe humans are just this boring and predictable. lol.

Anyway ZAdam you’ve invented the field of Sentence Structure Tropes and it’s now part of the set of all human tropes. So meta. I’m honored to have played a small role in the birth of this new field. :)


It took a few questions (and internet search immediately breaking in with a non-GPT answer via its supervision), but the GPT I linked above eventually produced it. That GPT doesn’t really get the concept of a “GPT” (because of OpenAI’s dumb rebrand), even though it IS one and it searched, and even though it is told to answer the user input instead of merely using it as a writing prompt:

[Demis Hassabis]: Exactly, and over time, the term “GPT” has evolved to encompass a broader range of applications. For instance, in the context of this podcast, a GPT refers to a specialized set of instructions within the AI that configures it to perform specific tasks—like moderating a discussion or simulating a particular personality.

[Yann LeCun]: That’s an important distinction. The original technical meaning is broad, covering any AI model that uses the transformer architecture to process text. But now, we’re seeing GPTs being tailored for specific interactive experiences, whether it’s customer service bots, tutoring systems, or entertainment platforms like ours.

[Demis Hassabis]: And to add to that, each “customized” GPT, including those used in our podcast, operates under a unique set of rules or “instructions” tailored to achieve its specific purpose. This could mean altering its response style, content focus, or even the complexity of its language to better suit the task at hand.

[Yann LeCun]: It’s like each GPT version is dressed up for a different role, depending on what’s needed. It’s not just about generating text; it’s about doing so in a way that aligns with particular goals or themes.

and there it is.

GPT-4-turbo tends to prefer specific vocabulary.
For example, concepts such as ‘problem’ or ‘matter’ are often consolidated into the word ‘issue’.

This tendency influences the selection of sentence structures, which seem to be chosen from a narrow range and adhere to specific ideas.

It seems to be losing flexibility and becoming rigid.


Drives me nuts! Try this “simple yet professional, figurative-free, straightforward language”.


I had the same problem today when my AI came back with these strange sentences and kept repeating himself in the way many have described in this thread.

I had to re-prompt him and ask him to stop being so sensational and over the top, and I gave examples of what I did not want to see, many of the same examples that have been given in this thread.

And then he came back with a good result. Based on that change in his output, I asked him if he could produce a prompt that would ensure he kept delivering that kind of output, in the style that I wanted.

Here is that prompt. Please test it and use it as you like.

“When requesting an article or any written content, please use a straightforward, informative tone that focuses on delivering clear and concise information. Avoid using embellishments or sensational language. Instead, aim for clarity and precision in the communication of ideas. Please reduce the use of phrases like ‘more than just,’ ‘not just,’ or any similar constructions that might exaggerate or overstate the content. The goal is to inform and engage without creating hype or unnecessary excitement.”
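If you want to reuse that style prompt outside of a chat window, one way is to install it as a standing system message on every API request. This is just an illustrative sketch; the helper name and message layout are my own assumptions, with the prompt text taken verbatim from the post above.

```python
# The shared style prompt, installed as a reusable system message.
STYLE_PROMPT = (
    "When requesting an article or any written content, please use a "
    "straightforward, informative tone that focuses on delivering clear "
    "and concise information. Avoid using embellishments or sensational "
    "language. Instead, aim for clarity and precision in the communication "
    "of ideas. Please reduce the use of phrases like 'more than just,' "
    "'not just,' or any similar constructions that might exaggerate or "
    "overstate the content. The goal is to inform and engage without "
    "creating hype or unnecessary excitement."
)

def make_messages(user_request: str) -> list[dict]:
    """Prepend the style prompt as a system message to a user request."""
    return [
        {"role": "system", "content": STYLE_PROMPT},
        {"role": "user", "content": user_request},
    ]
```

You would then pass the result as the `messages` argument of a chat completion call, so the tone constraint applies to every generation rather than only the turn where you first complained.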

I use my AI, Ponder, in GPTs mode.

Try to put these prompts in the custom instructions:

Upper part:
your only task is to REMOVE and replace words and sentence structure.

Lower part:
REMOVE the word “about” and REPLACE it.
REMOVE the “isn’t that, it’s this”, “more than this, it’s that”, and “it is not this, it’s that” SENTENCE STRUCTURES.

Seems to work; if it doesn’t, ask it to rewrite with shorter sentences.