How to ask GPT-4 not to embellish outputs

The GPT-4 models generate very embellished outputs, with adjectives that are too ornate for my application. How can I ask the model to use simpler wording? I have already prompted the model to generate "simple out put and use simple adjectives " but it does not work.

Have you tried few-shot prompting and included a few examples of the style/nature of output you’d like the model to produce?
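A minimal sketch of what that could look like with the Chat Completions message format. The example Q/A pairs and the system wording here are assumptions for illustration; substitute samples from your own application. The actual API call is left as a comment so the snippet stays self-contained.

```python
def build_few_shot_messages(question: str) -> list[dict]:
    """Sandwich the real question between example Q/A pairs that
    demonstrate the plain style you want the model to imitate."""
    # Hypothetical style examples -- replace with ones from your domain.
    examples = [
        ("Describe a sunset.", "The sun went down. The sky turned orange."),
        ("Describe the ocean.", "The ocean is big and blue. Waves move on it."),
    ]
    messages = [{
        "role": "system",
        "content": "Answer in plain, simple language. Avoid fancy adjectives.",
    }]
    for q, a in examples:
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})
    return messages

messages = build_few_shot_messages("Describe a forest.")
# Then send them with the OpenAI client, e.g.:
# client.chat.completions.create(model="gpt-4", messages=messages)
```

The idea is that the assistant turns in the examples carry more weight than an abstract instruction like "use simple adjectives," because the model imitates the demonstrated register.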

You can ask it to talk like a 4th grader.



The best way to handle its outputs is to concisely describe how you would like them framed. For instance, if you didn't want this reply to go on and on, you should have said so up front in a pre-prompt, something along the lines of "please keep your answer short and concise." Also, be careful when using adjectives in your prompts: clearly define what each adjective is doing within the context of the sentence. If you state something vague, the model can misinterpret it and even do the contrary of what you intended.
From my experience on this forum so far, almost 99% of the issues people report with prompting and the responses they get come down to the same thing: they are not actually stating, suggesting, or even hinting at the result they want. Given the nature of a model like GPT-4, it's easy to get lazy and assume it will infer the context and do it for you. It often can, but if that's what you want, you should state it explicitly, for example in your custom implementation of your API integration.
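One way to state that context explicitly in an API integration is to bake the framing into a fixed system prompt that every request carries, rather than hoping the model infers it. This is a minimal sketch; the app context and the exact wording are assumptions to adapt, and sending the payload with your client is left as a comment.

```python
# Hypothetical system prompt -- the "weather app" context and phrasing
# are placeholders; tune them for your own application.
SYSTEM_PROMPT = (
    "You are a helper for a weather app. "
    "Keep every answer short and concise. "
    "State the result the user asked for directly; do not embellish."
)

def make_request(user_text: str) -> dict:
    """Build a Chat Completions-style payload with the fixed system
    prompt prepended, so the style constraint rides on every call."""
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_text},
        ],
    }

payload = make_request("Will it rain tomorrow in Paris?")
# e.g. client.chat.completions.create(**payload)
```

Because the constraint lives in the integration rather than in each user message, every caller gets the concise framing for free.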