Why does the gpt-3.5 model fail to produce 2,000-word output?

The gpt-3.5-turbo model gives me at most 750–950 tokens of output. How should I prompt to get longer output? I prompt: "The blog post must be at least 2,000 words long."

$result = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => $promtIm],
    ],
    'max_tokens' => 2900,
]);

Can you share your system/user/assistant prompt(s)?

Are you just saying "Give me a 2,000-word blog post on Chickens" or similar?


$promtIm = "Write best informational blog post about {$title} in complete html format without introduction, faq & conclusion section and Must Ensure Full article must be html format so give space and heading by html tag. The blog post must be at least 2,000 words long and must be following these instructions:";

Yeah, that’s not gonna cut it.

You need to do it in steps.

First create an outline. Then create sections based on the outline.

More control and better output.

Don’t think of the LLM as a “magic content button” but rather think of the LLM as a “force multiplier” to speed up existing best practices.
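If it helps, the step-wise flow might look like this with the openai-php client (a sketch only; the heading count, prompts, and parsing are assumptions, not a tested recipe):

```php
<?php
// Sketch of the outline-then-sections approach. Assumes the same
// openai-php $client and $title as in the snippet above.

// Step 1: ask only for an outline — a short, easy completion.
$outline = $client->chat()->create([
    'model'    => 'gpt-3.5-turbo',
    'messages' => [[
        'role'    => 'user',
        'content' => "List 6 section headings for an informational blog post about {$title}. One heading per line, no numbering.",
    ]],
]);
$headings = array_filter(array_map('trim',
    explode("\n", $outline->choices[0]->message->content)));

// Step 2: expand each heading into its own section in a separate call.
$sections = [];
foreach ($headings as $heading) {
    $section = $client->chat()->create([
        'model'    => 'gpt-3.5-turbo',
        'messages' => [[
            'role'    => 'user',
            'content' => "Write the \"{$heading}\" section of a blog post about {$title} as HTML: an <h2> heading followed by 3-4 <p> paragraphs.",
        ]],
    ]);
    $sections[] = $section->choices[0]->message->content;
}

// Six ~300-word sections add up past 2,000 words without any single
// call needing an unusually long completion.
$article = implode("\n\n", $sections);
```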

I tried both, but $client->chat()->create() failed to provide long output, whereas $client->completions()->create() easily produced more tokens.

Right. It's your prompt, not the model.

Do you understand what I'm saying by "break it down into steps"? Make the outline first, then generate each section of the outline…

I did what you said; same output. The highest token count was 853 with your outline-based prompt.

Chat-based creation gives the same response; ChatGPT also returns the same 600–800-token results.

Can we see an example of the steps/prompts?

You have encountered models now extensively trained to curtail the length of the output they produce, by design.

You can either use the original gpt-3.5-turbo-0301 model without as much brain damage, or you can perform your task in steps that wouldn't need such lengthy output.

Instructing by word count also isn’t ideal. You should instead instruct by the type of article, number of paragraphs, and number of sections.
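A structure-based prompt along those lines might look like this (a sketch; the title, section count, and paragraph counts are placeholder examples):

```php
<?php
// Sketch: describe the article's structure instead of demanding a
// word count. $title stands in for the topic variable used earlier.
$title = 'raising backyard chickens';

$prompt = "Write an in-depth informational blog post about {$title} in HTML.\n"
    . "Structure: an <h1> title, then 6 <h2> sections, each with 3 to 4 <p> paragraphs.\n"
    . "Do not include an introduction, FAQ, or conclusion section.";
```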

Also don’t spam up the internet with low quality text for your own SEO profit.

Just adding to what has already been mostly emphasized: a useful way to "force" a longer output in one go is to write out the full outline as part of the prompt.

Section 1:
Paragraph 1a: <…>
Paragraph 1b: <…>

Section 2: <Name …>
Paragraph 2a: <…>

Try this with a few variations and you will see that you will be able to get at least well over 1,000 words.
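Concretely, the template above could be embedded in the prompt like this (a sketch; the section and paragraph topics are placeholders you would fill in for your own article):

```php
<?php
// Sketch: write the outline into the prompt itself. $title stands in
// for the topic variable from the earlier snippet.
$title = 'raising backyard chickens';

$outline = <<<OUTLINE
Section 1: What is {$title}?
Paragraph 1a: definition and background
Paragraph 1b: why it matters

Section 2: Getting started
Paragraph 2a: equipment and costs
OUTLINE;

$prompt = "Write a blog post about {$title} in HTML. "
    . "Follow this outline exactly and write every listed paragraph in full:\n\n"
    . $outline;
```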

But as has been stressed: (a) you may get lower-quality results than you would by breaking up the task, and (b) will your readers really read a 2k-word AI-generated blog post?