How to guide GPT towards answers of a certain length?

LLMs have trouble counting the number of words in their responses. When asked for an answer of 500 words, they often provide around 300. When asked for 1,000 words, they often return fewer than 500.

What tips and techniques help guide an LLM toward a longer response?

(I have tried one-shot prompting, providing a single example, but that hasn’t helped.)

Try adjusting temperature or max_tokens.
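For example, here is a minimal sketch using the 0.x-style OpenAI Python library (the model name and values are placeholders). Note that max_tokens is only a ceiling on response length; it does not push the model to write more:

```python
import openai  # 0.x-style client; newer SDK versions use a different interface

openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a 1000-word article about prompt design."}],
    max_tokens=2000,  # a ceiling on response length, not a target the model aims for
    temperature=0.7,  # higher values give more varied output
)

print(response["choices"][0]["message"]["content"])
```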

So how big are the responses you’re after? There are a couple of methods I’ve used in the past, such as chunking the content or just making a document directory for the AI’s API… and they both have their tradeoffs. Chunking can result in very large prompt responses and can cause inaccuracies that you’ll have to train it around, and the process can be slow sometimes, depending on the size and number of responses and how you break them up, but it can also work very well if you do it right.

Making a directory for your AI is a little more accurate, but it can have a slow response time because the AI has to analyze the directory to find the relevant content. Depending on how many files are in the directory and how you break it up, it can be a bit challenging. As with temperature or max tokens, you have to do some math and some trial and error… you know, don’t be discouraged.

I’m working with no examples here because there are so many variations I’ve seen; if you showed an example, it would help.

I would look into the token system, because that determines the number of characters and their value; temperature, meanwhile, affects accuracy and the processing behavior set in the API configuration of the AI you’re using.
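If it helps, here is a minimal sketch of counting tokens with the tiktoken package (assuming it is installed), so you can see how a word target translates into tokens:

```python
import tiktoken

# Pick the tokenizer that matches the model you're calling
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

text = "This is a sample sentence to compare word and token counts."
tokens = enc.encode(text)

# English prose usually runs roughly 1.3 tokens per word
print(f"{len(text.split())} words -> {len(tokens)} tokens")
```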

Hiya,

I’m just guessing, but perhaps the model is just answering your question in the most succinct way possible. Why use 500 words when 300 words will do?

  • One simple way to get it to produce the desired word count is to explain to it why the length is necessary. “Can you please produce a blog article of at least 1000 words using the list of keywords we generated earlier. I need at least a thousand words because I am seeking an article with a medium level of detail for a how-to page.”

  • I stumbled on this article today, GPT Best Practices.

  • Based on that, you can append a standard set of instructions to the query to give it context: “Please write an article of 1000 words. Feel free to take your time to think about your answer. Please provide me with references to further work whenever possible. Otherwise, use examples, quotes, or anything else you feel relevant to succinctly reach our word-count using the keywords we generated for this topic.”

If you wanted to really reach that word count of yours, tell it to write its responses in the style of Ralph Waldo Emerson.
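For what it’s worth, here is a rough sketch of how those appended instructions could be wired into an API call (0.x-style OpenAI client; the keywords are made-up placeholders, not anything from this thread):

```python
import openai  # 0.x-style client

openai.api_key = "YOUR_API_KEY"

# Context and length instructions appended to the query, per the suggestion above
length_instructions = (
    "Please write an article of 1000 words. Feel free to take your time to "
    "think about your answer. Provide references to further work whenever "
    "possible; otherwise, use examples and quotes to reach our word count."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # Hypothetical keywords; substitute the ones generated for your topic
        {"role": "user", "content": length_instructions + "\n\nKeywords: transcendentalism, essays, style"},
    ],
)

print(response["choices"][0]["message"]["content"])
```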

OK… great example. I was working on a subdirectory approach, importing folders and content as reference material, and the AI behind the API will understand it if you put the right code together. I made this real quick as an example; it may need tweaking, but you can make it work:
```python
import json

def import_documents(file_path):
    """Load a list of documents from a JSON file."""
    with open(file_path, 'r') as file:
        documents = json.load(file)
    return documents

def compare_documents(article, documents):
    """Return the documents that contain any chunk of the article."""
    matching_documents = []
    # Split the article into chunks of up to 1000 characters
    article_chunks = [article[i:i+1000] for i in range(0, len(article), 1000)]
    for document in documents:
        for chunk in article_chunks:
            if chunk in document:
                matching_documents.append(document)
                break
    return matching_documents

# Example usage
file_path = 'documents.json'  # Replace with the actual file path
article = "This is the article content."  # Replace with your article content

documents = import_documents(file_path)
matching_docs = compare_documents(article, documents)

# Output the matching documents
for doc in matching_docs:
    print(doc)
```

This is an example of the same thing, but with the ability to use outside resources, and you can just add to it. Dude, you’re not giving any information other than rattling off ideas:

```python
import json

def import_documents(file_path):
    """Load a list of documents from a JSON file."""
    with open(file_path, 'r') as file:
        documents = json.load(file)
    return documents

def compare_documents(content, documents):
    """Return the documents that contain any chunk of the content."""
    matching_documents = []
    # Split the content into chunks of up to 1000 characters
    content_chunks = [content[i:i+1000] for i in range(0, len(content), 1000)]
    for document in documents:
        for chunk in content_chunks:
            if chunk in document:
                matching_documents.append(document)
                break
    return matching_documents

def import_external_document(file_path):
    """Read an external resource as plain text."""
    with open(file_path, 'r') as file:
        document = file.read()
    return document

def analyze_blog_pages(blog_pages, documents):
    """Collect matching documents across blog pages, stopping at the first hit."""
    matching_docs = []
    for page in blog_pages:
        matching_page_docs = compare_documents(page, documents)
        matching_docs.extend(matching_page_docs)
        if len(matching_docs) > 0:
            break
    return matching_docs

# Example usage
file_path = 'documents.json'  # Replace with the actual file path
external_doc_path = 'external_document.txt'  # Replace with the actual external document file path

documents = import_documents(file_path)
external_doc = import_external_document(external_doc_path)

# Example blog pages
blog_pages = [
    "This is the content of the first blog page. It has some information related to the topic.",
    "The second blog page continues the discussion with more details and examples.",
    "On the third blog page, we dive deeper into the subject matter and provide insights from experts.",
]

matching_docs = analyze_blog_pages(blog_pages, documents)
matching_external_docs = compare_documents(external_doc, documents)

# Output the matching documents
print("Matching documents from blog pages:")
for doc in matching_docs:
    print(doc)

print("\nMatching documents from the external document:")
for ext_doc in matching_external_docs:
    print(ext_doc)
```

Summary

Each problem has to be handled differently; they’re not all the same.

Trial and error, dude. It worked when I put my correct information in, but not in all cases; it requires editing and trying different things in different ways. It’s all trial and error. I’m working off the top of my head; you can ask ChatGPT if you don’t know, it’s a very useful tool.

Maybe the problem is actually in the approach.

Why would 1000 words be needed if 500 does the job for a specific task?

If you want longer output, you probably need to enrich your instructions so the LLM has more to talk about.

I usually prompt for an outline first, then ask it to make the outline more comprehensive, add additional paragraphs, etc. Then I instruct it to use the outline to generate the final output.

This increases the word count significantly (and you end up with a richer output instead of an output packed with fluff).
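Here is a rough sketch of that outline-first flow (0.x-style OpenAI client; the topic and prompt wording are placeholders):

```python
import openai  # 0.x-style client

def chat(messages):
    """Send the conversation so far and return the assistant's reply."""
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return response["choices"][0]["message"]["content"]

topic = "guiding an LLM toward longer answers"  # placeholder topic

# Step 1: ask for an outline first
messages = [{"role": "user", "content": f"Write an outline for an article about {topic}."}]
outline = chat(messages)

# Step 2: ask for a more comprehensive outline with additional sections
messages += [
    {"role": "assistant", "content": outline},
    {"role": "user", "content": "Make the outline more comprehensive: add sub-points and additional sections."},
]
outline = chat(messages)

# Step 3: generate the final output from the enriched outline
messages += [
    {"role": "assistant", "content": outline},
    {"role": "user", "content": "Now use this outline to write the full article."},
]
print(chat(messages))
```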
