Minor change in prompt causes no text to be returned (Python)

Hi, I'm a beginner programmer trying to tinker and understand things, and I was immediately shut down… The following code works with its current prompt of "Please write a summary of the following text". However, any change to this seems to break it.


import openai

# Authenticate with the OpenAI API
openai.api_key = "API-KEY"
model_engine = "text-davinci-003"

# Define a function to generate summaries using OpenAI's GPT-3
def generate_summary(text):
    prompt = (
        f"Please write a summary of the following text:\n\n{text}\n\nSummary:"
    )
    response = openai.Completion.create(
        engine=model_engine,
        prompt=prompt,
        max_tokens=60,
        temperature=0.7,
        n=1,
        stop="\n",
    )
    if response.choices[0].text:
        return response.choices[0].text.strip()
    else:
        return None

# Test the summarizer on a sample text
print(generate_summary("With its balmy beaches, laid back lifestyles and holiday vibe, the tropical paradise of Bali has much to offer any world weary traveler"))


However, if I change the prompt even slightly, it quickly breaks and returns no text. For example: "elaborate on the following text", "write a poem including words from the following text", or "summarize using the style of old Victorian English".

All of these things seem to be easy for ChatGPT in the browser, but somehow I'm tripping up. Can anyone shed some light?

Increase max_tokens. Your prompt is now much longer, and there is no room left for it to give a response.

Also, (temporarily) remove the stop setting.
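For example, a minimal sketch of the adjusted call, reusing model_engine and prompt from your script (the max_tokens value here is illustrative, not a recommendation):

response = openai.Completion.create(
    engine=model_engine,
    prompt=prompt,
    max_tokens=256,  # illustrative: leave plenty of room for the reply
    temperature=0.7,
    n=1,
    # stop="\n" removed while debugging, so a leading newline can't end
    # the completion immediately
)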


Thanks! That did seem to work. I actually generated this script using ChatGPT, so I'm not entirely sure why it put the stop="\n" in there.


I think the reason is that your poem or elaborated text starts with a newline, which matches your stop="\n" and ends the completion immediately, whereas the output from your previous prompt didn't begin with one.
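A toy illustration of the effect (plain Python, no API call; the completion strings are made up):

def apply_stop(completion, stop):
    # The API returns text up to (not including) the first stop sequence.
    idx = completion.find(stop)
    return completion if idx == -1 else completion[:idx]

# The one-line summary only loses its tail:
apply_stop("Bali is a relaxing tropical destination.\nIt offers...", "\n")
# -> 'Bali is a relaxing tropical destination.'

# A poem typically begins on a new line, so the same stop cuts it to nothing:
apply_stop("\nRoses are red,\nviolets are blue...", "\n")
# -> ''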


Hello! How do you increase the max_tokens?

Hi and welcome to the Developer Forum!

If you simply omit the max_tokens parameter, the system will automatically allocate the maximum possible for the selected model; otherwise, change the value of max_tokens. Remember to leave room for your prompt, as it is included when calculating the total message length.
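For example, a sketch against the legacy Completion API used above (the value 512 is illustrative):

import openai

prompt = "Please write a summary of the following text: ..."  # your prompt here

# Option 1: set max_tokens yourself; prompt tokens + max_tokens must fit
# within the model's context window.
response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=prompt,
    max_tokens=512,  # illustrative value
)

# Option 2: omit max_tokens (but see the correction below about the
# Completion endpoint's small default).
response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=prompt,
)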


text-davinci-003 is a completion model. It will continue writing in the style of what it just received, even finishing a sentence.

If it feels it is done writing, there is no getting more out of it without prompting it in a different way, or setting the temperature high enough that it samples a different token than "endoftext".
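A sketch of what "prompting it in a different way" can look like (the prompt wording and values here are just an example, not a recipe):

import openai

response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=(
        "Elaborate on the following text:\n\n"
        "Bali has much to offer any world weary traveler.\n\n"
        "Elaboration: Bali"  # seed the first word so the model must continue
    ),
    max_tokens=200,
    temperature=0.9,  # higher temperature makes an immediate end-of-text less likely
)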

Playground preset → press "submit"

This is true on the ChatCompletion endpoint, but on the Completion endpoint the default max_tokens is 16:

Hello there! I am ChatI, an AI language assistant designed to help

(a little less is shown here because this is an -instruct model using a chat container format)
