API returns an error for a prompt while the public site does exactly what it is supposed to do

Hi,

I am trying to send a list of terms to the API via Python, but I get a strange error. The prompt I am sending gives exactly the expected result when used in the public ChatGPT, but it fails with a "prompt" error when sent via the API.

The code is:

import openai
import pandas as pd

language = 'English'
data = {'BusinessTerm': ['Hotelroom']}

# Create DataFrame
df = pd.DataFrame(data)

# Define the function to generate definitions
def generate_definition(term):
    try:
        prompt = (f"Can you give me a definition for the following term: '{term}'. "
                  f"Please be concise and formal, no longer than 15 words.")
        print("Prompt:", prompt)
        response = openai.Completion.create(
            engine="davinci",
            prompt=prompt,
            max_tokens=100,
            n=1,
            stop=None
        )
        input_tokens = len(response.choices[0].prompt.split())
        output_tokens = len(response.choices[0].text.split())
        #print(f"Input tokens used: {input_tokens}, Output tokens used: {output_tokens}")
        return response.choices[0].text.strip()
    except Exception as e:
        print(f"An error occurred while generating definition for '{term}': {e}")
        return None

df['definitions'] = df['BusinessTerm'].apply(generate_definition)

The result is:

Prompt: Can you give me a definition for the following term: 'Hotelroom'. Please be concise and formal, no longer than 15 words.

An error occurred while generating definition for 'Hotelroom': prompt

As you can see, the printed prompt is not very complicated, and when this prompt is sent to the public OpenAI ChatGPT it gives a correct answer. What can help?

I tried various prompting techniques (elaborate ones, very specific ones), but it all comes down to the same error.

Thanks for your thoughts!

I changed the code; in particular, using another model and increasing the number of tokens worked for me. So, this post can be closed.
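For anyone landing here later, a minimal sketch of the kind of change described above (pre-1.0 `openai` library). The model name `gpt-3.5-turbo-instruct` and the `max_tokens` value are assumptions for illustration; the post does not say which model or limit was actually used. Note that input/output token counts come from the `usage` block of the response, not from attributes on the individual choice:

```python
def build_prompt(term):
    # Prompt construction kept in its own function so it can be inspected
    # and tested without calling the API.
    return (f"Can you give me a definition for the following term: '{term}'. "
            f"Please be concise and formal, no longer than 15 words.")

def generate_definition(term):
    import openai  # pre-1.0 openai library assumed; expects OPENAI_API_KEY
    try:
        response = openai.Completion.create(
            model="gpt-3.5-turbo-instruct",  # assumed replacement model
            prompt=build_prompt(term),
            max_tokens=200,                  # increased token budget
            n=1,
        )
        # Token counts are reported in the response's usage block.
        usage = response["usage"]
        print(f"Input tokens: {usage['prompt_tokens']}, "
              f"output tokens: {usage['completion_tokens']}")
        return response.choices[0].text.strip()
    except Exception as e:
        print(f"An error occurred while generating definition for '{term}': {e}")
        return None
```

The same `df['BusinessTerm'].apply(generate_definition)` call from the original code works unchanged with this version.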