[Solved] Console Log Response Cut Off

async function makeEmailBlast() {
        let prompt = `write an email blast for the following real estate listing\ndetails: ${propertyDetails}`;
        // console.log(prompt);
        const response = await openai.createCompletion({
            model: "text-davinci-003",
            prompt: prompt,
            temperature: 0.5,
            max_tokens: 100,
            top_p: 1,
            frequency_penalty: 0,
            presence_penalty: 0,
        });
        console.log(response.data.choices[0].text);
}


I call this function with the given parameters, but the answer keeps cutting off. At first I thought it was a \n (newline) problem, that somehow a newline made the data cut off, but that wasn't it, so I am clueless about how to fix this. Could it be a parameter-tuning problem?

Try setting the “max_tokens” to 1000
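For reference, that change applied to the request body from the question would look like this (a sketch; `openai` and the actual prompt with `propertyDetails` are assumed to exist as in the original post):

```javascript
// Same request body as the question, with a larger token budget.
const prompt = "write an email blast for the following real estate listing";
const params = {
  model: "text-davinci-003",
  prompt: prompt,
  temperature: 0.5,
  max_tokens: 1000, // raised from 100 so the reply is not cut off mid-sentence
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
};
// const response = await openai.createCompletion(params);
```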


What is the purpose of max_tokens anyway? Is it a character limit?

From ChatGPT (all credit to it), which is very helpful in answering such questions: "max_tokens is a parameter used in the OpenAI API to specify the maximum number of tokens (i.e. individual words or word-like units) that the API should generate in response to a prompt. When you make a request to the API, you can include max_tokens in the request body to control the length of the generated text.
Note that the actual number of tokens returned may be less than max_tokens if the API reaches a stopping condition before the token limit is reached."
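You can also tell the two stopping conditions apart programmatically. A sketch, assuming the v3 Node SDK's `createCompletion` response shape (the `wasTruncated` helper is hypothetical, not from the thread): the API reports `finish_reason === "length"` when `max_tokens` cut the completion off, and `"stop"` when the model finished on its own.

```javascript
// Hypothetical helper: check why a completion ended.
// finish_reason is "length" when max_tokens truncated the output.
function wasTruncated(response) {
  return response.data.choices[0].finish_reason === "length";
}

// Mocked response shape for illustration; no API call is made here.
const mock = { data: { choices: [{ text: "Dear buyer", finish_reason: "length" }] } };
console.log(wasTruncated(mock)); // true -> raise max_tokens or shorten the prompt
```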


Thanks for that answer, ChatGPT.

The problem has been solved. Yes, max_tokens was the cause; increasing it fixed it.


I increased max_tokens to 100 and got a full response, but it started repeating itself.