BUG: Error sending message Error: ChatGPT error 400:

gpt-3.5-turbo

Do you have more information about this error? That is, about the message sent or the configuration used?

Since I can't reach the OpenAI API endpoint directly, I use a proxy server to forward requests to https://api.openai.com/v1/chat/completions.

In my development environment, the front-end files have been built and the API key has been configured, but when I try to test sending a message, I receive a 400 error.

A 400 generally means the request was malformed: the data sent from the client to the server did not follow the expected format. So your URL is reachable, but something is probably off in the request body or headers.
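
For reference, a minimal well-formed request body for the chat completions endpoint looks roughly like this (a sketch only; the model name and message contents are placeholders):

// Minimal body for POST https://api.openai.com/v1/chat/completions
const requestBody = {
    model: "gpt-3.5-turbo",
    messages: [
        { "role": "system", "content": "You are a helpful assistant." },
        { "role": "user", "content": "Hello!" }
    ]
};
// Send it as JSON with an 'Authorization: Bearer <your API key>' header.
// Anything outside this shape (bad JSON, wrong field types, a missing
// messages array) commonly comes back as a 400.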

There are many types of 400 errors and not all of them are because of malformed requests.

If you search this site, you will see many members posting various 400 errors and they are not limited to malformed API requests.

🙂

Thank you for pointing this out! I thought this would be the most probable issue; I encountered a similar thing when setting up a connector, where I had a ";" where it should have been a ":" 🙂

“I used a proxy with a US server (only port 888 is open; ports 80 and 443 seemed to be blocked) to access https://api.openai.com/v1/chat/completions via the address ip:888/v1/chat/completions. Then I filled in the proxy address in the .env file.”
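
For reference, this is roughly how that proxy base URL from the .env file gets used. The OPENAI_API_BASE variable name and the use of dotenv/axios here are illustrative assumptions, not the exact setup:

// .env (illustrative): OPENAI_API_BASE=http://<proxy-ip>:888
require('dotenv').config();
const axios = require('axios');

const baseURL = process.env.OPENAI_API_BASE || 'https://api.openai.com';

async function chat(userMessage) {
    // Same request shape as the official endpoint; only the host and port differ.
    const response = await axios.post(`${baseURL}/v1/chat/completions`, {
        model: 'gpt-3.5-turbo',
        messages: [{ role: 'user', content: userMessage }]
    }, {
        headers: {
            'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
            'Content-Type': 'application/json'
        }
    });
    return response.data.choices[0].message.content;
}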

Hi! I was struggling with the same issue too, and it took me some time to figure it out.

For common language, let's call the messages that you send in your completion a "session". A session can contain at most the maximum number of tokens the model allows (see the OpenAI models page for details).

For example, with gpt-3.5-turbo-16k, your session is allowed at most 16k tokens. If the session has already used 10,000, you have at most 6,000 left for your next prompt. If you submit a prompt with a max_tokens value that is larger than your remaining tokens (6,000 in this example), you will get the error you received.

The API reports the tokens the session has used under data.usage.total_tokens. Use that to calculate your remaining tokens, and do not send a max_tokens value higher than what remains.
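
As a rough sketch of that bookkeeping (assuming axios and the chat completions endpoint; the 16k limit and variable names are only illustrative):

const axios = require('axios');

const CONTEXT_LIMIT = 16384; // gpt-3.5-turbo-16k context window (illustrative)

async function askWithBudget(messages, tokensUsedSoFar) {
    // Never request more completion tokens than remain in the context window.
    const remaining = CONTEXT_LIMIT - tokensUsedSoFar;
    const response = await axios.post('https://api.openai.com/v1/chat/completions', {
        model: 'gpt-3.5-turbo-16k',
        messages: messages,
        max_tokens: Math.min(1000, remaining) // cap the request at what is left
    }, {
        headers: { 'Authorization': `Bearer ${process.env.OPENAI_API_KEY}` }
    });
    // usage.total_tokens reports prompt + completion tokens for this call;
    // add it to your running total before the next request in the session.
    return response.data;
}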

Hope this helps,
Nir :)

I'm struggling with the same error. Is there something in my code that I may have set up improperly?

This is set up in my AWS Lambda function:

// Importing required modules
global.AbortController = require("abort-controller");
const fs = require('fs');
const axios = require('axios');

// Function to send a message to ChatGPT and return the response
async function sendToChatGPT(userMessage) {
    try {
        const requestData = {
            model: "gpt-3.5-turbo",
            messages: [
                {
                    "role": "system",
                    "content": "You are a helpful assistant that does OCR and image recognition."
                },
                {
                    "role" : "user",
                    "content": userMessage // note the removal of quotes
                }
            ]
        };

        const config = {
            headers: {
                'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`, // use environment variable
                'Content-Type': 'application/json'
            }
        };

        const response = await axios.post('https://api.openai.com/v1/engines/gpt-3.5-turbo/completions', requestData, config);
        return response.data.choices[0].message.content;
    } catch (error) {
        console.error('Error sending message to ChatGPT:', error);
        return 'An error occurred while communicating with ChatGPT.';
    }
}


// Function to validate the access key
async function validateAccessKey(accessKey) {
    try {
        const data = fs.readFileSync('keys.txt', 'utf8');
        const validKeys = data.split('\n').map(key => key.trim());

        return validKeys.includes(accessKey);
    } catch (err) {
        console.error('Error reading access keys:', err);
        return false; // Error occurred during validation
    }
}

// Lambda function handler
exports.handler = async (event) => {
    const headers = {
        'Access-Control-Allow-Origin': 'my-site',
        'Access-Control-Allow-Headers': 'content-type',
        'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
        'Access-Control-Expose-Headers': 'Access-Control-Max-Age',
        'Access-Control-Max-Age': '3600',
        'Access-Control-Allow-Credentials': 'false',
    };

    if (event.httpMethod === 'OPTIONS') {
        return {
            statusCode: 200,
            headers: headers,
            body: JSON.stringify({ message: 'CORS headers set successfully' }),
        };
    }

    const payload = event.body ? JSON.parse(event.body) : {};
    const action = payload.action;

    if (action === 'validateAccessKey') {
        const userAccessKey = payload.accessKey;
        const isValid = await validateAccessKey(userAccessKey);
        return {
            statusCode: isValid ? 200 : 404,
            body: JSON.stringify({ valid: isValid }),
            headers: headers,
        };
    }

    if (action === 'sendToChatGPT') {
        const userMessage = payload.message;
        try {
            const chatGPTResponse = await sendToChatGPT(userMessage);
            return {
                statusCode: 200,
                body: JSON.stringify({ message: chatGPTResponse }),
                headers: headers,
            };
        } catch (err) {
            console.error('Error processing the request:', err);
            return {
                statusCode: 500,
                body: JSON.stringify({ error: 'Failed to process the request' }),
                headers: headers,
            };
        }
    }

    // Default case if no known action is provided
    return {
        statusCode: 400,
        body: JSON.stringify({ error: 'Invalid action' }),
        headers: headers,
    };
};