How to get the API total_tokens with React.js

How do I get the total tokens? This always gives an error.

const openaiResponse = response.choices[0].message;
const tokensUsed = response.choices[0].usage.total_tokens;

totalTokensUsed += tokensUsed;

return NextResponse.json(openaiResponse);

It should be

const tokensUsed = response.usage.total_tokens;

See Reference page:
https://platform.openai.com/docs/api-reference/chat/object

Just pretty print the entire 'response' object and you can be sure what it contains.
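
For example, in the route handler:

console.log(JSON.stringify(response, null, 2));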

It works, but the number is not the same as the usage on the site: console.log gives 50, while the site's usage page shows 72.

code:

let totalTokensUsed = 0;

const response = await openai.chat.completions.create({
	model: "gpt-3.5-turbo",
	messages,
	max_tokens: 1024,
});

	if (!isPro) {
		await incrementApiLimit();
	}
	
	if (response && response.usage) {
		const tokensUsed = response.usage.total_tokens;
		totalTokensUsed += tokensUsed;
		// rest of the code
		console.log("used", totalTokensUsed);
	}
	

	
	return NextResponse.json(response.choices[0].message);
} catch (error) {
	
	console.log("[CONVERSATION_ERROR]", error);
	return new NextResponse("Internal Error", { status: 500 });
}

}

It works now :smiley: used 66
OpenAI API response: {
  id: 'chatcmpl-8DMdLauYgRBfxCzUQOVFZlMhKQxLw',
  object: 'chat.completion',
  created: 1698195959,
  model: 'gpt-3.5-turbo-0613',
  choices: [ { index: 0, message: [Object], finish_reason: 'stop' } ],
  usage: { prompt_tokens: 9, completion_tokens: 57, total_tokens: 66 }
}
used 215
OpenAI API response: {
  id: 'chatcmpl-8DMkwoiGxh5JPWZnTli3hW8XMUR0V',
  object: 'chat.completion',
  created: 1698196430,
  model: 'gpt-3.5-turbo-0613',
  choices: [ { index: 0, message: [Object], finish_reason: 'stop' } ],
  usage: { prompt_tokens: 79, completion_tokens: 70, total_tokens: 149 }
}
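
(Note that 215 is the running total: 66 from the first request plus 149 from the second.)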

Thanks, it works now!


The next level is to stream the output, where you don't get that usage information anyway, and count the tokens yourself.
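
A minimal sketch of what that looks like with the v4 openai Node client, collecting the streamed text so it can be token-counted locally afterwards (the streamed response itself carries no usage field):

const stream = await openai.chat.completions.create({
	model: "gpt-3.5-turbo",
	messages,
	max_tokens: 1024,
	stream: true,
});

// Accumulate the streamed text; count its tokens afterwards (e.g. with tiktoken).
let reply = "";
for await (const chunk of stream) {
	reply += chunk.choices[0]?.delta?.content ?? "";
}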

Do you know whether this counts the tokens before the API answers? I'm very confused. Since I pay for usage, I wouldn't like to pay for answers after my app's users have already received them.
Here is part of the code:

// Send messages to the OpenAI API and get a response

const response = await openai.chat.completions.create({
	model: "gpt-3.5-turbo",
	messages,
	max_tokens: 1024,
});

	if (response && response.usage) {
		const tokensUsed = response.usage.total_tokens;
		console.log("response:", response);
		console.log("Tokens used:", tokensUsed);

		if (isPro) {
			const checkTokensUsed = await checkIncrementPro(tokensUsed);
			if (!checkTokensUsed) {
				return new NextResponse(
					"Pro trial has expired. Please renew your subscription.",
					{ status: 403 }
				);
			} else {
				await incrementApiLimitPro(tokensUsed);
			}
		} else {
			const checkTokensUsed = await checkIncrement(tokensUsed);
			if (!checkTokensUsed) {
				return new NextResponse(
					"Free trial has expired. Please upgrade to pro.",
					{ status: 403 }
				);
			} else {
				await incrementApiLimit(tokensUsed);
			}
		}
	}
	


	return new NextResponse(JSON.stringify(response.choices[0].message), {
		status: 200,
		headers: {
			"Content-Type": "application/json",
		},
	});
} catch (error) {
	
	console.log("[CONVERSATION_ERROR]", error);
	return new NextResponse("Internal Error", { status: 500 });
}

}

I’m having a difficult time understanding your objective. Can you have ChatGPT translate your native language?

It seems like you are denying users the answer after you sent it to the API. Instead, check the criteria first: are they paid up? Are they over their quota? Have they been suspended?

The size of the response generation shouldn’t affect whether the user can get their answer. If they use too much (of whatever you measure), it should affect the next time they use the service.
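
A sketch of that ordering, reusing the helper names from the code above; treating checkIncrement/checkIncrementPro called with 0 as a pure "is the user still within limits?" probe is an assumption about those helpers:

// Check quota BEFORE spending tokens on the API call (assumed helper semantics).
const withinQuota = isPro ? await checkIncrementPro(0) : await checkIncrement(0);
if (!withinQuota) {
	return new NextResponse("Limit reached. Please upgrade or renew.", { status: 403 });
}

const response = await openai.chat.completions.create({
	model: "gpt-3.5-turbo",
	messages,
	max_tokens: 1024,
});

// Record the actual usage AFTER the response arrives.
const tokensUsed = response.usage?.total_tokens ?? 0;
if (isPro) {
	await incrementApiLimitPro(tokensUsed);
} else {
	await incrementApiLimit(tokensUsed);
}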

Sorry for my English. In the application I send the request to the API and get the total token values; these values are added to the database, and then I check whether the maximum limit has been exceeded. There is a pro account and a trial account.
To obtain the total token values, I have to make the API call that generates them. Using tiktoken, is it possible to estimate the values without consuming the API, as a kind of simulator?

Tiktoken can count the AI language encoding tokens of text. It can count and measure the language before you send it to the API, using local code.

Token counting is useful for managing the size of past conversation that you send, for example.
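
For example, with the js-tiktoken package (the package choice is an assumption; any tiktoken port works the same way):

import { encodingForModel } from "js-tiktoken";

// Count tokens locally, without calling the API.
const enc = encodingForModel("gpt-3.5-turbo");

function countTokens(text) {
	return enc.encode(text).length;
}

console.log(countTokens("Hello, how are you?")); // 6 tokens

Keep in mind the chat API adds a few tokens of per-message overhead, so a local count is an estimate of prompt_tokens rather than an exact match.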

While you can measure the user input, you can’t anticipate the length of the AI response output. AI could just say “hi”, or it could write Shakespeare.

Therefore, it is far more practical to tell the user that they are out of tokens or requests BEFORE they waste their time typing a chatbot input.
