Too many tokens - 1,794 tokens for 60 words - help me understand

I am working on building my own chatbot on top of my own data. I already have it working, but I was surprised by how many tokens are being counted. Here is a real example:

Prompt: “y qué es una estructura de precio?”

Completion: “Una estructura de precio se refiere a una horquilla de ofertas de compra y ofertas de venta que hay en un activo financiero, que condiciona la formación de precios de los mercados financieros. Los precios se determinan por la oferta y demanda de un activo, y por los conflictos entre compradores y vendedores.”

Here you can see how OpenAI counted those tokens:

19:10 (local time: 15 May 2023, 21:10)
text-davinci, 2 requests
1,673 prompt + 112 completion = 1,785 tokens

19:10 (local time: 15 May 2023, 21:10)
text-embedding-ada-002-v2, 1 request
9 prompt + 0 completion = 9 tokens

During those minutes ONLY the query shown at the beginning was made.
I am keeping a chat history so the bot can follow the conversation. Here is the code:

@bot.message_handler(func=lambda msg: True)
def echo_all(message):
    global qa, chat_history
    try:
        # Ask the chain, passing the chat history accumulated so far
        result = qa({"question": message.text, "chat_history": chat_history})
        # Store (question, answer) as plain strings, not the Telegram Message object
        chat_history = [(message.text, result["answer"])]
        bot.reply_to(message, result["answer"])
    except Exception:
        bot.reply_to(message, "Actívame pulsando /start.")

I am only interacting with OpenAI to fill the variable "result", yet that produced 2 requests to davinci and 1 request to embedding ada. Can someone explain how this accounting works?

Thanks!
Ramon.


I would suggest taking a look at the message you get back from the completion endpoint. There is a section of the response that gives the usage for the request. The prompt tokens are what you sent and the completion tokens are what you got back in the response. That should be accurate.

"usage": {
  "prompt_tokens": 180,
  "completion_tokens": 41,
  "total_tokens": 221
},
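
If you call the completion endpoint directly with the openai Python library, that usage block is right there on the response object. A minimal sketch (the model and prompt below are just placeholders):

import openai

# Direct call to the completions endpoint; every response carries a "usage" section
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="y qué es una estructura de precio?",
    max_tokens=256,
)
print(response["usage"])                  # prompt_tokens, completion_tokens, total_tokens
print(response["usage"]["total_tokens"])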


In the response I only see ‘question’, ‘chat_history’ and ‘answer’. Nothing related to tokens.

Are you calling the endpoint v1/chat/completions?

I am using the openai library with LangChain, in Python.

Have you raised this issue on their repository yet? The details matter.

This might be the issue. I have also been using LangChain, and it tends to consume a lot of tokens that aren't necessarily shown in the output. You can see all of those tokens if you set the LangChain parameter verbose=True :hugs:
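
Another way to see exactly what the chain sends is to wrap the call in LangChain's OpenAI callback, which tallies every LLM call made inside the block. A minimal sketch, reusing qa and chat_history from the original post:

from langchain.callbacks import get_openai_callback

# Every OpenAI LLM call the chain makes inside this block (question condensing, answering, ...) is counted
with get_openai_callback() as cb:
    result = qa({"question": "y qué es una estructura de precio?", "chat_history": chat_history})

print(cb.prompt_tokens, cb.completion_tokens, cb.total_tokens)
print(cb.total_cost)  # estimated cost in USD for those calls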


I have the same issue. I asked two questions and it cost me $0.26. I am using /v1/chat/completions, and my code is in JavaScript.