I paid $5, ran my first request using example code (provided by ChatGPT, actually), and checked my balance: it was $4.20, then $3.60.
I panicked and didn't run any more requests, but when I checked 10 minutes later my balance was zero.
What’s going on?
I’m not inclined to try again!
Hi,
We need your help to understand what happened.
Could you please share the code, including the specific endpoint you called and describe the data that was sent?
It seems you might be sending a large number of tokens (e.g., lengthy documents) with each request repeatedly via the Assistants API.
These pages may also help provide some initial insights:
https://openai.com/api/pricing/
https://platform.openai.com/tokenizer
The good news is that this isn't supposed to happen. Most likely there's a straightforward way to bring your costs down.
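For example, you can estimate how many tokens a request will contain before sending it. Here's a rough sketch using the tiktoken library (the per-token rate below is just an illustrative figure; check the pricing page above for actual numbers):

import tiktoken

# Tokenizer for the model you're calling
enc = tiktoken.encoding_for_model("gpt-4")

prompt = "Paste the text you plan to send here."
n_tokens = len(enc.encode(prompt))

# Illustrative input rate of $0.03 per 1K tokens -- check the pricing page
estimated_cost = n_tokens / 1000 * 0.03
print(f"{n_tokens} tokens, roughly ${estimated_cost:.4f} for the input alone")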
Thanks for your quick response.
This was my first attempt, in Python.
The first request I ran never finished loading because of the while True loop.
I'm wondering if that drained my account immediately and the UI just took a few minutes to catch up.
import os
from langchain import OpenAI

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "API Key Should never be included in the code"

from langchain.chat_models import ChatOpenAI

# Initialize the language model (adjust model name as needed)
llm = ChatOpenAI(model="gpt-4", temperature=0.7)

# Set up the conversational agent
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# Initialize memory to store conversation history
memory = ConversationBufferMemory()

# Create the conversation chain
conversation_chain = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True  # Enable verbose mode for debugging/logging
)

# Define a chat function
def chat_with_user(input_text):
    # Get response from the conversation chain
    response = conversation_chain.predict(input=input_text)
    return response

print("Chatbot: Hello! Ask me anything.")

while True:
    user_input = input("You: ")
    if user_input.lower() in ["exit", "quit"]:
        print("Chatbot: Goodbye!")
        break
    response = chat_with_user(user_input)
    print("Chatbot:", response)

# Memory management
from langchain.memory import ConversationBufferWindowMemory

# Initialize memory with a window of the last 5 interactions
memory = ConversationBufferWindowMemory(k=5)
Thanks, I revoked it.
Was I right about the while loop?
Did it run chat_with_user(user_input) 99 times until the responses used up all my tokens?
It shouldn't have, considering you have an input() call that blocks the loop until you type something.
Using LangChain is guaranteed to churn through a lot of tokens, although the way you're using it seems fine. For learning, I'd recommend avoiding abstractions like LangChain and just managing things like the conversation history yourself.
How you burned through $5, no idea.
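If it helps, this is roughly what managing it yourself looks like with the plain OpenAI Python SDK (assuming the v1 openai package; the model name and max_tokens cap are just example choices):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The conversation history is just a list of messages you control
messages = [{"role": "system", "content": "You are a helpful assistant."}]

print("Chatbot: Hello! Ask me anything.")
while True:
    user_input = input("You: ")
    if user_input.lower() in ["exit", "quit"]:
        print("Chatbot: Goodbye!")
        break

    messages.append({"role": "user", "content": user_input})

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # a cheaper model while you experiment
        messages=messages,
        max_tokens=300,         # hard cap on output length per reply
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})

    print("Chatbot:", reply)
    print("Tokens this call:", response.usage.total_tokens)

That last print shows the token count the API reports for each call, so you can see exactly where your credit is going.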
Lessons learned:
Don't use LangChain unless you have a specific reason to and know what it's doing under the hood (it can act as an autonomous agent making multiple calls);
Don't simply use "gpt-4" unless you want to pay for the highest-quality model with broad general intelligence (a cheaper alternative is sketched below).
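For reference, applying that second lesson to the original snippet is a one-line change (assuming the same LangChain imports; the model name and cap are example values, not recommendations):

from langchain.chat_models import ChatOpenAI

# Cheaper model plus a cap on response length
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7, max_tokens=256)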