Why am I getting Error 429 insufficient quota with unused credits?

I still have API tokens left (I’ve used 115,000 out of 2M), and my credit usage is minimal ($0.05 out of $5), yet the API keeps throwing error 429 (insufficient_quota). It’s my third day with the same problem, and I have not exceeded the RPM/RPD limits.

1 Like

Welcome to the community, @onscreen.

Are the credits/tokens expired, by chance?

If your usage tier is low, you won’t be able to access some of the newer models. Some of the newer models also require verification.

Can you give us more details?

I am currently using the GPT-4o mini model for my project, but I’ve encountered an issue with my API key. The key was purchased only 15 days ago, and my tokens and credits remain active and unexpired. Despite this, the key has stopped functioning as expected.

1 Like

That makes little sense. API use is funded by prepaid credits that are valued in dollars; you aren’t issued “tokens” to spend, since the cost of usage varies by model. Rate limits are per minute and would quickly reset.
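If you want to see which of the two 429s you are actually getting, the SDK exposes the error code. A minimal sketch, assuming the openai Python SDK v1+ and gpt-4o-mini purely as an example model:

```python
# Distinguish "insufficient_quota" (no usable credits) from
# "rate_limit_exceeded" (too many requests/tokens per minute).
# Both arrive as HTTP 429 and raise RateLimitError in the SDK.
from openai import OpenAI, RateLimitError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

try:
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=5,
    )
except RateLimitError as e:
    print(e.code, "-", e.message)
```

An `insufficient_quota` code means the organization has no usable credit balance; a genuine per-minute limit would clear on its own within a minute.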

Check your account balance that would be used for funding API calls:

https://platform.openai.com/settings/organization/billing/overview

A “key” is not “purchased”. It is just an authentication method; you can create many. You have an OpenAI organization with funding.

Despite generating multiple new API keys within the same project and organization, the issue remains unresolved.

It is easy to mistake the “add payment method” $5 authorization “test” for an actual payment. That is just a temporary credit card hold. You must deliberately make a purchase of credits.

After the “start building” workflow, where OpenAI gives you an example API call using a gpt-mini model, a new account does get a small amount of free usage to test that call, which can be confusing.

After following the link I gave above, you can also go to “credit grants” and see the individual line items of credit purchases. Nothing there? Then you never bought any credits, and it’s time to transition to paid use. You can put in something like $5.22 as a new credit purchase to distinguish it in your bank statement from the credit card authorization that will be returned. You might have to add $6+ if you ran your account negative.

The API keys you’ve generated will work when they are backed by money. Good luck!

Hi and welcome to the community!

Reading your posts, I am under the impression that you are not using platform.openai.com.

In particular, these two statements do not sound like an issue with the official platform:

Did you purchase access using another provider/reseller?

1 Like

I’ve bought my own OpenAI key, but I’m getting 429 errors (insufficient_quota) in FastAPI. My local model works fine with the exact same code, so the bottleneck is definitely on the OpenAI side.

It’s not possible to purchase API keys from OpenAI.

One creates an account, purchases credits, and creates their own API keys.
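To rule out anything in your application stack, you can also exercise the key from a bare script first. A quick sketch, assuming the openai Python SDK v1+ and that OPENAI_API_KEY is set in your environment:

```python
# Bare-bones key/credit sanity check, independent of FastAPI.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say OK"}],
    max_tokens=5,
)
print(resp.choices[0].message.content)
# If this also fails with 429 insufficient_quota, the problem is the
# account's billing state, not the calling code.
```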

1 Like

Yes sir, I have purchased credits and created my own API key. Here is an image displaying the tokens used and the credits remaining.

1 Like

Image displaying “Credit Grants”

1 Like

Okay, great!
Can you share your code that’s only returning the 429 error from the API?

This prototype uses a dual-language architecture for handwritten text extraction. The front-end interface is written in C#, which handles image input and hands the images off, while a Python backend uses OpenAI’s models to perform optical character recognition (OCR) and text extraction.

Here is the Python code (I can’t include links in the post):
```python
import os
import base64
from pathlib import Path
from typing import Annotated
from datetime import datetime

from fastapi import FastAPI, UploadFile, File, HTTPException, status
from fastapi.responses import JSONResponse, PlainTextResponse
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from openai import OpenAI
from dotenv import load_dotenv
import uvicorn

load_dotenv()

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY not found in .env file")

client = OpenAI(api_key=api_key)

MODEL = "gpt-4o-mini"
MAX_TOKENS = 1000
TEMPERATURE = 0.15

PROMPT = """You are an expert at reading messy handwritten student answer sheets.
Extract ALL visible handwritten text exactly as written.
Preserve:
- Original spelling, grammar, mistakes
- Line breaks (use \\n)
- Question numbers if visible
Do NOT correct, summarize, explain, or add anything.
Return ONLY the raw extracted text. Nothing else."""

INTERPRETATION_PROMPT = """You are an expert at interpreting and describing student-drawn diagrams, such as flowcharts, mind maps, UML diagrams, scientific sketches, or any visual representations.
Analyze the image semantically:
- Identify the type of diagram (e.g., flowchart, process diagram, entity-relationship, sketch).
- Understand the elements: shapes (boxes, circles, arrows), labels, connections, flow/direction.
- Interpret the meaning: what process, concept, or system it represents.
Then:
1. Generate a concise TITLE for the diagram (1-2 sentences max).
2. Describe in FLOWCHART FORMAT if it fits (e.g., step-by-step with arrows →).
   • Use simple text notation like:
     Start → Step 1: Description → Decision? Yes → Step 2 → End
     No → Alternative path
3. If not a flowchart, describe in PARAGRAPH FORMAT (detailed narrative explaining elements and relationships).
Preserve:
- Original intent and any student errors/misconceptions (note them if obvious).
- Use clear, educational language.
Output EXACTLY in this structure (nothing else):
Title: [Your title here]
Description Type: [Flowchart, Paragraph]
Description here - use \\n for line breaks
"""

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".tiff", ".bmp"}

app = FastAPI(
    title="Handwritten text extractor and Diagram Interpreter",
    description="Extracts text and/or interprets diagrams from handwritten answer sheets",
    version="0.1.2",
)

# Enable CORS for frontend access (adjust origins in production)
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # In production: ["http://localhost:3000", "https://your-frontend.com"]
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)


class TextResponse(BaseModel):
    text: str
    filename: str
    tokens_used: int
    status: str = "success"


class DiagramResponse(BaseModel):
    title: str
    description_type: str
    description: str
    tokens_used: int
    filename: str
    status: str = "success"


class CombinedResponse(BaseModel):
    text: str | None = None
    diagram: DiagramResponse | None = None
    total_tokens: int
    filename: str
    status: str = "success"


def image_to_base64(content: bytes) -> str:
    return base64.b64encode(content).decode("utf-8")


async def call_openai_vision(
    prompt: str,
    image_content: bytes,
    detail: str = "low",
    max_tokens: int = MAX_TOKENS,
) -> tuple[str, int]:
    try:
        if not image_content:
            raise ValueError("Empty image content received")

        b64 = image_to_base64(image_content)

        response = client.chat.completions.create(
            model=MODEL,
            messages=[
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": prompt},
                        {
                            "type": "image_url",
                            "image_url": {"url": f"data:image/jpeg;base64,{b64}", "detail": detail},
                        },
                    ],
                }
            ],
            temperature=TEMPERATURE,
            max_tokens=max_tokens,
        )

        if not response.choices or not response.choices[0].message.content:
            raise ValueError("No valid response content from OpenAI")

        content = response.choices[0].message.content.strip()
        tokens = response.usage.total_tokens if response.usage else 0

        return content, tokens

    except Exception as e:
        # Better error propagation for endpoints
        raise HTTPException(
            status_code=500,
            detail=f"OpenAI Vision API error: {str(e)}",
        )


@app.get("/", summary="API health check")
async def root():
    return {
        "message": "Handwritten OCR + Diagram API is running",
        "docs": "/docs",
        "health": "ok",
    }


@app.get("/ping")
async def ping():
    return {"message": "OCR API is alive 🚀"}


@app.post("/extract-text", response_model=TextResponse)
async def extract_text(
    file: Annotated[UploadFile, File(description="Image of handwritten answer sheet")]
):
    if not file.filename:
        raise HTTPException(400, "No filename provided")

    ext = Path(file.filename).suffix.lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise HTTPException(415, f"Allowed formats: {', '.join(ALLOWED_EXTENSIONS)}")

    content = await file.read()
    if len(content) > 10 * 1024 * 1024:
        raise HTTPException(413, "File too large (max ~10 MB)")

    text, tokens = await call_openai_vision(PROMPT, content, detail="low")

    print(f"[{datetime.now():%Y-%m-%d %H:%M:%S}] TEXT JSON {file.filename} → {tokens:,} tokens")

    return TextResponse(
        text=text,
        filename=file.filename,
        tokens_used=tokens,
        status="success",
    )


@app.post("/extract-text/plain", response_class=PlainTextResponse)
async def extract_text_plain(
    file: Annotated[UploadFile, File(description="Image of handwritten answer sheet")]
):
    if not file.filename:
        raise HTTPException(400, "No filename provided")

    ext = Path(file.filename).suffix.lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise HTTPException(415, f"Allowed formats: {', '.join(ALLOWED_EXTENSIONS)}")

    content = await file.read()
    if len(content) > 10 * 1024 * 1024:
        raise HTTPException(413, "File too large (max ~10 MB)")

    text, tokens = await call_openai_vision(PROMPT, content, detail="low")

    print(f"[{datetime.now():%Y-%m-%d %H:%M:%S}] TEXT PLAIN {file.filename} → {tokens:,} tokens")

    return text  # Returns raw extracted text as plain string


@app.post("/interpret-diagram", response_model=DiagramResponse)
async def interpret_diagram(
    file: Annotated[UploadFile, File(description="Image containing a diagram")]
):
    if not file.filename:
        raise HTTPException(400, "No filename provided")

    ext = Path(file.filename).suffix.lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise HTTPException(415, f"Allowed formats: {', '.join(ALLOWED_EXTENSIONS)}")

    content = await file.read()
    if len(content) > 10 * 1024 * 1024:
        raise HTTPException(413, "File too large (max ~10 MB)")

    raw_response, tokens = await call_openai_vision(
        INTERPRETATION_PROMPT,
        content,
        detail="high",
        max_tokens=1200,
    )

    print(f"[{datetime.now():%Y-%m-%d %H:%M:%S}] DIAGRAM JSON {file.filename} → {tokens:,} tokens")

    # Parse the structured output
    lines = raw_response.splitlines()
    title = ""
    desc_type = ""
    description_parts = []

    for line in lines:
        stripped = line.strip()
        if stripped.startswith("Title:"):
            title = stripped.replace("Title:", "", 1).strip()
        elif stripped.startswith("Description Type:"):
            desc_type = stripped.replace("Description Type:", "", 1).strip()
        elif stripped:
            description_parts.append(stripped)

    description = "\n".join(description_parts).strip()

    if not title or not desc_type:
        raise HTTPException(422, "Model did not follow expected output format")

    return DiagramResponse(
        title=title,
        description_type=desc_type,
        description=description,
        tokens_used=tokens,
        filename=file.filename,
    )


@app.post("/interpret-diagram/plain", response_class=PlainTextResponse)
async def interpret_diagram_plain(
    file: Annotated[UploadFile, File(description="Image containing a diagram")]
):
    if not file.filename:
        raise HTTPException(400, "No filename provided")

    ext = Path(file.filename).suffix.lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise HTTPException(415, f"Allowed formats: {', '.join(ALLOWED_EXTENSIONS)}")

    content = await file.read()
    if len(content) > 10 * 1024 * 1024:
        raise HTTPException(413, "File too large (max ~10 MB)")

    raw_response, tokens = await call_openai_vision(
        INTERPRETATION_PROMPT,
        content,
        detail="high",
        max_tokens=1200,
    )

    print(f"[{datetime.now():%Y-%m-%d %H:%M:%S}] DIAGRAM PLAIN {file.filename} → {tokens:,} tokens")

    # Parse and format as readable plain text
    lines = raw_response.splitlines()
    title = ""
    desc_type = ""
    description_parts = []

    for line in lines:
        stripped = line.strip()
        if stripped.startswith("Title:"):
            title = stripped.replace("Title:", "", 1).strip()
        elif stripped.startswith("Description Type:"):
            desc_type = stripped.replace("Description Type:", "", 1).strip()
        elif stripped:
            description_parts.append(stripped)

    description = "\n".join(description_parts).strip()

    if not title or not desc_type:
        raise HTTPException(422, "Model did not follow expected output format")

    plain_text = f"Title: {title}\n\nDescription Type: {desc_type}\n\n{description}"

    return plain_text


@app.post("/combined", response_model=CombinedResponse)
async def combined_extract_and_interpret(
    file: Annotated[UploadFile, File(description="Image — answer sheet with possible diagram")]
):
    if not file.filename:
        raise HTTPException(400, "No filename provided")

    ext = Path(file.filename).suffix.lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise HTTPException(415, f"Allowed: {', '.join(ALLOWED_EXTENSIONS)}")

    content = await file.read()
    if len(content) > 10 * 1024 * 1024:
        raise HTTPException(413, "File too large (max ~10 MB)")

    text, text_tokens = await call_openai_vision(PROMPT, content, "low")
    diagram_raw, diagram_tokens = await call_openai_vision(
        INTERPRETATION_PROMPT, content, "high", max_tokens=1200
    )

    total_tokens = text_tokens + diagram_tokens

    print(f"[{datetime.now():%Y-%m-%d %H:%M:%S}] COMBINED JSON {file.filename} → {total_tokens:,} tokens")

    # Parse diagram
    lines = diagram_raw.splitlines()
    title = ""
    desc_type = ""
    description_parts = []

    for line in lines:
        stripped = line.strip()
        if stripped.startswith("Title:"):
            title = stripped.replace("Title:", "", 1).strip()
        elif stripped.startswith("Description Type:"):
            desc_type = stripped.replace("Description Type:", "", 1).strip()
        elif stripped:
            description_parts.append(stripped)

    description = "\n".join(description_parts).strip()

    diagram = None
    if title and desc_type:
        diagram = DiagramResponse(
            title=title,
            description_type=desc_type,
            description=description,
            tokens_used=diagram_tokens,
            filename=file.filename,
        )

    return CombinedResponse(
        text=text if text.strip() else None,
        diagram=diagram,
        total_tokens=total_tokens,
        filename=file.filename,
    )


@app.post("/upload")
async def upload_image(file: UploadFile = File(...)):
    print("Received upload request!")               # ← add this line
    print(f"→→→ Received file: {file.filename}")    # optional – keep or remove
    # ... rest of the function (you can leave it empty for now)
    return {"message": "Upload received", "filename": file.filename}


if __name__ == "__main__":
    uvicorn.run(
        "app:app",  # Change to "main:app" if your file is named main.py
        host="0.0.0.0",
        port=8000,
        reload=True,
    )
```
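For completeness, this is roughly how I exercise the /extract-text endpoint from the client side; a sketch assuming the server is running locally on port 8000 and that a sample image named answer1.jpg exists (both names are illustrative):

```python
# Hypothetical local smoke test for the /extract-text endpoint.
import requests

with open("answer1.jpg", "rb") as f:  # illustrative sample file
    resp = requests.post(
        "http://localhost:8000/extract-text",
        files={"file": ("answer1.jpg", f, "image/jpeg")},
    )

print(resp.status_code)
print(resp.json())  # a 500 body carries the propagated OpenAI error detail
```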

I see it now. The issue is elsewhere, not in your code. The most likely cause is that your account is still on the Free Tier, even though you have purchased enough credits to qualify for Tier 1. You can confirm this by checking your limits page.

Your organization’s rate limit tier will be recalculated when you make your next purchase for the minimum amount.

I hope this helps.

1 Like

I’m having this exact same issue, despite my account having its entire monthly budget of $120 available and being on Usage Tier 2.

All of my workflows using make(.)com were working perfectly fine until around the 28th or 29th of January 2026.

I’ve tried creating new API Keys and nothing works.

It got fixed by topping up the account with an extra $20, which moved us to Tier 3. But it still makes no sense, since we had spent none of the $120 monthly budget.
This doesn’t really foster much trust in any system built on OpenAI.