500 Error Loading Vector Stores

Unsure if related to this specific issue, but we’ve observed extreme slowness performing Vector Store operations via API for the last few days.

Unlinking files causes them to get stuck “in progress” for extended periods, and some files are failing or taking a long time to upload.
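
If you want to see exactly which files are stuck, here is a rough sketch (untested; the vector store ID is a placeholder) that lists the files attached to one vector store and prints each one’s processing status:

# Sketch only: list the files in one vector store and show their processing status.
# The vector store ID below is a placeholder - substitute your own.
import os
import httpx

HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "assistants=v2",
}
VECTOR_STORE_ID = "vs_REPLACE_ME"

url = f"https://api.openai.com/v1/vector_stores/{VECTOR_STORE_ID}/files"
resp = httpx.get(url, headers=HEADERS, params={"limit": 100}, timeout=30)
resp.raise_for_status()

for f in resp.json().get("data", []):
    # status is typically one of: in_progress, completed, failed, cancelled
    print(f"{f.get('id')}  status={f.get('status')}")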

As of this morning, we’re unable to view our vector stores through the OpenAI portal at all. OpenAI is throwing a 500 error on their own dashboard:

Network tab:

Request URL: https://api.openai.com/v1/vector_stores?limit=10
Request Method: GET
Status Code: 500 Internal Server Error

I tried manually creating a new vector store through the UI, and I’m unable to view it at all. Still getting the 500 error in the OpenAI dev dashboard.

This seems to be a symptom of a broader OpenAI Vector Store issue that’s been ongoing for at least 24 hours.

Anyone else having similar issues?

I went through the full cycle of making a vector store and using it. It seemed to be okay, except that adding the file search tool with a vector store ID to a “prompt” took a while longer to retrieve what should be near-instant.
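
For reference, this is roughly the kind of call I mean; a sketch that assumes the Responses API’s file_search tool, with the model name and vector store ID as placeholders:

# Sketch only: attach a vector store to a request via the file_search tool.
# Assumes the Responses API; model name and vector store ID are placeholders.
import os
import httpx

resp = httpx.post(
    "https://api.openai.com/v1/responses",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",  # placeholder model
        "input": "What do the attached documents say about pricing?",
        "tools": [{"type": "file_search", "vector_store_ids": ["vs_REPLACE_ME"]}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json().get("output", []))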

Still failing to fetch the vector stores on my end:

“load” via the API: this Python script retrieves the listing while stepping the limit query-string parameter up in increments of 5. You can see whether it breaks at one particular size (a potential bad database record), or you can set the stepping lower than that.

# pip install httpx + OPENAI_API_KEY env var
import os
import sys
import httpx

API_KEY = os.getenv("OPENAI_API_KEY")
if not API_KEY:
    print("Set OPENAI_API_KEY environment variable.", file=sys.stderr)
    sys.exit(1)

HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "OpenAI-Beta": "assistants=v2",
}

def main():
    url = "https://api.openai.com/v1/vector_stores"
    consecutive_under = 0

    with httpx.Client(timeout=30) as client:
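        # Step the limit query parameter up in increments of 5 to see where listing breaks.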
        for limit in range(5, 31, 5):
            try:
                resp = client.get(url, headers=HEADERS, params={"limit": limit})
            except Exception as e:
                print(f"limit={limit} request_error={e}")
                break

            ok = resp.status_code == 200
            if not ok:
                print(f"limit={limit} status={resp.status_code} body={resp.text[:200]}")
                break

            data = resp.json()
            items = data.get("data", []) or []
            ids = [it.get("id") for it in items if isinstance(it, dict)]
            count = len(ids)
            has_more = data.get("has_more")

            print(f"limit={limit} status=200 count={count} has_more={has_more} ids={', '.join(ids) if ids else '(none)'}")

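            # Track successive responses that return fewer items than requested.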
            if count < limit:
                consecutive_under += 1
            else:
                consecutive_under = 0

            if consecutive_under > 2:
                print("Terminating: more than two successive under-limit results.")
                break

if __name__ == "__main__":
    main()

Hitting the same error with your script:

Updated it to start with limit 1:

~/developer/openai-bug-testing➔ source venv/bin/activate && python test_vector_stores.py
limit=1 status=200 count=1 has_more=True ids=(MY_VECTOR_STORE_ID)
limit=2 status=500 body={
  "error": {
    "message": "The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seein
~/developer/openai-bug-testing➔ 

The only conclusion is that it is down again: another partial vector store outage. More reinforcement learning, for those who need it, to get off this platform.

An edge case would be that this is organization-related rather than tied to your edge hosting location: the database query behind the listing may be hitting a corrupt record among your organization’s vector stores. The last thing worth trying on your own is to add the query parameter ?order=asc, or to retrieve an individual known vector store ID by API, solely to give OpenAI a “what to fix”.
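
Something along these lines (untested sketch; the known vector store ID is a placeholder):

# Sketch only: list with ?order=asc and retrieve one known vector store directly.
# The vector store ID is a placeholder - substitute one you have on record.
import os
import httpx

HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "assistants=v2",
}

with httpx.Client(timeout=30) as client:
    # Same listing endpoint, but sorted oldest-first.
    listing = client.get(
        "https://api.openai.com/v1/vector_stores",
        headers=HEADERS,
        params={"limit": 10, "order": "asc"},
    )
    print("list:", listing.status_code, listing.text[:200])

    # Retrieve a single known vector store by ID.
    one = client.get(
        "https://api.openai.com/v1/vector_stores/vs_REPLACE_ME",
        headers=HEADERS,
    )
    print("retrieve:", one.status_code, one.text[:200])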

The script seemed to fail at vector store 2; it gave you one of them. If you have a record of which one that would be (your second-most-recently created vector store, since the default listing order is newest first), you could start sending DELETEs.
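
A minimal sketch of that delete call (the vector store ID is a placeholder for the one you suspect):

# Sketch only: delete a vector store by ID.
import os
import httpx

resp = httpx.delete(
    "https://api.openai.com/v1/vector_stores/vs_REPLACE_ME",  # placeholder ID
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "assistants=v2",
    },
    timeout=30,
)
print(resp.status_code, resp.text[:200])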

Changed it to asc and it didn’t hit the 500 error until much further down:

~/developer/openai-bug-testing➔ source venv/bin/activate && python test_vector_stores.py
limit=1 status=200 count=0 has_more=True
limit=2 status=200 count=0 has_more=True
limit=3 status=200 count=0 has_more=True
limit=4 status=200 count=0 has_more=True
limit=5 status=200 count=0 has_more=True
limit=6 status=200 count=1 has_more=True
limit=7 status=200 count=2 has_more=True
limit=8 status=200 count=3 has_more=True
limit=9 status=200 count=0 has_more=True
limit=10 status=200 count=5 has_more=True
limit=11 status=200 count=6 has_more=True
limit=12 status=200 count=7 has_more=True
limit=13 status=200 count=8 has_more=True
limit=14 status=200 count=9 has_more=True
limit=15 status=200 count=10 has_more=True
limit=16 status=200 count=11 has_more=True
limit=17 status=200 count=12 has_more=True
limit=18 status=500 body={
  "error": {
    "message": "The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID req_c50ce1574ee3153bf5c452557ec1c5e7 in your email.)",
    "type": "server_error",
    "param": null,
    "code": null
  }
}

However, if those counts are accurate (you request 10 and get 5 results, and then a limit of 9 returns 0), it is still a messed-up API database, either your org’s or the platform’s. You can at least see whether that pattern of return counts is replicable or happenstance.

This will take a repair on OpenAI’s side.

You can see whether you can live with self-managing the IDs: creating a vector store, logging its ID yourself, retrieving it directly, and ensuring you have a container you can use. You already have to live with not being able to list response IDs, threads, conversations, or stored prompts over the API, after all.
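
A rough sketch of that kind of self-management (untested; the local ledger file name is just an example):

# Sketch only: create a vector store, record its ID locally, then retrieve it by that ID
# instead of relying on the list endpoint. The ledger filename is an example.
import os
import json
import httpx

HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "assistants=v2",
}
LEDGER = "vector_store_ids.json"  # local record of IDs you created

with httpx.Client(timeout=30) as client:
    # Create a store and capture its ID.
    created = client.post(
        "https://api.openai.com/v1/vector_stores",
        headers=HEADERS,
        json={"name": "my-app-store"},
    )
    created.raise_for_status()
    vs_id = created.json()["id"]

    # Append the ID to a local ledger so you never depend on listing.
    ids = []
    if os.path.exists(LEDGER):
        with open(LEDGER) as fh:
            ids = json.load(fh)
    ids.append(vs_id)
    with open(LEDGER, "w") as fh:
        json.dump(ids, fh)

    # Later: retrieve directly by ID, bypassing the broken list call.
    fetched = client.get(
        f"https://api.openai.com/v1/vector_stores/{vs_id}",
        headers=HEADERS,
    )
    print(fetched.status_code, fetched.json().get("status"))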

I’m having the same error. When I try to list my vector stores it gives me a 500 error, and I also can’t see my vector stores on the dashboard.
