Chat Completions tab on platform isn't loading

Hello,

I’ve been trying to load logs on https://platform.openai.com/logs?api=chat-completions but it’s not loading for me (no content, just a shimmering loading placeholder).

Could anyone from the OpenAI team please check this?

I can see from the network request:

Request Method: GET, Status Code: 200 OK

But the response is empty. I’ve been using chat-completions heavily lately.

Chat Completions are only logged when you make a request with “store”: true, or on new accounts, where this (unwanted, to some) data persistence is now enabled by default.
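For reference, “store” is just a boolean field in the Chat Completions request body. A minimal sketch of a payload builder that opts in to logging (the helper function and its name are my own illustration; the payload shape follows the public Chat Completions API):

```python
import json

def build_chat_payload(model, messages, store=False):
    """Build a Chat Completions request body.

    Only requests sent with "store": true show up in the
    platform Logs -> Completions tab.
    """
    payload = {"model": model, "messages": messages}
    if store:
        payload["store"] = True  # opt in to log/storage
    return payload

body = build_chat_payload(
    "gpt-4.1",  # model discussed later in this thread
    [{"role": "user", "content": "Hello"}],
    store=True,
)
print(json.dumps(body))
```

If you omit the flag entirely, the request behaves as store: false (unless the org/project default says otherwise), which is the most common reason calls never appear in the Logs tab.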

Here is what it looks like with proper API configuration and no calls being snarfed up:

That boilerplate appears even when storing via the API call parameter is possible.

If you set the entire organization to log for all projects, you get different text:

I took those screenshots after making Chat Completions calls that should have appeared. The API calls are not appearing. It isn’t that nothing is being recorded anywhere, though: check what “Responses” looks like, with its logged calls persisting forever unless you record the IDs and delete them manually.
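On that “delete them manually” point, removal is per-item by ID. A sketch that only builds the delete requests without sending them (the endpoint paths are assumptions based on the public API reference, and the ID is hypothetical):

```python
API_BASE = "https://api.openai.com/v1"

def delete_request(kind, item_id):
    """Return (method, url) for removing one stored item by ID.

    "chat" targets a stored Chat Completion; "response" targets a
    stored Responses API item. Paths assumed from the API reference.
    """
    paths = {
        "chat": f"{API_BASE}/chat/completions/{item_id}",
        "response": f"{API_BASE}/responses/{item_id}",
    }
    return ("DELETE", paths[kind])

method, url = delete_request("response", "resp_abc123")  # hypothetical ID
print(method, url)
```

Since there is no bulk-delete in the dashboard, keeping your own list of stored IDs is the only practical way to clean up later.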

The logs not working, not showing API calls for days on end, and then taking a week to get a “we implemented a change” fix for something that was mysteriously broken has happened multiple times in the past, and, as expected, it is happening again now. That proves you should not trust this feature or invest time in making calls for “evals”, just as nobody used the poor “distillation” feature that was also to be based on logged API calls.

It was working and loading fine before; it just isn’t working any more.

Welcome to the forum!

Besides what’s already been explained, I’m also adding these docs, so you can check whether any of them might help.

Stored Chat Completions API reference:

Migration guide mentioning storage behavior for Responses / Chat Completions:

Data retention / Zero Data Retention controls:

From those docs, I’d check:

  • whether your requests are actually being made with store: true
  • whether API call logging is enabled for the org/project
  • whether Zero Data Retention is enabled, since that would make store behave as false
  • whether the same issue happens in another browser/incognito, in case this is only a Dashboard UI loading issue
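One way to separate an empty-data problem from a Dashboard UI problem is to query the stored-completions list endpoint directly. A sketch that only constructs the URL (the path and the `limit`/`model` query parameter names are assumptions based on the public API reference; sending the request would need your API key):

```python
from urllib.parse import urlencode

API_BASE = "https://api.openai.com/v1"

def list_stored_completions_url(limit=20, model=None):
    """URL for listing stored Chat Completions.

    Hitting the API directly bypasses the Dashboard UI, which helps
    tell "no stored data" apart from "UI stuck on shimmer loading".
    """
    params = {"limit": limit}
    if model:
        params["model"] = model  # e.g. filter to "gpt-4.1"
    return f"{API_BASE}/chat/completions?{urlencode(params)}"

print(list_stored_completions_url(limit=5, model="gpt-4.1"))
```

If this endpoint returns your stored entries but the tab still shimmers, that points at the Dashboard rather than at your storage configuration.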

If stored calls should exist but the Completions tab keeps shimmer-loading or returning empty, then it sounds like something Support/OpenAI staff would need to inspect.

Just adding one more data point here: on my side, the Logs → Completions tab is currently loading and showing stored entries.

One extra simple thing to check in Playground: in the model/settings panel there is a Store logs checkbox.

When I leave Store logs unchecked, the Chat Completions call does not appear under Logs → Completions for me.

When I enable Store logs, the new completion appears there immediately.

So it might be worth checking that toggle too, if you are testing from Playground.

12 hours later, and there is still no sign of the API calls I made with gpt-4.1 specifically so they would be logged.

The service appears broken again, for those who would want it.

I managed to get my calls with gpt-4.1 logged.

That’s so weird. Hopefully someone will take a look at it.

Did you try loading it with thousands of requests/completions? Because I think that’s the problem. It loaded fine when I had fewer than 100 requests; now I have ~5k requests, and it just shows an empty page with shimmer loading.

I’ll need to test with 100 and see if I bump into the same issue.

But have you contacted support? Since there seems to be an issue.

I tested this with 100 stored Chat Completions, and for me the Logs → Completions tab still loaded normally and showed the stored entries.

So at least in my case, 100 stored completions did not reproduce the shimmer-loading/empty page issue.

Of course, that does not rule out a scaling issue around thousands of stored completions, since your case is closer to ~5k requests. But it may suggest the problem appears only at a higher volume or may be account/project/browser-specific.

If it still doesn’t appear for you, I’d suggest contacting the Help Center and opening a support ticket so they can investigate it on the account/project side. From my test, I can’t reproduce it at 100 stored completions, so your case may require them to check the specific project/account behavior.

https://help.openai.com/en