Seeking feedback on our "unofficial OpenAI status dashboard" site

Hi! I'm building an unofficial API status site for devs. The URL is openai-status dot llm-utils dot org (I can't post direct links).

What improvements would be most critical?

Improvements we're currently working on: faster page load times, a gauge design to replace the table so it's simpler to read, and a standard-distribution graph.

What do you like about it, and what would most help improve it?

5 Likes

Welcome to the forum!

Note: Please don't give this post a like (heart); give it to the first post instead, since Clay created the unofficial status site. This post is here because, as a new user, Clay could not post links or images, but as a moderator I can.

4 Likes

By the minute: time to return 7000 GPT-4 tokens.

This is cool.

The official status page doesn't seem to capture non-catastrophic but still elevated error rates or slowness. This unofficial OpenAI status page fixes that.

The official status page is updated manually, I believe.
So when there is an outage, checking it right away won't help, because it requires staff intervention to update.

I could see your service being valuable if it were offered as a paid API, so long as you did global tracking. Programs could ping your service for network health and skip calls unless health was within certain parameters.
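A minimal sketch of what that could look like on the consumer side, assuming a hypothetical `/health` JSON endpoint on the status service (the endpoint path and field names are made up for illustration):

```python
import requests

STATUS_URL = "https://openai-status.llm-utils.org/health"  # hypothetical endpoint

def api_is_healthy(max_error_rate=0.05, max_latency_s=10.0):
    """Ping the (hypothetical) status endpoint and decide whether to proceed."""
    try:
        health = requests.get(STATUS_URL, timeout=5).json()
    except requests.RequestException:
        return False  # status service unreachable: fail closed
    return (health.get("error_rate", 1.0) <= max_error_rate
            and health.get("p50_latency_s", float("inf")) <= max_latency_s)

if api_is_healthy():
    print("API looks healthy, proceeding with calls")
else:
    print("Elevated errors or latency, backing off")
```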

Oh wow. Cool website!

1 Like

It would be nice if it could check the headers of a test ChatGPT plugin daily and note changes, such as header values being added and/or removed.

It would be of benefit for questions such as this one.
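A rough sketch of how such a daily check could work (the plugin URL and snapshot path are placeholders):

```python
import json
import pathlib
import requests

PLUGIN_URL = "https://example.com/.well-known/ai-plugin.json"  # placeholder test plugin
SNAPSHOT = pathlib.Path("headers_snapshot.json")

def check_headers():
    """Fetch the plugin's response headers and report keys added/removed since last run."""
    current = dict(requests.get(PLUGIN_URL, timeout=10).headers)
    previous = json.loads(SNAPSHOT.read_text()) if SNAPSHOT.exists() else {}
    added = sorted(set(current) - set(previous))
    removed = sorted(set(previous) - set(current))
    if added or removed:
        print(f"Headers added: {added}, removed: {removed}")
    SNAPSHOT.write_text(json.dumps(current))

check_headers()  # run this from a daily cron job
```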

1 Like

ChatGPT users, both free and paid, are also in need of your service.

Another suggestion.

With regard to the idea that the models change over time: if, every week, several different prompts with different types of results and/or focus were given to the models and the outputs recorded, there would be a factual record of any changes.

I tend to think the general idea that the models change over time is valid, but there is a lack of recorded evidence.
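A sketch of the kind of weekly recording I have in mind (the prompt set and file layout are just examples; this assumes the pre-1.0 `openai` Python client with `OPENAI_API_KEY` set):

```python
import datetime
import json
import openai  # assumes OPENAI_API_KEY is set in the environment

# A fixed prompt set exercising different kinds of output
PROMPTS = [
    "Summarize the plot of Hamlet in two sentences.",
    "Write a Python function that reverses a string.",
    "List three arguments for and against daylight saving time.",
]

def record_weekly_run(model="gpt-4"):
    """Run the same prompts against the model and append results to a dated log."""
    stamp = datetime.date.today().isoformat()
    with open(f"model_log_{model}_{stamp}.jsonl", "a") as f:
        for prompt in PROMPTS:
            resp = openai.ChatCompletion.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                temperature=0,  # keep sampling stable so drift is easier to spot
            )
            f.write(json.dumps({
                "date": stamp,
                "model": model,
                "prompt": prompt,
                "response": resp["choices"][0]["message"]["content"],
            }) + "\n")

record_weekly_run()
```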

@EricGT Your comments have been super helpful, and I saw you mentioned the site in another thread, thank you! We've launched on HN now.

As I often note for such thanks: you did the work, so the thanks goes to you. I am just passing along something of value with a simple link.


Hacker News discussion thread

The problem with a bot monitoring ChatGPT generation is that the usage runs afoul of the Terms of Use:

You may not … (iv) except as permitted through the API, use any automated or programmatic method to extract data or output from the Services, including scraping, web harvesting, or web data extraction;

OpenAI has put a CAPTCHA in place for Plus users to break the automations that would try to use ChatGPT at the capacity the API offers, or to reach models the API currently denies them.

For others needing the source of that info:

https://openai.com/policies/terms-of-use

  1. Usage Requirements
    (c) Restrictions
    (iv) except as permitted through the API, use any automated or programmatic method to extract data or output from the Services, including scraping, web harvesting, or web data extraction

We are working on a fix - thank you for reporting.

1 Like

The issue you mentioned has been fixed now - thanks for reporting!

I’ll start working on my unofficial “unofficial OpenAI status dashboard” status dashboard…

Thanks!

I use this about 20 times a day.

1 Like

I have my own private version. It's event-driven, so it doesn't have pretty graphs produced at regular intervals like this site does.

But one metric I measure, which this site doesn't, is output tokens per second.

This measures the overall generation performance of the model.
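A rough sketch of how it can be computed from a single call (again assuming the pre-1.0 `openai` Python client; note this wall-clock timing includes queueing and time to first token):

```python
import time
import openai  # assumes OPENAI_API_KEY is set in the environment

def output_tokens_per_second(model="gpt-4", prompt="Write a haiku about the sea."):
    """Time one completion and divide output tokens by wall-clock seconds."""
    start = time.monotonic()
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    elapsed = time.monotonic() - start
    return resp["usage"]["completion_tokens"] / elapsed

print(f"{output_tokens_per_second():.1f} output tokens/sec")
```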

So … add output tokens per second!

My 2 cents :sunglasses:

2 Likes

Wow, that post is from a while ago :rofl:

But basically, I have a database that records all the API responses, including the time taken per response and the tokens used.

You can then graph this data offline to see your performance, for example output tokens per second, which is important.
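For instance, a minimal sketch of that kind of logging with SQLite (the table layout is just an example):

```python
import sqlite3
import time
import openai  # assumes OPENAI_API_KEY is set in the environment

db = sqlite3.connect("api_log.db")
db.execute("""CREATE TABLE IF NOT EXISTS calls (
    ts REAL, model TEXT, seconds REAL,
    prompt_tokens INTEGER, completion_tokens INTEGER)""")

def logged_call(model, messages):
    """Make a chat completion and record its timing and token usage."""
    start = time.monotonic()
    resp = openai.ChatCompletion.create(model=model, messages=messages)
    seconds = time.monotonic() - start
    usage = resp["usage"]
    db.execute("INSERT INTO calls VALUES (?, ?, ?, ?, ?)",
               (time.time(), model, seconds,
                usage["prompt_tokens"], usage["completion_tokens"]))
    db.commit()
    return resp

# Offline, output tokens per second is then just completion_tokens / seconds per row.
```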