Uptime, reliability, and safety

I was prompted to return to this because I keep having problems with ChatGPT failing – not giving the wrong answer, which happens often but is understandable, but failing to answer at all. It hangs. I suspect this is overload, but since I pay for the service while it continues to operate for free users (which I support), there is at minimum a load-balancing problem.
The AI safety and alignment issue is still highly problematic. It seems as though nobody involved properly understands the types of mechanisms – social, political, and particularly technical – required to make certain that increasingly capable AI is safe. Among other things, the entities currently at the top of the chain of trust cannot reasonably be trusted, and even if they could be, they should not be given that much power in a system meant to control superhuman AI. Security at this level can get quite complex and sophisticated, and from what I have seen of the safety discussion surrounding AI, the people in charge are not nearly up to the task.

Consider this: ChatGPT is unresponsive. This should be effectively impossible, and to the extent that it is impaired, inspecting and correcting that impairment should be entirely straightforward.
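From the client's side, a hang is the worst failure mode precisely because it is silent: nothing distinguishes a slow answer from one that will never arrive. The standard defense is to bound every call with a timeout, converting an invisible hang into an explicit, loggable error. A minimal sketch of that pattern (the wrapper name and parameters are illustrative, not any real API):

```python
import concurrent.futures


def call_with_timeout(fn, timeout_s, retries=2):
    """Call fn(), treating any call that exceeds timeout_s as a failure.

    A hung call looks, to the caller, exactly like one that will never
    return; bounding it with a timeout turns the silent hang into an
    explicit TimeoutError that can be logged, retried, or escalated.
    """
    last_err = None
    for attempt in range(retries + 1):
        pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
        future = pool.submit(fn)
        try:
            return future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            last_err = TimeoutError(
                f"attempt {attempt + 1} exceeded {timeout_s}s"
            )
        finally:
            # Don't block on the (possibly hung) worker thread.
            pool.shutdown(wait=False)
    raise last_err
```

This only protects the caller, of course; the point of the argument above is that the operator of the service should never let it get this far in the first place.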
I get that ChatGPT exploded in a way that was not anticipated, and I get that it is difficult to maintain a large production system like this. However, when it comes to safety of the type that concerns us with AI, failure is not acceptable. If the main system can fail like this, it calls into question the ability of the people running it to design, build, and maintain a critical safety system under attack by a superhuman intelligence.