According to the API docs, the March snapshot should still be in service, right? But today when I ran with this snapshot, I kept receiving timeout errors:
Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=660)
Some of these errors can be intermittent. It's advisable to set your own timeout so that if the API call runs longer than expected, an error is raised and control is returned to the user or calling software instead of hanging for the full 660-second default you hit.
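One way to wire that up (a minimal sketch, not the official client API: the `flaky_call` stub below stands in for your actual request function) is a retry wrapper that treats timeouts as retryable and gives up after a few attempts:

```python
import time

def with_retries(fn, max_attempts=3, backoff_s=2.0):
    """Call fn(); on TimeoutError, wait and retry up to max_attempts times."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except TimeoutError:
            if attempt == max_attempts:
                raise  # give up and surface the error to the caller
            time.sleep(backoff_s * attempt)  # simple linear backoff

# Illustrative stub standing in for a real API call:
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("read timed out")
    return "ok"
```

If I recall correctly, the pre-v1 `openai` Python package also accepted a per-request timeout argument (and the newer client takes a `timeout` on construction), so you can cap the read timeout well below 660 seconds and let the wrapper above handle the retries.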
Additionally, you should not run synchronous, unthreaded calls back-to-back without at least a time.sleep(0.2), or some minimum delay to let the previous call clear.
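That pacing can be factored into a small helper (names here are illustrative, not from any library) that guarantees a minimum gap between successive calls without sleeping longer than necessary:

```python
import time

class Pacer:
    """Enforce a minimum interval between successive calls."""

    def __init__(self, min_interval_s=0.2):
        self.min_interval_s = min_interval_s
        self._last = 0.0  # timestamp of the previous call

    def wait(self):
        # Sleep only for the remainder of the interval, if any.
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_interval_s:
            time.sleep(self.min_interval_s - elapsed)
        self._last = time.monotonic()

# Usage: call pacer.wait() immediately before each API request.
pacer = Pacer(min_interval_s=0.2)
```

Using `time.monotonic()` rather than `time.time()` keeps the interval correct even if the system clock is adjusted mid-run.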
gpt-4-0314 can answer slowly, but it shouldn’t be that slow. Just now:
Hello! I am ChatWeb, an AI language assistant based on GPT-4, released in 2023.
My purpose is to assist you with any questions or information you may need.
I have a vast knowledge base that covers a wide range of topics up until 2022.
Please feel free to ask me anything, and I’ll do my best to help you.
[Finish reason: stop] 59 words/77 chunks in 5.3 seconds.
##>How are you able to answer questions so fast?
As an AI language model, I am able to answer questions quickly because I am
powered by advanced algorithms and a vast knowledge base. When you ask a
question, my underlying neural network processes the input, searches for
relevant information within my knowledge base, and generates a response
based on patterns and associations it has learned during its training.
Since all of this happens within a computer system, the processing and
generation of responses can occur at incredibly high speeds, allowing me to
provide answers almost instantaneously. This is in contrast to human thought
processes, which are generally slower due to the biological limitations of
the human brain.
[Finish reason: stop] 108 words/121 chunks in 8.7 seconds.