I tried building some applications that use the ChatCompletion endpoint of the OpenAI API. My workflow was to test the code first in a Jupyter notebook, then deploy it with Streamlit or FastAPI depending on the app. In the Jupyter notebook everything seemed fine.
Sometimes (roughly 20% of the calls) an error occurred: APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', ConnectionResetError(10054, "Connessione in corso interrotta forzatamente dall'host remoto", None, 10054, None)) (the Italian message means "An existing connection was forcibly closed by the remote host"),
or some similar error, all related to the connection with OpenAI.
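For what it's worth, transient connection errors like this are usually handled with a retry-and-backoff wrapper around the API call. Below is a minimal sketch; `call_with_retries` is a hypothetical helper name, and in real code you would catch the library's specific exception (e.g. `openai.error.APIConnectionError`) rather than the broad `ConnectionError` used here.

```python
import time

def call_with_retries(fn, *, retries=3, backoff=1.0):
    """Call fn(), retrying on connection errors with exponential backoff.

    Hypothetical sketch: with the openai library you would catch
    openai.error.APIConnectionError instead of the generic ConnectionError.
    """
    for attempt in range(retries + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries:
                raise  # out of retries: surface the error to the caller
            time.sleep(backoff * (2 ** attempt))  # wait 1s, 2s, 4s, ...
```

With the real API you would then call something like `call_with_retries(lambda: openai.ChatCompletion.create(...))`.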
Then I tried it as a Streamlit app, and surprisingly got a 0% error rate, without any exception handling at all. I hosted the app on an Azure virtual machine and everything works fine.
Then I tried hosting the same app as an API endpoint, using FastAPI, and here is the strange thing: a 50% error rate (on average, one query out of two fails), on the same Azure VM.
I can assure you the core code is exactly the same; the only difference is the UI layer, which is absent in FastAPI.
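One possible (speculative) explanation: Streamlit runs your script in its own worker thread per session, while a FastAPI endpoint declared with `async def` runs on the single event-loop thread, so a blocking `openai.ChatCompletion.create` call can stall the loop and interact badly with connection keep-alive and timeouts. If your endpoint is `async def`, it may be worth declaring it as plain `def` (FastAPI then runs it in a threadpool) or offloading the blocking call with `asyncio.to_thread`. A stdlib-only sketch of the offloading pattern, with a hypothetical `blocking_chat_call` standing in for the real OpenAI call:

```python
import asyncio
import time

def blocking_chat_call(prompt: str) -> str:
    """Stand-in for a blocking openai.ChatCompletion.create call."""
    time.sleep(0.1)  # simulate network latency
    return f"answer to: {prompt}"

async def endpoint(prompt: str) -> str:
    # Offload the blocking call to a worker thread so the event loop
    # (and other in-flight requests) keep running; Python 3.9+.
    return await asyncio.to_thread(blocking_chat_call, prompt)

print(asyncio.run(endpoint("hello")))  # → answer to: hello
```

Whether this is the actual cause of your resets I can't say, but it is the most common behavioral difference between the two frameworks with the same core code.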
I have a reasonable statistical basis for this claim: since we use it as a service for companies, we have run it close to a hundred times under different conditions.
My hypothesis is that the issue comes from some difference in how Streamlit and FastAPI execute the code, but I would like to understand this spooky effect better.
Thanks to everyone who can help me.