As planned, I intend to call the AI service from my backend (so users can't bypass payment), but the AI takes a long time to return results, which leads to high concurrency on the backend server. I'd like to use polling to handle this, but I couldn't find a similar operation in the documentation. I'm looking for a solution.
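For context, a common way to implement this is a submit-then-poll pattern: the backend accepts the request, returns a job id immediately, and the client polls for the result while the slow AI call runs in a worker pool. Here is a minimal in-memory sketch of that pattern; `slow_ai_call` is a hypothetical stand-in for the real AI request, and `JobStore` is an illustrative name, not an existing API.

```python
import time
import uuid
from concurrent.futures import ThreadPoolExecutor

def slow_ai_call(prompt: str) -> str:
    """Hypothetical stand-in for the long-running AI request."""
    time.sleep(0.1)  # simulate a slow model response
    return f"answer to: {prompt}"

class JobStore:
    """Minimal polling pattern: submit() returns a job id at once;
    the client calls get_status() until the result is ready."""

    def __init__(self, workers: int = 4):
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._jobs = {}

    def submit(self, prompt: str) -> str:
        job_id = uuid.uuid4().hex
        # Run the slow call in the background; respond to the client immediately.
        self._jobs[job_id] = self._pool.submit(slow_ai_call, prompt)
        return job_id

    def get_status(self, job_id: str) -> dict:
        fut = self._jobs[job_id]
        if not fut.done():
            return {"status": "pending"}
        return {"status": "done", "result": fut.result()}

# Client-side polling loop (in a real app this would be repeated HTTP requests).
store = JobStore()
job_id = store.submit("hello")
while (state := store.get_status(job_id))["status"] != "done":
    time.sleep(0.05)
print(state["result"])
```

In a web app the two methods would sit behind two endpoints (e.g. `POST /jobs` and `GET /jobs/{id}`), and the job dict would live in a shared store such as Redis rather than process memory.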