I’m using the gpt-3.5-turbo model, and as my prompts grow, the token count keeps increasing. Response time is slowing down in direct proportion to the number of tokens. How can I deal with this? It’s too slow to be usable.
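One common mitigation is to cap the prompt size so latency stays roughly constant as the conversation grows: keep the system message plus only the most recent messages that fit a token budget. Below is a minimal sketch; the `trim_history` helper and its budget are hypothetical, and the token count uses a rough 4-characters-per-token heuristic rather than the model’s real tokenizer (an assumption — a library like tiktoken would be more accurate).

```python
# Sketch: keep prompt tokens (and thus latency) roughly bounded
# as the chat history grows. Token counts are a chars/4 heuristic,
# not the model's actual tokenizer.

def approx_tokens(text: str) -> int:
    # Rough estimate: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int = 1000) -> list[dict]:
    """Keep the system message plus the most recent messages that fit."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(approx_tokens(m["content"]) for m in system)
    kept: list[dict] = []
    # Walk backwards from the newest message, stopping at the budget.
    for m in reversed(rest):
        cost = approx_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))
```

Passing the trimmed list instead of the full history keeps each request’s input tokens near the budget, so per-request latency stops growing with conversation length (output length still affects speed, so setting `max_tokens` and streaming the response also help perceived latency).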