I don’t mind the billing; I just can’t handle a response longer than 1600 characters (letters/spaces). Is there a way to tell OpenAI to take that into account and not generate responses longer than that?
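For context, the API's `max_tokens` parameter caps *tokens*, not characters, so a character budget has to be approximated. A common rough heuristic (an assumption here, not an exact rule) is about 4 characters per English token, so 1600 characters maps to roughly 400 tokens. A minimal sketch of that conversion, with the helper name `char_budget_to_max_tokens` being hypothetical:

```python
def char_budget_to_max_tokens(char_limit: int, chars_per_token: int = 4) -> int:
    """Approximate a character limit as a token cap.

    Heuristic only: real token counts vary by language and content,
    so leave some margin or verify with a tokenizer such as tiktoken.
    """
    return max(1, char_limit // chars_per_token)

max_tokens = char_budget_to_max_tokens(1600)  # roughly 400 tokens

# The cap would then be passed to the chat completions call, e.g.:
#   client.chat.completions.create(..., max_tokens=max_tokens)
# Note that max_tokens truncates output mid-sentence when hit; for replies
# that end gracefully under the limit, also instruct the model in the
# system prompt (e.g. "keep answers under 1600 characters").
```

Because `max_tokens` is a hard cutoff rather than a target, combining it with a prompt instruction is usually more reliable than either alone.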