With the GPT-4 8k token API, being stuck at the standard response size limits its usefulness. I have been using the 8k model and it has been great for data analysis, but its responses are capped at the same size as the other models'. For example, if I give it a dataset to clean of noise, it cannot return the cleaned version without getting cut off. This would not be a problem if there were a way to let it continue without having to prompt it to do so. I have noticed that when prompted to continue, even when given an explicit point to continue from, it tends to ignore all previously established formatting styles and rules.
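For anyone working around the same limit, here is a minimal sketch of the continuation loop I mean, assuming the Python `openai` client (v1.x); the model name, round limit, and the exact "continue" wording are just placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def complete_long(messages, model="gpt-4", max_rounds=5):
    """Keep asking the model to continue while it stops at the token limit."""
    parts = []
    for _ in range(max_rounds):
        resp = client.chat.completions.create(model=model, messages=messages)
        choice = resp.choices[0]
        parts.append(choice.message.content)
        if choice.finish_reason != "length":
            break  # finished naturally; no truncation
        # Feed the partial answer back and ask it to pick up where it stopped.
        # In my experience this is exactly where the formatting starts to drift.
        messages = messages + [
            {"role": "assistant", "content": choice.message.content},
            {"role": "user", "content": "Continue exactly where you left off, "
                                        "keeping the same format."},
        ]
    return "".join(parts)
```

This works, but stitching the parts together still depends on the model respecting the earlier format, which is the part that keeps breaking.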