Why do I need to set it explicitly? The completion tokens don't even reach 4k.
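
For reference, a minimal sketch of what "setting it explicitly" looks like with the OpenAI Python SDK, assuming a reasoning model (o1-mini style) where `max_completion_tokens` rather than `max_tokens` is the accepted parameter; the model name and the 4096 value here are placeholders, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Explicitly cap the completion budget. For reasoning models, hidden
# reasoning tokens also count against this limit, so the visible output
# can stop well short of the cap while finish_reason still comes back
# as "length".
response = client.chat.completions.create(
    model="o1-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize this in one line."}],
    max_completion_tokens=4096,  # explicit limit; adjust as needed
)

print(response.choices[0].message.content)
print(response.choices[0].finish_reason)
```

Checking `finish_reason` as above is a quick way to confirm whether the limit, rather than the model, ended the response.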