Hi! We currently rely on the Completions endpoints for logprobs, which is currently absent from the Chat endpoints. Are there plans to bring logprobs over to the new endpoints? logprobs are extremely important for us and used within our product experience.
We’re working quickly on closing the few remaining gaps in the Chat Completions API, such as log probabilities for completion tokens [source]
Thank you, I totally missed that!
idk how but i responded to the wrong post…
hi @novaphil, just wanted to check in on the status of this? any updates or estimates?
I would also love to see the logprobs functionality get added to chat completions!
In the completions endpoint, you can now use gpt-3.5-turbo-instruct. It will have different skills and need different prompting techniques than chat models, but it gives a bit more insight into how the newest models “think”.
It will be made available!
Bump ^! Are logprobs expected to be available in 2024?
Bumping this up!!
We depend on logprobs with a fine-tuned model (built on the earlier /v1/fine-tunes API). Since that endpoint is being deprecated in Jan 2024, we want to migrate to the /v1/fine_tuning/jobs API with GPT-3.5-turbo and the logprobs feature. We would greatly appreciate it if logprobs were added before the Completions or /v1/fine-tunes APIs are deprecated.
Just wasted half a day rebuilding my chains only to discover that I can’t use logprobs (for FLARE) with GPT-4, hahaha. Please patch it. Thanks!
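For anyone wondering why FLARE needs this: it gates retrieval on the model’s token-level confidence. Here’s a minimal, self-contained sketch of that gate (the function name and the 0.8 threshold are illustrative, not part of any SDK; the logprobs themselves would come from the API response):

```python
import math

def needs_retrieval(token_logprobs, min_prob=0.8):
    """FLARE-style confidence gate: trigger retrieval when any generated
    token's probability falls below the threshold.

    token_logprobs: per-token log probabilities (natural log), as returned
    in the `logprobs` field of a legacy Completions response.
    """
    return any(math.exp(lp) < min_prob for lp in token_logprobs)

# High-confidence tokens (probs ~0.95, ~0.99, ~0.90): no retrieval needed.
confident = [-0.05, -0.01, -0.1]
# One low-confidence token (prob ~0.30): retrieve before continuing.
uncertain = [-0.05, -1.2, -0.1]
```

Without logprobs on the chat endpoints, there’s simply no signal to feed a gate like this, which is why GPT-4 can’t be dropped into these chains yet.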
Hang tight! We are still working on this and have not forgotten.
Good. Does this include restoring prompt logprobs? The new error message on the Completions endpoint is not promising in that regard.
We rely on this for a core use-case. We are gradually exploring shifting to open-source models, but losing prompt logprobs would mean accelerating that timeline.
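To make the prompt-logprobs use-case concrete: with the legacy Completions endpoint (echo=true, logprobs=1) you get per-token logprobs for the prompt itself, which lets you score text without generating anything, e.g. prompt perplexity. A minimal sketch (function name is illustrative; the first prompt token’s logprob comes back as null since it has no context, which is why None is skipped):

```python
import math

def prompt_perplexity(prompt_logprobs):
    """Perplexity of a prompt from the per-token logprobs returned by the
    legacy Completions endpoint with echo=true, logprobs=1.

    The first entry is None (the first token has no conditioning context),
    so it is excluded from the average.
    """
    lps = [lp for lp in prompt_logprobs if lp is not None]
    return math.exp(-sum(lps) / len(lps))

# Example: two scored tokens, each with probability 1/2 -> perplexity 2.0
example = [None, -math.log(2), -math.log(2)]
```

This kind of scoring is exactly what breaks if echo + logprobs disappear from the remaining Completions models.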
Related conversation: Why will gpt-3.5-turbo-instruct no longer support echo=true and logprobs=1?