Hey all, we are thrilled to share that the ChatGPT API and Whisper API are now available. We also shipped a new data usage guide and are focusing on stability to make our commitment to developers and customers clear.
This is great! Any ideas on how we would use embeddings with the new 3.5 version? Or uploading text files to train it on? The docs seem to indicate that it’s not possible with this model?
Love it! My HuggingFace implementation of Whisper went kaput about a week ago, so I fell back to AWS Transcribe (which sucks!). Looking forward to using the API!
It’s not pre-moderated. I just closed all my tabs, but one of the docs specifically mentions at the end that we still need to run content through the moderations endpoint first.
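For anyone wiring this up, the gating pattern looks roughly like the sketch below. The helper and callable names are my own, not part of the API; in production the `is_flagged` callable would wrap a real call to the moderations endpoint and `generate` would wrap the chat completion request.

```python
def moderated_completion(user_input, is_flagged, generate):
    """Gate a completion request behind a moderation check.

    is_flagged: callable standing in for a call to the moderations
    endpoint; returns True if the input violates the usage policies.
    generate: callable standing in for the chat completion request.
    """
    if is_flagged(user_input):
        return None  # refuse to forward flagged input to the model
    return generate(user_input)


# Stub checks for illustration only; swap in real API calls.
flagged_terms = {"forbidden"}
reply = moderated_completion(
    "hello there",
    is_flagged=lambda text: any(t in text for t in flagged_terms),
    generate=lambda text: f"echo: {text}",
)
```

The point is just that moderation happens before the model call, so flagged input never reaches the completion endpoint.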
This is excellent news. One question: we are using “code-davinci-002” for code generation prompts. Will gpt-3.5-turbo have code capabilities, or do we still need to use “code-davinci-002” for code?
This is truly awesome. Congrats to the whole team, @logankilpatrick. I have been trying it, and we’re definitely switching to turbo 3.5, even for non-chat use cases. There are two things I’m particularly amazed by:
1. A cost reduction of 90% in just three months, with an increasing number of users. Truly fascinating. Suddenly, all our bills are reduced by a factor of 10 while maintaining the same generation quality.
2. The new ChatML language. It’s very clever to defend against prompt injection, and it makes way more sense from a conversational point of view. We still need to manage state and sessions on our end, but I was expecting that.
You guys rock. No wonder everybody is so thrilled with you.
I’m curious regarding the metadata and roles. I’m assuming the ChatML documentation isn’t exactly usable yet, as I cannot change roles or add metadata.
Will there be a possibility for dynamic roles? For example, if GPT notices a question, it could set the role as QUERY for the back-end server to pick up and process. As of now, it seems that we can set one of three roles, and that’s it.
Or is this all part of a future update regarding metadata? It does seem like ChatML is for preventing third-party prompt injection; however, I’m a bit sad that I can’t “prime” an answer using partial completion.
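For reference, the three roles the chat endpoint currently accepts can be enforced on our side with a quick check like this (a minimal sketch; the function name is my own):

```python
# Roles the chat completions endpoint accepts at launch; anything
# else is rejected by the API, so we can fail fast client-side.
ALLOWED_ROLES = {"system", "user", "assistant"}


def validate_messages(messages):
    """Raise if any message uses a role outside the supported set."""
    for m in messages:
        if m.get("role") not in ALLOWED_ROLES:
            raise ValueError(f"unsupported role: {m.get('role')!r}")
    return messages
```

A custom role like QUERY would fail this check today, which is exactly why I fall back to the prefix scheme below.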
To add: as of now, I am making GPT prefix the response with a keyword and a unique separator for my server to grab and act on, like so:
{"role": "user", "content": "Hi. My question is: When was the ChatGPT API released?"},
{"role": "assistant", "content": "QUERY|Can you provide information about the release date of the ChatGPT API?"},
The QUERY| prefix is caught by my server, which outputs the result as a system message:
{"role": "system", "content": "The ChatGPT API was released March 1, 2023"}
Obviously this still opens me up to injection methods.
I’m assuming this will be applicable in the future with metadata?
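In case it helps anyone, the prefix-and-dispatch scheme above can be sketched roughly like this. The separator, keyword, and handler names are my own convention, not anything from the API:

```python
SEPARATOR = "|"
KEYWORD = "QUERY"


def dispatch(assistant_content, handle_query):
    """Route an assistant reply based on the QUERY| marker.

    If the reply starts with the marker, send the remainder to the
    back-end handler and wrap its answer as a system message;
    otherwise pass the reply through unchanged.
    """
    prefix = KEYWORD + SEPARATOR
    if assistant_content.startswith(prefix):
        question = assistant_content[len(prefix):]
        return {"role": "system", "content": handle_query(question)}
    return {"role": "assistant", "content": assistant_content}


# Illustration with a canned lookup standing in for a real back end.
msg = dispatch(
    "QUERY|Can you provide information about the release date of the ChatGPT API?",
    handle_query=lambda q: "The ChatGPT API was released March 1, 2023",
)
# msg == {"role": "system", "content": "The ChatGPT API was released March 1, 2023"}
```

The injected system message then gets appended to the conversation before the next completion call.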