Then, to confirm, I entered gpt-3.5-turbo-16k into my API settings. And lo and behold!
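For anyone else making the switch: the only change is the model string in the chat completions request. A minimal sketch of the request body — the model name and the /v1/chat/completions endpoint come from this thread; the messages here are purely illustrative:

```python
import json

# Minimal chat completions payload for the new 16K model.
# Only the "model" string changes versus a plain gpt-3.5-turbo request.
payload = {
    "model": "gpt-3.5-turbo-16k",  # 16K context window variant
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this long document..."},
    ],
}

# POST this JSON to /v1/chat/completions with your usual Authorization header.
print(json.dumps(payload, indent=2))
```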
So, in my particular use case, I found gpt-4 to be the best model, but far too expensive for daily use. I reluctantly fell back to gpt-3.5-turbo, which mostly worked but kept giving me headaches because of its 4K context window.
Today, OpenAI solved all of those problems! Now, I’m sure other issues will arise as we continue on this journey, but today, I am a Happy OpenAI Camper!
Also, per a prior announcement, today was the day they were supposed to pull the plug on snapshot models like gpt-3.5-turbo-0301, but those models will now stay up through September 13.
These use cases are enabled by new API parameters in our /v1/chat/completions endpoint, functions and function_call, that allow developers to describe functions to the model via JSON Schema, and optionally ask it to call a specific function.
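Concretely, the two new parameters slot into the same /v1/chat/completions request body. Here is a sketch — the functions and function_call parameters and the JSON Schema format come from the announcement, but the get_current_weather function and its schema are hypothetical examples of my own:

```python
import json

# Describe a (hypothetical) function to the model via JSON Schema.
weather_fn = {
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Boston"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

payload = {
    "model": "gpt-3.5-turbo-0613",  # the June snapshot that supports function calling
    "messages": [{"role": "user", "content": "What's the weather in Boston?"}],
    "functions": [weather_fn],  # new parameter: functions the model may call
    # Optional: force the model to call this specific function
    # (omit it, or pass "auto", to let the model decide).
    "function_call": {"name": "get_current_weather"},
}

print(json.dumps(payload, indent=2))
```

If the model decides to call the function, the response's message carries a function_call object with the name and JSON-encoded arguments, which your code then executes and feeds back in a follow-up message.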
This explains a ChatGPT prompt from earlier today, which is noted in this topic.
Thank you for the heads up. I just tested the function calling and it works as expected. I have to refactor my code now to remove the blocks that mimic this same functionality. This really made my day!
A better way to say thanks than to create a reply is to click the heart at the bottom of a reply.
This tells the person that you liked the reply without requiring the person to read another post.
It also shows others that:
- the reply is useful
- this person is giving useful information
Hearts are like the currency of Discourse forums. While you can't actually spend them, they do count toward maintaining trust level 3, so please consider clicking a heart when giving thanks.