GPT-3.5-turbo-instruct price?

Is the turbo instruct the same price as GPT 3.5 Turbo? The email I received says it’s “in line” with the other GPT 3.5 Turbo models. Has anyone tested it to see the costs?

I’m guessing it will fall alongside davinci-002, $0.0020 / 1K tokens, without the chat endpoint model’s 25% discount on input.

Using "Inspect Element" on your daily usage bar chart will show many more digits of the day's cost. One could send the model 3,999 tokens in and 1 token out, and work out what the input costs from the increase. You'd have to avoid using the API for anything other than the price probing while you do it.
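If anyone wants to try that probe, here's a rough sketch using the legacy (0.28-style) openai Python SDK's completions endpoint. The filler prompt and token counts are just the numbers from this thread, and the final per-1K price still has to be read manually off the usage page:

```python
import openai  # legacy 0.28-style SDK

openai.api_key = "sk-..."  # your key

# Send a large, known number of prompt tokens and only 1 completion token,
# so almost all of the billed cost comes from input.
prompt = "token " * 3999  # rough filler; the real token count may differ slightly

response = openai.Completion.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompt,
    max_tokens=1,
)

usage = response["usage"]
print(usage["prompt_tokens"], usage["completion_tokens"])

# Then read the (more precise) cost increase off the usage page and divide:
# price_per_1k_input ≈ observed_cost_delta / (usage["prompt_tokens"] / 1000)
```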

(edit: answer below)

Hi,

Same price, different functionality. The best way to describe the difference is with an example:

GPT-3.5-Turbo

Prompt : Dogs

Reply : Dogs are domesticated mammals known for their close relationship with humans.

GPT-3.5-Turbo-Instruct

Prompt: Dogs

Reply: are considered to be man’s best friend for a reason – they are loyal, loving companions who provide endless amounts of joy and companionship.

One is tuned to look at the prompt and produce a reply typical of a person responding to a question or request for information (3.5 standard); the Instruct model will "complete" any given prompt with what it considers the next logical group of words to continue that prompt. So they each have their uses, though mostly for legacy use at this stage, as the standard model can do both with a small amount of additional prompting. A sketch of the two calls is below.
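As a rough illustration (assuming the legacy 0.28-style openai Python SDK; the prompts are just the "Dogs" example above):

```python
import openai

# Chat model: the prompt is treated as a user message to be answered.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Dogs"}],
)
print(chat["choices"][0]["message"]["content"])

# Instruct/completion model: the prompt text is simply continued.
completion = openai.Completion.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Dogs",
    max_tokens=50,
)
print(completion["choices"][0]["text"])
```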

2 Likes

Great summary.

I'd like to say that I sometimes prefer the completions method, as it gives me the ability to "lead" or "prime" the response. Although we can usually structure the response using the system prompt, I do like being able to start it off directly.

I'm also hoping that this instruct model is less chatty, and a little more risky (so I can bully John Doe).

Here's hoping that they continued the spirit of "hold my beer" Davinci.

I think a good example would be for an email response.

Context: [context]
Instruction: Kindly reject Frank’s Hot Dogs from our grocery chain using the context.

Hey Frank, love the hot dogs. Unfortunately,
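In code, that priming might look roughly like this (again assuming the legacy openai Python SDK; the context string and names are just placeholders from the example above):

```python
import openai

context = "[context]"  # whatever background the reply should draw on
instruction = "Kindly reject Frank's Hot Dogs from our grocery chain using the context."
primer = "Hey Frank, love the hot dogs. Unfortunately,"

# With the completions endpoint the primer is part of the prompt,
# so the model is nudged to continue the reply you started.
prompt = f"Context: {context}\nInstruction: {instruction}\n\n{primer}"

response = openai.Completion.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompt,
    max_tokens=150,
)

print(primer + response["choices"][0]["text"])
```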

1 Like

I did that.

gpt-3.5-turbo-instruct, 5 requests
20,000 prompt + 5 completion = 20,005 tokens

Price: $0.03001

$0.03001 ÷ 20.005K tokens ≈ $0.0015 / 1K. So input costs the same as chat gpt-3.5-turbo.

1 Like

Aww yeah… We can bully again using GPT

Pfff move aside cGPT.

[Screenshot from 2023-09-18 21-05-45]

It's all coming together (I'm so happy there is finally a fairly priced instruct model :smiling_face_with_tear:)

1 Like

Hey, looks interesting… does this model support 16k yet?

1 Like

One might conclude from the lack of a -16k suffix: no.

This model's maximum context length is 4097 tokens, however you requested 17005 tokens
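If you want to check before sending, a quick token count keeps you under the 4,097-token limit. A small sketch using tiktoken's cl100k_base encoding (the encoding used by the 3.5-series models):

```python
import tiktoken

MAX_CONTEXT = 4097  # from the error message above

enc = tiktoken.get_encoding("cl100k_base")
prompt = "your prompt text here"  # placeholder
prompt_tokens = len(enc.encode(prompt))

# Budget left for the completion after the prompt is accounted for.
max_completion = MAX_CONTEXT - prompt_tokens
print(prompt_tokens, max_completion)
```
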
3 Likes

@Foxalabs Will the instruct model have 16k+ context length in the future?

Hi and welcome to the Developer Forum!

The instruct model was developed to give users of the legacy completion models an upgrade path come January next year, when those models are deprecated. As those older models did not have a 16k context, it is unlikely one will be added. There is always a possibility that the model proves very popular and demand is high, in which case that may be reviewed, but not at the moment.

1 Like

Start a new topic if you need help. This question is way off-topic.

1 Like