Can you achieve the same results using the API instead of ChatGPT Plus (to make it cheaper)?

I was wondering if anyone uses the API instead of ChatGPT Plus. I don’t use DALL·E or GPTs.

So, I pay $20, and I’m sure I don’t use it to its full capacity.

I’m wondering if anyone has used a chatbot template connected to the OpenAI API.

If yes, how much do you pay? Do you use it daily, and what’s the quality like?

Yes, one can build a chatbot that performs similarly to ChatGPT.

You also don’t need to “chat”; you can align the AI specifically to perform processing tasks.

In fact, because you have a variety of quality models to pick from (rather than only the latest, with its serious drawbacks and continuing faults), you can choose a model that performs well at a particular task.

Here’s a sample preset from the API playground showing how to instruct a chatbot (despite the system prompt, the answer shown is from gpt-3.5-turbo-0613, as later models’ outputs were poor).
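A playground preset like that can also be expressed in code. Below is a sketch using the official openai Python SDK; the system prompt, model choice, and temperature are illustrative assumptions, not the exact preset from the post:

```python
# Sketch of a playground-style preset: the system message aligns the
# model to a single processing task. Prompt text is a made-up example.
import os

preset = {
    "model": "gpt-3.5-turbo-0613",
    "temperature": 0.2,
    "messages": [
        {"role": "system",
         "content": "You are a text processor. Reply only with a "
                    "one-sentence summary of the user's input."},
        {"role": "user",
         "content": "Paste the text to be summarized here."},
    ],
}

# Only send the request if an API key is configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(**preset)
    print(response.choices[0].message.content)
```

The same dict of parameters can be reused for every call, which is what makes a task-aligned "preset" cheaper than an open-ended chat.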

An API account funded with prepaid credit can then continue to interact with the AI by submitting more messages.
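Continuing the interaction means keeping the message history yourself and resending it with each new turn, since the API is stateless. A minimal sketch (the helper function is mine, not part of the SDK):

```python
# You maintain the history list; the whole list is resent, and billed
# as input tokens, on every call.
import os

history = [
    {"role": "system", "content": "You are a helpful assistant."},
]

def add_user_message(history, text):
    """Append a user turn to the running conversation."""
    history.append({"role": "user", "content": text})
    return history

add_user_message(history, "What is the capital of France?")

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=history
    ).choices[0].message
    # Store the assistant turn so the next call sees the full exchange.
    history.append({"role": "assistant", "content": reply.content})
```

Note that this is also why long chats get expensive: every earlier turn you keep in `history` is paid for again on each new request.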


Do you use such a solution?

Do you know the approximate costs? Would they be about the same if I used the API?

The amount you pay per API call can vary drastically depending on your inputs and your management of chat information.

Here is a basic input, with no memory of prior chat:
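A basic stateless request like that, as a sketch in Python (model name and question are illustrative):

```python
# A minimal stateless call: one user message, no system prompt and no
# prior history, so you pay only for these tokens plus the reply.
import os

request = {
    "model": "gpt-4-turbo",
    "messages": [
        {"role": "user", "content": "Give me three names for a bakery."},
    ],
}

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI
    client = OpenAI()
    answer = client.chat.completions.create(**request)
    print(answer.choices[0].message.content)
```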

With the gpt-4-turbo model costing $0.01 per 1k input tokens and $0.03 per 1k output tokens, sending 100 tokens costs 1/10 of a cent. Then you pay for however much the AI writes.

However, that input can grow vastly if you are maintaining a long chat history of past interactions, processing multiple documents or hundreds of lines of computer code, or placing other information into context: up to $1.25 per API call at the maximum context length of the longest gpt-4-turbo model (or, on the very limited-release gpt-4-32k, $2.88 for 16k tokens in and 16k out). It depends on your use, as you pay per unit of data.