When I make a call via the API, will the whole predefined context be billed for every request?

I simply want to use ChatGPT to give me machine-readable output for simple commands. Example:
“let go the stern line” will output “release stern line”.
I will define a context where I describe the most common terms. This actually works; I tried it out in the ChatGPT web interface.
My question is now: when using the API, do I have to send my whole description with every request, or is there a way to save that context?
In the end I want to send only the commands. How will this be reflected in the tokens?

API AI models have no internal storage or memory. You must pass the entire list of instructions, and even the past conversation, with every turn.

Here’s some example text. At 275 tokens, it would cost about $0.000548 per prompt to send to gpt-3.5-turbo. A request sketch follows the list.

Set course to 270 degrees
Hoist the main sail
Lower the jib
Trim the foresail
Drop anchor
Man the helm
Check the starboard side
Inspect the port side
Secure the halyard
Tack to port
Jibe to starboard
Deploy the sea anchor
Coil the lines
Reef the mainsail
Raise the mizzenmast
Lash down the cargo
Swab the deck
Splice the mainbrace
Heave to
Prepare the lifeboats
Reef the topsail
Ready the binnacle
Lubricate the winches
Sound the foghorn
Steady as she goes
Swab the poop deck
Keelhaul the scallywag!
Rig the davits
Sheet in the jib
Check the bilge
Man overboard drill!
Batten down the hatches
Ready the sextant
Strike the colors
Weigh anchor
Set the staysail
Cast off the lines
Heave the lead
Navigate by the stars
Secure the cannons
Man the capstan
Repel boarders!
Strike the mizzenmast
Sound the ship’s bell
Turn hard to starboard
Prepare the longboat
Strike the foremast
Signal with semaphore
Hoist the courtesy flag
Steer by dead reckoning
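
To make this concrete, here is a minimal sketch of what each API call could look like with the openai Python SDK (v1 style). The model name, instruction wording, and helper function are just placeholders for illustration; the point is that the whole predefined context sits in the system message and is re-sent, and billed as input tokens, on every request.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The full predefined context: re-sent (and billed as input tokens) on every call.
SYSTEM_CONTEXT = (
    "Translate informal sailing commands into machine-readable form. "
    "Known commands include: Set course to 270 degrees, Hoist the main sail, "
    "Lower the jib, Drop anchor, Weigh anchor, Cast off the lines, and so on."
)

def translate_command(command: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_CONTEXT},  # whole context, every time
            {"role": "user", "content": command},           # only this part changes
        ],
    )
    return response.choices[0].message.content

print(translate_command("let go the stern line"))  # e.g. "release stern line"
```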


The API is stateless, meaning it doesn’t offer any memory to store the context you are referring to; you need to send it with every request.

You can try to make your context as short as possible to reduce costs.
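
If you want to see how many tokens your trimmed context actually uses, and roughly what each request would cost, you can count them locally with OpenAI’s tiktoken library. The file name and the $0.002 per 1K input tokens price below are assumptions (pricing changes over time), so treat this only as an estimating sketch.

```python
import tiktoken

# Hypothetical file holding your predefined instructions.
context = open("sailing_context.txt").read()

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
n_tokens = len(enc.encode(context))

# Assumed input price of $0.002 per 1K tokens; check current pricing before relying on this.
price_per_1k = 0.002
print(f"{n_tokens} tokens ≈ ${n_tokens / 1000 * price_per_1k:.6f} per request")
```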
