Where is the video on prompting with the new message-based calls? I am having a lot of trouble making the switch from davinci-003 to turbo.
I am not very savvy at programming, but I managed to stumble through it and built a functioning product with davinci-003 that works well. I tried switching to the turbo endpoint, but it just won't work, and I've been trying for days. If I can somehow get turbo working so I can test it, it would bring the cost down 10x, which would make my venture much more viable; I am not sure the current cost of Davinci will even be sustainable relative to the revenue possibilities.
The way my prompt currently works is straightforward, and it works with the Davinci API endpoint with no issues. I am happy with it for the most part. This is how the prompt works:
```javascript
const prompt = `I enter the prompt instructions here, then add... You previously said "${previousOutputs.join(' ')}", and then the user said "${input}"\n`;

const response = await fetch('https://api.openai.com/v1/engines/text-curie-001/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer sk-APIxxx'
  },
  body: JSON.stringify({
    prompt: prompt,
    temperature: 0.6,
    max_tokens: 1000
  })
});
```
That pulls the previous output, combines it with the new user input from the input box, and sends it as one nice prompt (with instructions that I don't need to explain here).
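For context, this is roughly how I read the answer back out on the Davinci side. The helper name here is just for illustration; the completions endpoint puts the generated text at `choices[0].text`:

```javascript
// Illustrative helper (the name is my own): the completions
// endpoint returns the generated text at choices[0].text.
function extractCompletionText(data) {
  const choice = data.choices && data.choices[0];
  return choice ? choice.text : '';
}

// e.g. after: const data = await response.json();
// const output = extractCompletionText(data);
```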
I have tried many, many different approaches. I feel like this should work, but it doesn't:
```javascript
const prompt = `I enter the prompt instructions here, then add... You previously said "${previousOutputs.join(' ')}", and then the user said "${input}"\n`;

const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer APIxxx'
  },
  body: JSON.stringify({
    model: 'gpt-3.5-turbo',
    messages: { role: 'user', content: prompt },
    temperature: 0.6,
    max_tokens: 1000
  })
});
```
I have tried putting the actual prompt directly in the content instead, and every variation of brackets and braces, but it just doesn't work. I got to a point where the request was sent to the API and a completion happened (I saw it on the OpenAI admin side), but it caused other type errors and the result wouldn't post to the output; don't ask me why, I couldn't figure that out. I just find it strange that a few small changes, like the endpoint URL and switching the prompt to messages, would make such a huge difference and break it beyond repair.
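In case it helps pin down the problem, my understanding is that the chat endpoint expects `messages` to be an array of role/content objects rather than a single object, and the reply comes back in a different place than with completions. This is only a sketch of how I think the request body should look (the helper name is my own):

```javascript
// Illustrative helper (the name is my own): the chat endpoint
// expects "messages" to be an ARRAY of { role, content } objects,
// not a single object.
function buildChatBody(previousOutputs, input) {
  const prompt = `I enter the prompt instructions here, then add... You previously said "${previousOutputs.join(' ')}", and then the user said "${input}"\n`;
  return {
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: prompt }],
    temperature: 0.6,
    max_tokens: 1000
  };
}

// Usage: body: JSON.stringify(buildChatBody(previousOutputs, input))
// The reply then comes back at data.choices[0].message.content,
// not data.choices[0].text as with the old completions endpoint.
```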