Noobie here. Heads-up: I'm not a developer; I've been doing no-code for the past 4 months and experimenting with ChatGPT API calls using Postman.
So, my question is the following:
I have made an API call to ChatGPT and got a response. I am using a Wized form, which works very similarly to Postman in terms of how I send the request.
Now, I want to send a follow-up on that response. Let’s say, I want to tell him “Make it shorter, please [so when the machines rise, I will be spared]”.
In the ChatGPT web app, it’s all in the same text box, so I know how to do that. But when I do an API call, I don’t know how to reference the previous response I got. I understand every API call is a unique one. So when I tell him “make it shorter, please” - he does not understand what I am talking about… obviously… “Make what shorter, human?”.
I hope I am clear, and I hope this is a very simple basic thing. Maybe I just need to add the previous response in the new request for context or something… I don’t know…
Hopefully this code snippet will shed some light on it for you:
import openai

# Pass the whole conversation so far, including the assistant's earlier reply,
# in the messages list of each request
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
Essentially, you need to append the past replies under the assistant role, and then append the latest question as a new user message.
Example:
import openai

# Create an empty list to hold the messages
messages = []

# Append each message as a dictionary to the list
messages.append({"role": "system", "content": "You are a helpful assistant."})
messages.append({"role": "user", "content": "Who won the world series in 2020?"})
messages.append({"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."})
messages.append({"role": "user", "content": "Where was it played?"})

# Use the list of messages in the ChatCompletion.create() function
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages
)

# You can then access the response data as needed
print(response["choices"][0]["message"]["content"])
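If you then want to send a follow-up like your “Make it shorter, please” example, the same pattern continues: append the assistant’s reply and your new user message to the list, then call the API again with the full history. A minimal sketch continuing the example above (the follow_up variable name is just illustrative):

# Add the assistant's reply to the history so the model has it as context
messages.append({"role": "assistant", "content": response["choices"][0]["message"]["content"]})
# Add the follow-up request as a new user message
messages.append({"role": "user", "content": "Make it shorter, please."})

# Send the entire conversation again; the model now knows what "it" refers to
follow_up = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages
)
print(follow_up["choices"][0]["message"]["content"])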
As @Foxalabs suggests in the previous comment, the API does not have any ‘memory’ of previous calls, so you will have to pass that information along in the next API call.
The best suggestion I could give here would be to break the whole process into two calls: one to ask the question and get the response back, and another which sends that response back to GPT and asks it to condense/shorten it.
This would give you finer control over what GPT outputs and how the final result is formatted.
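As a rough sketch of that two-call approach (using the same openai library as the examples above; the exact prompt wording is just an illustration):

import openai

# Call 1: ask the original question
first = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"}
    ]
)
answer = first["choices"][0]["message"]["content"]

# Call 2: send that answer back and ask GPT to condense it
second = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Make this shorter, please:\n\n" + answer}
    ]
)
print(second["choices"][0]["message"]["content"])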
One thing I will say, and this is just my opinion, but No-Code is actually coding, just with a syntax that is non-transferable. Sure, some of the detail is obfuscated, and it’s nice not to have to mess with the HTML, CSS, JS, PHP, etc., but it’s surprising how quickly you can pick those up. The skills you need to make No-Code do something complex are the same skills you could put into a fairly easy-to-pick-up language like Python, and then you have a valuable, transferable skill that can build anything.
If No-Code is working for you and you are getting great, usable end products for little effort, that’s fine, but my guess is you need to do something a little more complex and are running into issues.