ChatGPT API integration in apps for devs

I think that the API for any model before GPT-4 should return some type of value, or a specific answer, when the model is unable to answer a question (for example, about events after 2021 and other things it doesn't know). That way devs could have more control over how to handle these cases, such as using other resources to get the wanted information.

Hey Cuzzo,

It would be nice, but that would be a bit tough to embed in the model. What you can do, though, is give the agent instructions that stabilize it and make its responses more predictable. There are a lot of good experiments going on with stabilizing input and output.

You can use step-by-step reasoning, few-shot examples, etc. There is a whole slew of techniques, but it is up to the developer to implement them. Here is a list of techniques you can use to help stabilize your agents:

Reliability | Learn Prompting: Your Guide to Communicating with AI
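
As a rough illustration of the few-shot idea, here is a minimal sketch assuming the openai Python client and a made-up `UNKNOWN` sentinel string (an illustrative convention, not an official API feature):

```python
# Few-shot prompting to make "I don't know" responses predictable.
# The UNKNOWN sentinel and the example Q/A pairs are illustrative
# assumptions, not an official API feature.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": (
        "Answer the user's question. If you do not know the answer, or the "
        "question concerns events after your training cutoff, reply with "
        "exactly: UNKNOWN"
    )},
    # Few-shot examples showing the model both behaviours.
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris."},
    {"role": "user", "content": "Who won the 2023 World Series?"},
    {"role": "assistant", "content": "UNKNOWN"},
    # The real question.
    {"role": "user", "content": "What did Joe Biden do during 2022?"},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    temperature=0,  # low temperature also helps stabilize output
)
print(response.choices[0].message.content)
```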

Thanks Codie, I will check it out.

Your suggestion of having the API for models like GPT-3 (and presumably GPT-4, if it exists) return a specific value or signal when it is unable to answer a question is a valid one. In practice, how an API handles such situations can depend on several factors, including the use case, the design philosophy of the model, and the API provider’s choices. Providing a specific signal or value when the model can’t answer a question would indeed give developers more control over how they handle such cases. This can be valuable because it allows developers to create a more user-friendly experience by providing alternative responses or gracefully handling the situation.
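
Until the API offers such a signal natively, you can approximate it yourself. A minimal sketch, assuming the `UNKNOWN` sentinel convention from the example above; `fetch_from_other_source()` is a hypothetical placeholder for whatever fallback resource you use:

```python
# Dev-side handling of a "can't answer" signal: detect a sentinel in the
# model's answer and fall back gracefully. The UNKNOWN convention and
# fetch_from_other_source() are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

def fetch_from_other_source(question: str) -> str:
    # Placeholder for a search API, news database, etc.
    return f"(looked up elsewhere) No stored answer for: {question}"

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "If you cannot answer, reply with exactly: UNKNOWN"},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    answer = response.choices[0].message.content.strip()
    if answer == "UNKNOWN":
        # The model signalled it can't answer; use an alternative resource.
        return fetch_from_other_source(question)
    return answer

print(ask("What did Joe Biden do during 2022?"))
```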

Hey Cuzzo,

Something you could potentially do is preprocess the information, so that if a user asks, say, "What did Joe Biden do during 2022?", the message is first run through some sort of database with updated news; if it finds anything related to that topic, it can inject that into the prompt.

This might be hard, and I don't know how easy it would be to do, but I'm just throwing out ideas. It's definitely possible. You just have to take the user's message and send it somewhere else before sending it through GPT-4. Then, when it hits GPT-4, you inject the information about the events the user was asking about into the prompt with string literals. Something like the sketch below.
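
A rough sketch of that flow, assuming the openai Python client; `search_news_db()` is a hypothetical placeholder for your updated news database:

```python
# Run the user's message through a (hypothetical) news store first, then
# inject any hits into the prompt before sending it to the model.
from openai import OpenAI

client = OpenAI()

def search_news_db(query: str) -> list[str]:
    # Placeholder: query your updated news database / search index here.
    return ["2022-08: Joe Biden signed the Inflation Reduction Act."]

def answer_with_context(user_message: str) -> str:
    snippets = search_news_db(user_message)
    # Inject the retrieved facts into the prompt with string literals,
    # exactly as described above.
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        f"Use the following recent facts if they are relevant:\n{context}\n\n"
        f"Question: {user_message}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer_with_context("What did Joe Biden do during 2022?"))
```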

Thanks. Actually, right after I made this post I developed a method that lets me do something like that. It just adds a little delay to the response given to the user, but it is a nice approach.

I just took a deep dive into understanding these models, and I realised that returning some type of confidence score, depending on the answer given by the model, would actually be more appropriate.

(As devs we can also do this with some post-processing, but if the API were able to give us both the answer and the confidence score, it would probably already be optimized.)
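
One way to approximate this in post-processing today is with token log probabilities, which the chat completions endpoint can return via the `logprobs` option (availability depends on model and API version). A crude sketch:

```python
# Derive a rough confidence score in post-processing from token log
# probabilities. Averaging per-token probabilities is only a crude proxy
# for answer-level confidence, not a calibrated score.
import math
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    logprobs=True,
)

choice = response.choices[0]
token_logprobs = [t.logprob for t in choice.logprobs.content]

# Geometric-mean token probability as a crude confidence estimate.
confidence = math.exp(sum(token_logprobs) / len(token_logprobs))

print(choice.message.content)
print(f"approximate confidence: {confidence:.2f}")
```

A well-calibrated, answer-level confidence score would still be better coming from the API itself, which is the point of the suggestion above.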

You can use Google NLP for pre-processing; they have different sentiment analysis tools that might help with what you want.
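
A minimal sketch of that pre-processing step with the Google Cloud Natural Language client (requires the google-cloud-language package and configured GCP credentials):

```python
# Pre-process a user message with Google Cloud Natural Language
# sentiment analysis before passing it on to the model.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def message_sentiment(text: str):
    document = language_v1.Document(
        content=text,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    result = client.analyze_sentiment(request={"document": document})
    # score: -1.0 (negative) to 1.0 (positive); magnitude: overall strength.
    return result.document_sentiment.score, result.document_sentiment.magnitude

score, magnitude = message_sentiment("I love this new API feature!")
print(f"score={score:.2f}, magnitude={magnitude:.2f}")
```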

I made something way easier, but I will take a look at that.