I'm very excited about the ChatGPT endpoint and I'm in the process of converting my app from text-davinci-003 to gpt-3.5-turbo (or the stable snapshot, "gpt-3.5-turbo-0301").
With text-davinci-003 it was easy to prompt for only the required output, without any added "chat" text around it.
For example, something like:
Generate alternative article titles based on this existing title "How to learn to code":
Would return something like:
1. Learning to code in your spare time
2. Ways to learn to code without quitting your job
etc.
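For reference, the old completions-style call looked roughly like this (a sketch; the parameter values are illustrative, not my actual settings):

```javascript
// Sketch of the legacy completions request body for text-davinci-003.
// The prompt alone steers the output; there is no chat scaffolding to strip.
const completionRequest = {
  model: 'text-davinci-003',
  prompt:
    'Generate alternative article titles based on this existing title "How to learn to code":',
  max_tokens: 256, // illustrative value
  temperature: 0.7, // illustrative value
};
```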
However, with the new messages format I'm getting a lot of extra "chat" text in the response.
For example, something like:
[
{
role: 'system',
content: "You're a fantastic copywriter, working for a client. You don't require a lot of direction."
},
{
role: 'assistant',
content: 'What type of content would you like me to write?'
},
{
role: 'user',
content: "I'd like help writing a list of blog topic ideas"
},
{
role: 'assistant',
content: 'What title would you like me to use as inspiration?'
},
{
role: 'user',
content: 'Generate alternative article titles based on this existing title "How to raise chickens in your garden at home".'
},
{
role: 'user',
content: 'Please return a list of blog topic ideas draft content with no extra text:'
}
]
The response starts or ends with things like: "Sure, here are some blog topic ideas based on 'How to raise chickens in your garden at home':" or "Let me know if you need any help with further content development!"
Any ideas on removing the "chat" so it just returns the output?
Hi @kitson.broadhurst,
I suggest adding additional text in your prompt along the lines of:
… Do not include pleasantries or explanations in your responses. Your responses should only contain alternative article titles formatted in a numbered list.
I’ve had success using the above method in other contexts.
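As an illustration (one option, assuming the instruction is appended to the system message; the variable names are mine), the constraint can be baked in like this:

```javascript
// Sketch: append the "no pleasantries" constraint to the system message.
const systemContent =
  "You're a fantastic copywriter, working for a client. You don't require a lot of direction. " +
  'Do not include pleasantries or explanations in your responses. ' +
  'Your responses should only contain alternative article titles formatted in a numbered list.';

const messages = [
  { role: 'system', content: systemContent },
  {
    role: 'user',
    content:
      'Generate alternative article titles based on this existing title "How to raise chickens in your garden at home".',
  },
];
```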
You can use gpt-3.5-turbo for completions as well.
I used:
{
role: 'system',
content: "You are an expert. Whatever user asks you, you reply by writing an article about it."
}
and then simply sent the user message.
You: How to learn to code?
AI:
Learning to code is a highly valuable skill that can open up lucrative career opportunities, improve problem-solving skills, and provide a plethora of creative outlets. However, many people feel intimidated by the prospect of learning to code. Fortunately, with the right mindset, approach and resources, anyone can learn to code.
Here are some tips to help you learn to code:
1. Choose a programming language to learn first: Some popular first coding languages include Python, JavaScript, and Ruby. Research which language suits your interests and goals.
2. Find a structured curriculum: Whether it is a textbook, an online course, or a bootcamp, finding a structured curriculum can help guide you through the learning process.
3. Practice, practice, practice: Consistent practice is essential when learning to code. Set aside time every day to work on coding projects and practice.
4. Join a community: Joining a coding community can help you get feedback on your code, ask questions, and get support from others who are also learning.
5. Learn from your mistakes: Coding is all about trial and error. Don't get discouraged if you make mistakes. Learn from them and keep moving forward.
6. Use online resources: There are countless free online resources available to help you learn to code, such as Codecademy, FreeCodeCamp, and Udemy.
7. Build projects: Building your own projects can help you apply what you've learned to real-world problems.
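Put together, the request body for the exchange above would look roughly like this (a sketch; only the model name and two messages matter here):

```javascript
// Sketch of the chat-completions request for the example above:
// a single system message plus the user's question, nothing else.
const chatRequest = {
  model: 'gpt-3.5-turbo',
  messages: [
    {
      role: 'system',
      content:
        'You are an expert. Whatever user asks you, you reply by writing an article about it.',
    },
    { role: 'user', content: 'How to learn to code?' },
  ],
};
```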
Hi @jacob3, just wanted to check on your suggestion: do you add that additional text in the system role, or is it included somewhere in a user or assistant message?
FWIW… I'm moving to a model where I just have gpt-3.5-turbo return everything as JSON. It still adds pleasantries around the JSON, lol, but at least I can find the task data I'm looking for in the response and parse it out. Some of the prompt techniques above seem like they might minimize the chat-style responses from the model, but I doubt they'll completely eliminate it. I've also found that avoiding the system prompt altogether and just feeding your entire prompt in as an initial user message works well, and helps ensure that the model always responds with the JSON format you've asked for.
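A minimal sketch of that parsing step (my own helper, not a library function): grab the first balanced JSON object or array in the reply and ignore the pleasantries around it.

```javascript
// Extract the first JSON object or array embedded in a chat reply,
// ignoring any pleasantry text before or after it.
function extractJson(reply) {
  const start = reply.search(/[[{]/);
  if (start === -1) return null; // no JSON-looking content at all
  const open = reply[start];
  const close = open === '{' ? '}' : ']';
  let depth = 0;
  for (let i = start; i < reply.length; i++) {
    if (reply[i] === open) depth++;
    else if (reply[i] === close) depth--;
    if (depth === 0) {
      try {
        return JSON.parse(reply.slice(start, i + 1));
      } catch {
        return null; // it only looked like JSON
      }
    }
  }
  return null; // unbalanced; the JSON was probably truncated
}
```

Note this naive depth counter can be fooled by braces inside JSON string values, so you'd want something more careful in production, but it handles the common "Sure, here's your JSON: {…} Let me know if you need more!" case.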
This advice comes after only two days of playing with gpt-3.5-turbo, so take it with a grain of salt…
After a weekend of playing with gpt-3.5-turbo, I feel like text-davinci-003 is way better at following instructions. If you need to parse the responses from GPT or don't want all the chit-chat, my current recommendation is to stick with text-davinci-003, which is a shame because gpt-3.5 seems so much more creative and clever. Trust me, I want to figure out how to ONLY use gpt-3.5 (it's not only better, it's cheaper too), but I'm just not there yet. I've been able to convert some of my prompts to gpt-3.5 in my current project but not others, and I'm still working on it. The "always respond with JSON" technique and avoiding the "system" prompt altogether seem promising.
The biggest breakthrough I've had so far is just ignoring the system prompt and passing all my instructions in via an initial "user" message. This is proving to be pretty reliable so far. You still need to watch how much conversation history you pass in, though, because too much of it will bias things away from your initial prompt.
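Concretely, that technique just front-loads everything you'd normally put in the system message into the first user message (a sketch; the function name, the history cap of six messages, and the prompt wording are all my own choices):

```javascript
// Sketch: skip the system role entirely and put all instructions
// into the first user message, then append recent conversation history.
function buildMessages(instructions, history, latestUserInput) {
  return [
    { role: 'user', content: instructions },
    // Keep history short so it doesn't bias the model away from the instructions.
    ...history.slice(-6),
    { role: 'user', content: latestUserInput },
  ];
}
```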