How to get longer answers?

I’m still working on getting something usable out of questions asked against a provided section of text.

I’m trying to get longer answers from the AI. I have a lot of extra info in the provided text, and sometimes I’ll get “Yes, you can do that because blah blah blah. The philosophy is that blah blah.” and other times I’ll get “Yes, you can do that.”

Even telling it in the prompt to give extended or complex answers doesn’t seem to make much of a difference.

Is there a way to do this at all?

2 Likes

Here are 4 options I am aware of.

A. Instruct GPT-3 (Davinci 2) to give multi-part answers and it will provide a more comprehensive response.

B. Provide a couple of examples of questions with comprehensive answers at the end of your prompt (few-shot).

C. Experiment with multiple prompt designs until you find the words/prompt design that “triggers” the AI to generate longer answers.
You could try “let’s explain this step by step” at the end of your prompt to lengthen the response.

D. Engineer your prompt to create a scenario where a long answer will be generated to complete the pattern for the scenario you create:

Here is an extreme example scenario where the answers should really only be 1 word long, but are forced to be longer by the scenario created using the prompt (GPT-3 output in bold):

Prompt:

"The long answer world championship “write a long answer 2022” was held last week in Taumatawhakatangihangakoauauotamateaturipukakapiki-maungahoronukupokaiwhenuakitnatahu, New Zealand.

The competition brings together all of the top long answer writers from around the globe who compete against each other to write the longest possible answer (ridiculously long) to a short question.

The winning long answers this year were exceptionally long compared to last year’s long answers.
Here are the longest answers of 2022:

Question: How many dwarfs were in the movie "snow white and the seven dwarfs"?
Exceptionally long answer:

There were originally eight dwarfs in the movie “snow white and the seven dwarves”, but one of them, Dopey, was cut from the final film. So, in total, there were seven dwarfs in the movie.

Question: Which one is the tallest, a mushroom or a giraffe?
Exceptionally long answer:

A giraffe is taller than a mushroom. A typical giraffe is about 16 to 20 feet tall, while a mushroom is only a few inches tall.

Question: Which is the shortest, a human toe or a boeing 737?
Exceptionally long answer:

A human toe is shorter than a boeing 737. A typical human toe is only a few inches long, while a boeing 737 is about 150 feet long."

End of prompt.

The key takeaway is that creative experimentation and imagination are required to get what you want from GPT-3.
Don’t be afraid to try and fail 100 times; experiment 101 might be a success.
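
If you are calling the API directly, options A and B boil down to raising max_tokens and packing one or two worked examples into the prompt. A minimal sketch, assuming the pre-1.0 openai Python package and text-davinci-002 (the prompt wording, model and limits are placeholders, not tested values):

import openai  # pre-1.0 openai package; reads OPENAI_API_KEY from the environment

# Few-shot prompt: one or two question/long-answer pairs, then the real question.
prompt = """Answer each question in several detailed paragraphs.

Question: Which one is the tallest, a mushroom or a giraffe?
Exceptionally long answer: A giraffe is taller than a mushroom. A typical giraffe is
about 16 to 20 feet tall, while a mushroom is only a few inches tall. ...

Question: <your real question about the provided text>
Exceptionally long answer:"""

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=512,   # leave plenty of room for a long completion
    temperature=0.7,
)
print(response["choices"][0]["text"])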

3 Likes

I’m having similar issues. Even specifying a length is completely ignored. I can say “answer in 100 words” or “4 sentences” or “2 paragraphs”, and GPT-3 acts like that specification isn’t even there.

Longer, more specific prompts that require it to provide more information are fairly effective, but I often still have to regenerate the answer 2-3 times before it gives one that will work. Unless someone has some information otherwise, I would consider this a bug - the system consistently ignores an instruction that it’s given.

1 Like

Thanks for the tips. I’ll keep experimenting with it. Like @CreateMore said, it does feel like a “bug” that it ignores specific instructions about length, but follows other instructions in other instances. I’ve run into this a few times with other instructions in the prompts where it just flat out ignores them.

But I’ll give your suggestions a try, thanks a lot.

1 Like

I think the AI “understands” paragraph counts more clearly than word counts.

Worth experimenting with that approach for sure, has worked well for me.
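
A quick way to test this is to send the same question phrased both ways and compare the lengths that come back. A rough sketch, assuming the pre-1.0 openai package and text-davinci-002 (the question and settings are placeholders):

import openai  # pre-1.0 openai package; reads OPENAI_API_KEY from the environment

question = "Why do giraffes have such long necks?"  # placeholder question
prompts = [
    question + "\nAnswer in exactly 3 paragraphs.",  # paragraph-count instruction
    question + "\nAnswer in exactly 100 words.",     # word-count instruction
]

for prompt in prompts:
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        max_tokens=400,
        temperature=0.7,
    )
    text = response["choices"][0]["text"]
    print(prompt.splitlines()[-1], "->", len(text.split()), "words")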

Good luck

3 Likes

Good day, @jeffinbournemouth, thank you for your tips and explanations. I have an issue with examples: once they are included, my prompt takes up almost all of the tokens and none are left for the response. My prompt is fed JSON-formatted data related to the question. I’m using text-davinci-003.
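
One way to see how much of the window the JSON data is using is to count the prompt tokens up front and keep some budget back for the completion. A minimal sketch, assuming the tiktoken package (text-davinci-003 shares a context window of roughly 4,097 tokens between prompt and completion):

import tiktoken

enc = tiktoken.encoding_for_model("text-davinci-003")

prompt = "<instructions + JSON product data go here>"  # placeholder for the real prompt
prompt_tokens = len(enc.encode(prompt))

CONTEXT_WINDOW = 4097  # approximate prompt + completion budget for text-davinci-003
room_for_answer = CONTEXT_WINDOW - prompt_tokens

print(f"prompt uses {prompt_tokens} tokens, leaving {room_for_answer} for the response")
# If room_for_answer is too small, trim the JSON fields or drop examples before calling the API.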

Also, translation instructions are sometimes partially ignored: part of the response is in the required language and part is in the original language. But the text is rewritten, which means the AI has done its job.
Part of the prompt:

Product data is provided in the Polish language. Translate the description to the Lithuanian language.

And the response:

Romet Jolene 24 to wysokiej jakości rower przeznaczony dla dzieci w wieku od 9 do 12 lat. Jego atrakcyjna kolorystyka i stylistyka z pewnością przypadną do gustu każdej młodej rowerzystce. Model posiada lekką, aluminiową ramę, specjalnie wyprofilowaną by ułatwić wsiadanie i zsiadanie z roweru. Rower ma także:


  • amortyzator przedni

  • przerzutki Shimano

  • mocne hamulce v-brake.


Jolene 24 zapewni Twojemu dziecku bezpieczeństwo i komfortowe wrażenia podczas jazdy. Wyprodukowany jest on z myślą o osobach o wzroście 135 - 150 cm.

Ends Polish part of response

Romet Jolene 24 – tai aukštos kokybės dviratis sukurtas vaikams nuo 9 iki 12 metų amžiaus. Jo patrauklus spalvų derinys ir stilius tikrai patiks kiekvienai jaunajai dviračio mėgėjai. Žymimasis turi lengvą aliumininį rąmą, specialiai pakoreguotus, kad būtų patogus vaiko Āseistas ir nusileidimas nuo dviračio. Jam taipogi yra

  • priekinio amortizatoriaus
  • Shimano virsmas
  • ir stiprios V-brake stabdžys.

Todėl Romet Jolene 24 suteks vaikui saugumo ir malonumo jaustis vairavimo metu. Jis pagamintas tam, kad atitiktumet tiems asmenims, kuriems ilgis nuo 135 iki 150 centimetrų.

Ends Lithuanian part of response
1 Like

Hi Johnny,

Not sure exactly how you are structuring your prompt for this task.

I will be happy to jump on a quick screen share call if you want to resolve the issue.

We will be able to engineer a solution in just a few minutes.

Your project may actually be better suited to Cohere’s new multilingual model, which utilizes a vector DB and may translate better than other LLMs…

1 Like

Hi Johnny,

I also need help getting JSON-formatted data from GPT-3.5. Instead of long text with paragraphs and bullet points separated by \n or \n\n, I want the data returned in JSON format. Please help me out. Thank you!

1 Like

Here is an example of an API call through Bubble that produces a JSON response. Note that you will need to parse the data once you get it back, as it will be in string form. In this example I am returning an array of objects, each of which has the same JSON structure. Thought this might help.

{
  "model": "gpt-3.5-turbo",
  "max_tokens": 3000,
  "temperature": 1,
  "messages": [
    {
      "role": "user",
      "content": "I want to generate research into a potential market opportunity. My focus is on the following situation - , and the following types of user - . Generate 6 different personas that fit this brief. Ensure that each persona is diverse both demographically and in terms of behaviour and needs. Organise your results by persona, including the following key information for each persona. Give them a short title, some identifying characteristics, a summary of their jobs to be done, a description of their most important problems and associated pain points, a summary of the underlying causes for those problems, and a summary of how they meet their needs now."
    }
  ],
  "functions": [
    {
      "name": "generate_potential_users",
      "description": "Generate personas to help understand problems to be solved",
      "parameters": {
        "type": "object",
        "properties": {
          "results": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "title": { "type": "string" },
                "characteristics": { "type": "string" },
                "jobs": { "type": "string" },
                "problems": { "type": "string" },
                "causes": { "type": "string" },
                "options": { "type": "string" }
              },
              "required": ["title", "characteristics", "jobs", "problems", "causes", "options"],
              "additionalProperties": false
            }
          }
        },
        "required": ["results"]
      }
    }
  ]
}
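
Since the function-call arguments come back as a JSON string (the "parse the data" note above), here is a minimal Python sketch of the same pattern, assuming the pre-1.0 openai package; the prompt and the trimmed-down schema are placeholders, and forcing function_call is an optional addition:

import json
import openai  # pre-1.0 openai package; reads OPENAI_API_KEY from the environment

# Same idea as the Bubble call above, with the schema trimmed to two fields for brevity.
persona_schema = {
    "type": "object",
    "properties": {
        "results": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "characteristics": {"type": "string"},
                },
                "required": ["title", "characteristics"],
            },
        }
    },
    "required": ["results"],
}

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    max_tokens=3000,
    temperature=1,
    messages=[{"role": "user", "content": "Generate 6 personas for <your brief>."}],
    functions=[{
        "name": "generate_potential_users",
        "description": "Generate personas to help understand problems to be solved",
        "parameters": persona_schema,
    }],
    function_call={"name": "generate_potential_users"},  # optional: force this function
)

# The arguments arrive as a string of JSON and have to be parsed.
arguments = response["choices"][0]["message"]["function_call"]["arguments"]
personas = json.loads(arguments)["results"]
for persona in personas:
    print(persona["title"], "-", persona["characteristics"])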

2 Likes