GPT-4 gives the same generic answers, with only the technical words substituted, for every topic I give it, even when my prompts are technically detailed.


Do you have an example prompt we can look at to help you diagnose issues you might be having?

Generate a GPT-4 prompt that will first generate the section headings for a paper discussing the given title and abstract, then write the extremely specific points to be covered in each section, and then generate each section of the paper based on the details it has produced itself.

Might try something like this to help it out…

PROMPT:

  1. Please generate an outline with comprehensive section headings for a research paper based on the given title and abstract.

Title: How to Prompt Better After Hours
Abstract: Paul Bellow has some late night ramblings with GPT-4 in order to help a forum user, an informal study of prompt engineering for good.

  2. After generating the section headings, create a list of highly specific subtopics to be covered within each section.

  3. Finally, elaborate on each subtopic by producing well-structured and detailed content for every section, ensuring a coherent and technical flow throughout the paper.

I would probably break it into three prompts, but keep it in a single API or ChatGPT “chain”…

Hope this helps!

1 Like

Thank you so much. I am trying various prompt types for this issue, especially a combination of persona and reverse prompt engineering. I will surely try your solution. But with the same prompts I am getting more satisfactory output (with very specific and technical variety) from GPT-3.5 Turbo.

1 Like

Don’t try doing this all at once. (A) you need to refine your outline. (B) You will exceed the context-window and lose your initial prompt. So:

  1. Start with a general actor prompt (“Act as an expert in …”). Include your Title and Abstract. Then tell ChatGPT to confirm. Give it your first request in the next prompt so you can easily edit and iterate.
  2. As your first request, ask for an outline in bullet points. Request to take the field into account, general ideas, and current controversies. Don’t be satisfied with the bland 5-paragraph essay format. Iterate this until it is really good. Then copy the final bullet points …
  3. … and replace the previous prompt. (You don’t want the entire discussion of the outline to clog up the prompt window.) Then ask it to write the first bullet point of the outline. Again, iterate. When you are happy with your first section, copy it somewhere locally and ask ChatGPT to summarize…
  4. … Then you go back to the second prompt, replace the first bullet point with the summary, and ask it to write out the second bullet point.

Through this, you compose your text in chunks small enough to keep the structure of your text in the context window. This is essential to keep the contents coherent. You can check the current number of tokens with OpenAI’s tokenizer tool. gpt-3.5-turbo has a context window of about 4,000 tokens and the base GPT-4 model about 8,000, but the usable prompt space is smaller, since the reply counts against the same window.
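If you want a quick sanity check before sending a chunk, a rough character-based estimate is often enough. Here is a minimal sketch in Python; the ~4-characters-per-token ratio and the limits used are rule-of-thumb assumptions, not official figures, so use a real tokenizer (e.g. tiktoken) when you need exact counts:

```python
# Rough token estimate: ~4 characters per token is a common rule of thumb
# for English text. For exact counts use a real tokenizer such as tiktoken.

def estimate_tokens(text: str) -> int:
    """Crude token estimate based on character count."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, limit: int = 4096, reply_budget: int = 1000) -> bool:
    """True if the prompt plus an expected reply should fit in the window.

    limit and reply_budget are illustrative defaults; set them to match
    the model you actually call and the reply length you expect.
    """
    return estimate_tokens(prompt) + reply_budget <= limit
```

If `fits_context` returns False, that is your cue to summarize the earlier sections before continuing.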

That’s how it’s done.

:slight_smile:

3 Likes

Thank you so much. I had tried this, more or less along the lines of what you said, and still the outputs were generic. I will surely try this again. GPT-3.5 Turbo gets into the depth, or vertical details, of the topic without so much steering, compared to GPT-4, in my case at least.

You can be much more specific when you ask for a specific output format such as JSON (although JSON is pretty expensive in tokens; you can do even better by compressing the key names, as below).

System prompt:
Analyze the scientific study, create a summary, and display it as the following JSON:
_1_sentence_each = _1se, _in_1_sentence = _1s, _in_2_sentence = _2s, _from_1_to_100 = _1_100

{
  "introduction": {
    "overview_of_topic_1s": "",
    "research_question_1s": "",
    "hypothesis_2s": "",
    "significance_of_study_1_100": 1,
    "research_objectives_1se": ["", ""]
  },
  "literature_review": {
    "theories_1se": ["", ""],
    "methods_1se": ["", ""],
    "findings_1se": ["", ""]
  },
  "discussion": {
    "interpretation_of_results_1se": ["", ""],
    "limitations_1se": ["", ""],
    "future_research_directions_1se": ["", ""]
  },
  "conclusion": {
    "main_findings_1se": ["", ""],
    "implications_1se": ["", ""]
  },
  "references": ["", ""]
}

===

User prompt:

the scientific paper content:

===

Of course this also works the other way around.

system prompt:

Create a scientific paper for the topic given by the user and present it in the following format (don’t write any explanation or introduction, just the JSON):

_1_sentence_each = _1se, _in_1_sentence = _1s, _in_2_sentence = _2s, _from_1_to_100 = _1_100

{
  "introduction": {
    "overview_of_topic_1s": "",
    "research_question_1s": "",
    "hypothesis_2s": "",
    "significance_of_study_1_100": 1,
    "research_objectives_1se": ["", ""]
  },
  "literature_review": {
    "theories_1se": ["", ""],
    "methods_1se": ["", ""],
    "findings_1se": ["", ""]
  },
  "discussion": {
    "interpretation_of_results_1se": ["", ""],
    "limitations_1se": ["", ""],
    "future_research_directions_1se": ["", ""]
  },
  "conclusion": {
    "main_findings_1se": ["", ""],
    "implications_1se": ["", ""]
  },
  "references": ["", ""]
}

user prompt:

How to prompt better after hours

===

This way you can do a json schema check and write a script that checks if you got everything you need.
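As a minimal sketch of such a check in Python: the `EXPECTED` map below mirrors the template above, but adjust it to whatever template you actually send, and `reply_text` stands for the model’s raw reply.

```python
import json

# Keys we expect in the model's reply, mirroring the JSON template above.
# Adjust this map to the template you actually use.
EXPECTED = {
    "introduction": ["overview_of_topic_1s", "research_question_1s", "hypothesis_2s",
                     "significance_of_study_1_100", "research_objectives_1se"],
    "literature_review": ["theories_1se", "methods_1se", "findings_1se"],
    "discussion": ["interpretation_of_results_1se", "limitations_1se",
                   "future_research_directions_1se"],
    "conclusion": ["main_findings_1se", "implications_1se"],
}

def missing_keys(reply_text: str) -> list[str]:
    """Return a list of 'section.key' paths absent from the model's JSON reply."""
    try:
        data = json.loads(reply_text)
    except json.JSONDecodeError:
        return ["<invalid json>"]
    missing = []
    for section, keys in EXPECTED.items():
        for key in keys:
            if key not in data.get(section, {}):
                missing.append(f"{section}.{key}")
    if "references" not in data:
        missing.append("references")
    return missing
```

If `missing_keys` returns anything, you know exactly which fields to re-request instead of regenerating the whole paper.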

And from that you can go deeper and prompt something for each category of the json.

For first category

user prompt:

Write 1000 words for the Introduction of the following research paper (no conclusion, there will be more text):

[include json here]

And for the next category first create a summary of the 1000 words you just got.

user prompt:

create a summary of 150 words from this introduction:

[include the introduction here]

And for the next category you send the JSON plus the summary of the last category, and ask for the next…
This should help to keep your context.
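The rolling loop described above can be sketched as follows. `build_section_prompt` is a hypothetical helper, the actual API call is left out, and the wording of the prompt is just one way to phrase it:

```python
# Each request carries the outline JSON plus a short summary of what was
# written so far, never the full text, so the context window stays small.

def build_section_prompt(outline_json: str, summary_so_far: str, section: str) -> list[dict]:
    """Assemble the chat messages asking for the next section of the paper."""
    user = (
        f"Outline:\n{outline_json}\n\n"
        f"Summary of the paper so far:\n{summary_so_far}\n\n"
        f"Write 1000 words for the '{section}' section "
        "(no conclusion, there will be more text)."
    )
    return [
        {"role": "system", "content": "You are an expert academic writer."},
        {"role": "user", "content": user},
    ]
```

You would pass the returned list as the `messages` of a chat-completion call, then summarize the reply and feed that summary into the next `build_section_prompt` call.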

Okay! That’s an interesting suggestion. I have not tried anything like this before. I will try it. Thank you so much! :heart_eyes:

1 Like