Using prompt elements as context with GPT-4?

Hi, I’m developing a system that processes medical journal abstracts, classifying them and extracting information. I feed GPT-4 a single abstract and then ask a series of ~40 questions, for example: “Does this study concern the use of norepinephrine as a vasopressor?” and similar questions regarding other drugs. I would like to be able to ask questions like “Does this study concern the use of any vasopressors other than the ones I asked about in the previous questions?” This does not work, and in general any attempt to use the prompt itself as context seems to fail for me.

Can anyone suggest a better way to approach this problem?

Hi and welcome to the Developer Forum!

How are you tracking the past interactions, bearing in mind the API is stateless? Any past prompts need to either be fed back to the model or they are forgotten.
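To make the statelessness concrete, here is a minimal Python sketch of keeping the conversation history yourself. The helper names are mine, and the actual API call (shown commented out, assuming the `openai` client and a model name) is left out so the history logic stands on its own:

```python
# The chat API is stateless: each request must carry the full message
# history, or earlier turns are simply forgotten by the model.

def make_history(system_prompt):
    """Start a new conversation with a system message."""
    return [{"role": "system", "content": system_prompt}]

def ask(history, question):
    """Append the user's question. The API reply would then be appended
    as an 'assistant' message so the next call retains it."""
    history.append({"role": "user", "content": question})
    # reply = client.chat.completions.create(model="gpt-4", messages=history)
    # history.append({"role": "assistant",
    #                 "content": reply.choices[0].message.content})
    return history

history = make_history("You classify medical journal abstracts.")
history = ask(history, "Does this study concern norepinephrine as a vasopressor?")
```

Nothing here is special to this problem; it is just the bookkeeping the API expects you to do on every call.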


Hi! I send a system prompt and then a user prompt with a medical abstract followed by a series of numbered questions like:

  1. Is this a clinical study concerning subarachnoid hemorrhage patients?
  2. Is this a prospective clinical study concerning subarachnoid hemorrhage patients?

One thing I would like to do is follow question (1) with a statement like: “If the answer to question (1) was ‘No’, skip ahead to question 15”, but using the prompt itself as context does not seem to work. Any suggestions on how to approach a problem like this would be greatly appreciated.

OK, you can’t put everything in one prompt. It’s the same as if a person came up to you and spoke 100 pages of a detailed technical manual without pausing, then asked you to do detailed work involving what was just said; you’d throw your hands up and cry “Wait a moment, please!”

The GPT LLMs use large neural networks, and they act very much like the speech centres of our own minds. So you give some context and you ask a question; that’s it. You ask one question and then wait for the reply. If you have any follow-up to that question, you send all of the context, questions, and replies up to that point back to the model with the new question appended to the end, and you continue like that. While doing this you need to keep the total within the model’s maximum token limit.
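The “maintain the token limits” step above can be sketched roughly like this. The helper is hypothetical, and it approximates tokens by word count; a real system would use a proper tokenizer (e.g. tiktoken) instead:

```python
def trim_history(messages, max_tokens=8000):
    """Drop the oldest user/assistant turns until the (approximate)
    token count fits the model's context window. The system message
    and the newest turns are always kept."""
    def approx_tokens(msgs):
        # Rough proxy: ~1 token per word. Replace with a real
        # tokenizer for anything production-grade.
        return sum(len(m["content"].split()) for m in msgs)

    system, rest = messages[:1], messages[1:]
    while rest and approx_tokens(system + rest) > max_tokens:
        rest = rest[1:]  # forget the oldest turn first
    return system + rest
```

Dropping whole turns from the front is the simplest policy; summarising old turns instead is a common refinement, but the shape of the loop is the same.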


Thanks for the feedback. I have gone back and forth between sending an abstract with a single list of questions, vs sending the abstract multiple times with one or a few questions each time. There are advantages and disadvantages to either approach. I’ll keep working along these lines. Thanks again.


Fair point, but this is actually possible. I’ve done somewhat the same with chemistry abstracts and found that I could ask anywhere up to 50 questions. You’ll need a prompt that looks something like this:

Answer the interview questions delimited by '###' based on the context delimited by '///'

///
[Insert abstract here]
///

###
[Insert list of interview questions here]
###

If you want to skip questions, you just have to add “if yes” or “if no” in front of them to denote which questions should be skipped and why.
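Putting the delimiter template and the skip prefixes together, a small assembly function might look like this. The function name and the sample skip wording are my own, not from the posts above:

```python
def build_prompt(abstract, questions):
    """Assemble the delimiter-style prompt described above:
    context fenced by '///', numbered questions fenced by '###'.
    Questions may carry 'if yes'/'if no' prefixes for skip logic."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        "Answer the interview questions delimited by '###' "
        "based on the context delimited by '///'\n\n"
        f"///\n{abstract}\n///\n\n"
        f"###\n{numbered}\n###"
    )

questions = [
    "Is this a clinical study concerning subarachnoid hemorrhage patients?",
    "If no to question 1, answer 'N/A': "
    "Is this a prospective clinical study?",
]
prompt = build_prompt("[Insert abstract here]", questions)
```

Keeping the question list as data like this also makes it easy to regenerate the prompt when you add, reorder, or reword questions.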

It’s worth noting that the models have the same amount of attention available no matter how many questions you ask, so more questions will lead to shorter and simpler answers.

No problem, you can ask lots of questions at once; you just get a lower-quality reply each time you add one. You mentioned medical, so I’m assuming quality is paramount.


I’ve tried saying “If the answer is ‘no’ skip to question 15”, and that didn’t work, but I’ll try again with your wording. Thanks!


I started out by asking the LLM to categorize the study type and the subject matter using a simple checklist, and for that yes/no answers are perfect. In fact I ask for that in the system prompt. Given the amazing power of these models, I’m now trying to expand so that the LLM itself can suggest new categories to add to the search and expand the range of research of interest. This raises new problems!


Always happy to help :laughing:

Playing around with wording can help a lot; the various versions of GPT are “unidirectional”, meaning they can only apply attention in one direction.

Unidirectional attention in language models is like reading a book from start to finish. You gather context and meaning from the pages you’ve read, but you can’t look ahead or revisit previous pages to adjust your understanding.
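That “one direction” can be pictured as a lower-triangular (causal) mask: token i may only attend to positions at or before i, never ahead. A toy illustration in plain Python:

```python
def causal_mask(n):
    """Lower-triangular attention mask for n tokens:
    mask[i][j] is True when token i may attend to token j,
    i.e. only positions at or before i (no looking ahead)."""
    return [[j <= i for j in range(n)] for i in range(n)]

mask = causal_mask(4)
# Row 0 (the first token) can only see itself; row 3 sees all four
# positions. This is why wording that puts the condition *before*
# the question tends to work better than wording that refers forward.
```

This is only the masking pattern, not a full attention implementation, but it is the reason instructions placed after the material they govern are harder for the model to act on.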

If you need something that can dig deeper into the papers, you may want to have a look at this post.


I got it to work!! I just had to be very clear that I was talking about the previous prompt questions and not the abstract. Then the LLM replied perfectly:

User: 28) Please list the names of all of the drugs and types of drugs mentioned in all of the previous prompt questions, whether or not they are mentioned in the abstract.

LLM: 28) Vasopressors, epinephrine, norepinephrine, vasopressin, phenylephrine, angiotensin II, inotropes, dobutamine, milrinone, digoxin, vasodilators, nimodipine.

