Poor quality responses from a trained LLM with PDF files

you know you can just send people a DM, right? :laughing:

Oops, turns out you can’t - you need to engage with the community a little more to reach trust level 1 :thinking:

I’m just too dumb to meet my goals on my own.

I’m sure that’s not true.

In the beginning, many topics seem impossible – later, you can’t believe how easy they are and you wonder why everyone doesn’t understand them.

I recall learning to drive a manual (stick shift) car many years back. I swore I’d never be able to learn it, that it was just too difficult and my brain/hand coordination just didn’t work that way. Years later I got into a stick-shift car, started driving, and didn’t even realize it was a manual. My brain didn’t even register the difference because it was no different than an automatic after I had become so familiar.

In short – please give yourself more credit. I believe in you.

I should also mention that I’ve been fortunate in my life’s work, to the point I have more money than I need to be comfortable – and I do not take paying work.

However – If you are willing to put the effort in to learn, I am willing to help you get through this.

Let’s start with your prompt. Can you take the examples I’ve shown you and try to convert your prompt to match?

I’ll give you an example of how you can get started. Please don’t use this; it’s only for purposes of helping communicate with you and get you going:


# Task: Use the attached data to select 5 unique exam questions and create an example narrative for the questions to make them more relatable.

## Rules:
- Write 5 unique scenario-based questions that test knowledge application.
- IMPORTANT: Each question can only have one answer: A, B, C, or D.
- IMPORTANT: No question can have multiple answers, or "None of the Above" or "All of the Above", or any similar multi-answer.
- Include explanations for both correct and incorrect options.
- Provide a citation (up to 10 words) from the context.
- Include the related chapter and/or section number.
- Format each question set as a single line of text, separated by "|", for example: Question | Option A | Option B | Option C | Option D | Answer | Explanation | Citation | Chapter

## Style:
- Base questions on realistic scenarios.
- Write clear and concise questions and options.
- Provide thorough explanations that address both correct and incorrect choices.

## Examples:
- (this is not a valid example; however I'm using what you have provided) -- What can be the colors of an apple? | Red | Blue | Green | Yellow | AC | explanation | Citation | Chapter 2.5.1
- (continue examples)


Will this help you get started?   :slight_smile:
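Since the prompt asks for each question set on a single pipe-delimited line, it's easy to post-process the output with a few lines of Python. This is a sketch of my own (not from the thread); the field names are labels I chose to mirror the format rule:

```python
# Hypothetical sketch: parse one pipe-delimited question set produced by
# the prompt above into named fields. Field names mirror the format rule:
# Question | A | B | C | D | Answer | Explanation | Citation | Chapter
FIELDS = ["question", "option_a", "option_b", "option_c", "option_d",
          "answer", "explanation", "citation", "chapter"]

def parse_question_line(line):
    parts = [p.strip() for p in line.split("|")]
    if len(parts) != len(FIELDS):
        # The model sometimes drops or merges fields; fail loudly so the
        # bad line can be regenerated instead of silently saved.
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

row = parse_question_line(
    "What color is a ripe banana? | Red | Blue | Green | Yellow "
    "| D | A ripe banana is yellow. | 'bananas ripen to yellow' | Chapter 1.2"
)
print(row["answer"])  # D
```

Validating each line this way also catches the rule violations mentioned above (e.g. a multi-letter answer like "AC") before they pile up in your output file.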


Your passion and willingness to help me through this make me feel bad for wanting to take a shortcut.

I have a feeling that either my setup is broken somewhere or the text file is badly formatted, because when I ask simple questions like “How many chapters are there in the context? List them,” the GPT gives wrong answers.

If you would drop me an email, I’m willing to send you my Python script, the study guide txt, etc.

@DevGirl I’ve managed to get good results using your recommended prompt layout. This time I skipped the Python scripts and simply used OpenAI’s Playground, with parts of the study guide in the system message. Thanks!


@larissapecis @Diet Thank you both for your help so far!

You’re welcome.

By the way, the prompt is the most important piece of code you will develop to get what you want from a conversational LLM. GPT-4 is excellent at building prompts.

Try to explain what you have, and give it a fairly structured example of the outcome you are expecting and the type of data available. It will certainly give you helpful tips.

I’ve managed to get good results using your recommended prompt layout.

I am very happy to hear about your progress.

Based on your reply, I think we’ve stumbled on another issue. You mentioned that you had the original prompt in Python. This tells me that you may not have spent a lot of time iterating different attempts/variations.

If you don’t get the answer you expect in the first 10 prompt attempts, that’s nothing to get discouraged about. Remember that LLMs (and generative ML) do not work on precise, deterministic logic. Also remember the formula I provided, showing how prompts become more and more difficult with certain added components.

For that reason, it’s important to feel okay about iterating different prompts, sometimes for hours, in situations where you use embedded data and very strict rules that expect a more deterministic output.

I also suspect that my other advice may not have helped because you were not at a point where it made sense (yet). Therefore, I’d like to make one final suggestion:

Take each of my posts above and save them. As you get further along, re-read them again. See if they eventually make more sense. Because as you reach the point where they do make more sense, they will help save you a lot of time learning :slight_smile:

Also remember that LLMs are not good with simple computational/logic (Boolean) functions because they don’t operate in this manner. Therefore, when you ask how many words are in a text, etc., it’s perfectly okay if you get a wrong answer.

This is the reason many of us blend code with LLMs, rather than depending on LLMs alone. In simple terms, LLMs are a language interface/middleware: good at semantic interpretation but not at what programming languages are intended to accomplish.
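To make the "blend code with LLMs" idea concrete, here is a minimal sketch of my own: exact counting and lookup are done in plain Python, and the model would only ever be asked the semantic questions. The `"Chapter "` heading convention is an assumption for illustration, not something from this thread:

```python
# Deterministic work belongs in code, not in a prompt. These helpers
# answer "how many words?" and "how many chapters?" exactly, every time,
# instead of asking the model and hoping it counts correctly.
def word_count(text: str) -> int:
    # Same input -> same answer, which an LLM cannot guarantee.
    return len(text.split())

def chapter_count(text: str, marker: str = "Chapter ") -> int:
    # Assumes chapter headings start lines with "Chapter " (illustrative).
    return sum(1 for line in text.splitlines() if line.startswith(marker))

guide = "Chapter 1 Basics\nsome text\nChapter 2 Advanced\nmore text"
print(word_count(guide), chapter_count(guide))  # 10 2
```

In a real pipeline you would compute these facts in code and, if needed, pass them *into* the prompt ("the guide has 12 chapters: …"), rather than asking the model to derive them.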

Regarding your email request – I will be happy to email you, take a look at your script, and give you additional ideas (again, no charge; I’m only interested in helping).


Hi DevGirl, I have had decent success using the Playground with a prompt that instantly gets the GPT to start producing the content I need. However, I have a few problems:

  1. The maximum is only 4096 tokens, so I cannot just make one prompt for 100 sets of quizzes, walk away, and come back later to process the output. It can only produce about 15 sets of quizzes per response, which means I have to remain seated for 1–2 minutes while the response completes before entering “Continue” to request more quizzes. This is VERY time consuming. Is there a neat way to automate it, such as a Python script that makes the prompts, checks the responses, and saves the output to files, instead of copy/pasting from the Playground webpage? The Python setup I previously had did not allow me to iterate at all: the GPT could not continue conversations, so every prompt was treated as an individual one with no relation to the previous one.

  2. Sometimes, if I don’t notice a problem with GPT’s response, telling it to “Continue” means it will repeat those problems in subsequent quizzes. I notice that the more times I ask it to continue, at times adding feedback to fine-tune its response, the more the GPT seems to start forgetting some of the original rules and constraints. This leads to a vicious cycle of having to remind GPT of the constraints, which in turn causes it to forget others.

  3. I’m quite surprised by how the “turbo” version of the latest GPT-4 can be so forgetful, or silly, such as mistaking Section 3.5.1.6 for 3.15.6, or erroneously indicating the answer as “ACE” instead of “ACD” (there’s no option E!). Is there something I did wrong?
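On point 1: the "GPT could not continue conversations" symptom usually means the script was sending each prompt without the earlier turns. Chat-style API calls are stateless; you carry the history yourself by resending the message list each time. Here is a hedged sketch of that loop, assuming the `openai` Python package; the model name, batch count, and prompt strings are placeholders, not values taken from this thread:

```python
# Hypothetical sketch: automate the "Continue" loop by keeping the full
# conversation history across API calls and writing each batch to a file.
SYSTEM_PROMPT = "...your full prompt with rules, style, and examples..."

def initial_messages(first_request):
    # Chat APIs are stateless: the history lives in this list, not server-side.
    return [{"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": first_request}]

def extend_conversation(messages, assistant_reply, next_request):
    # Append the assistant's turn, then the follow-up, so the next call
    # sees prior output instead of treating each prompt as unrelated.
    messages.append({"role": "assistant", "content": assistant_reply})
    messages.append({"role": "user", "content": next_request})
    return messages

def run_batches(n_batches=7, model="gpt-4-turbo"):
    # Import here so the helpers above are usable without the package.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    messages = initial_messages("Create the first 15 question sets.")
    for batch in range(n_batches):  # ~7 batches of 15 -> ~100 sets
        resp = client.chat.completions.create(model=model, messages=messages)
        answer = resp.choices[0].message.content
        with open(f"quiz_batch_{batch}.txt", "w") as f:
            f.write(answer)
        extend_conversation(messages, answer, "Continue with 15 more.")
```

Note the trade-off relevant to points 2 and 3: resending the whole history is what lets the model "remember," but as the history grows the early rules compete with more and more text. A common mitigation is to restart with a fresh `initial_messages()` every few batches rather than one ever-growing thread.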