you know you can just send people a DM, right?
Oops, turns out you can't - you need to engage with the community a little more to reach trust level 1
I'm just too dumb to meet my goals on my own.
I'm sure that's not true.
In the beginning, many topics seem impossible; later, you can't believe how easy they are, and you wonder why everyone doesn't understand them.
I recall learning to drive a manual (stick-shift) car many years back. I swore I'd never be able to learn it, that it was just too difficult and my brain/hand coordination just didn't work that way. Years later I got into a stick-shift car, started driving, and didn't even realize it was a manual. My brain didn't even register the difference, because after I had become so familiar it was no different from an automatic.
In short: please give yourself more credit. I believe in you.
I should also mention that I've been fortunate in my life's work, to the point that I have more money than I need to be comfortable, and I do not take paying work.
However, if you are willing to put in the effort to learn, I am willing to help you get through this.
Let's start with your prompt. Can you take the examples I've shown you and try to convert your prompt to match?
I'll give you an example of how you can get started. Please don't use this as-is; it's only meant to communicate the layout and get you going:
# Task: Use the attached data to select 5 unique exam questions and create an example narrative for the questions to make them more relatable.
## Rules:
- Write 5 unique scenario-based questions that test knowledge application.
- IMPORTANT: Each question must have exactly one correct answer: A, B, C, or D.
- IMPORTANT: No question may have multiple correct answers, "None of the Above", "All of the Above", or any similar multi-answer option.
- Include explanations for both correct and incorrect options.
- Provide a citation (up to 10 words) from the context.
- Include the related chapter and/or section number.
- Format each question set as a single line of text, separated by "|", for example: Question | Option A | Option B | Option C | Option D | Answer | Explanation | Citation | Chapter
## Style:
- Base questions on realistic scenarios.
- Write clear and concise questions and options.
- Provide thorough explanations that address both correct and incorrect choices.
## Examples:
- (this is not a valid example; however I'm using what you have provided) -- What can be the colors of an apple? | Red | Blue | Green | Yellow | AC | explanation | Citation | Chapter 2.5.1
- (continue examples)
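Since the rules above demand exactly one answer (A-D) and a fixed pipe-delimited layout, it can help to check each generated line in code before you use it, so a bad batch is caught immediately instead of discovered later. A minimal hypothetical checker (the field layout follows the format line in the Rules section; adjust if your format differs):

```python
# Fields in the single-line output format, per the Rules section:
# Question | Option A | Option B | Option C | Option D | Answer | Explanation | Citation | Chapter
NUM_FIELDS = 9
VALID_ANSWERS = {"A", "B", "C", "D"}

def validate_line(line):
    """Return a list of rule violations for one pipe-delimited question set.

    An empty list means the line passes all checks.
    """
    parts = [p.strip() for p in line.split("|")]
    errors = []
    if len(parts) != NUM_FIELDS:
        errors.append(f"expected {NUM_FIELDS} fields, got {len(parts)}")
        return errors  # field positions are unreliable past this point
    if parts[5] not in VALID_ANSWERS:
        errors.append(f"answer must be exactly one of A-D, got {parts[5]!r}")
    if not all(parts):
        errors.append("one or more fields are empty")
    return errors
```

Running every response through a checker like this is also a cheap way to decide whether a batch needs to be regenerated.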
Your willingness to help me through this makes me feel bad for wanting to take a shortcut.
I have a feeling that either my setup is broken somewhere or the text file is badly formatted, because when I ask simple questions like "How many chapters are there in the context? List them," the GPT gives wrong answers.
If you would drop me an email, I'm willing to send you my Python script, the study guide .txt, etc.
@DevGirl I've managed to get good results using your recommended prompt layout. This time I skipped the Python scripts and simply used OpenAI's Playground, putting parts of the study guide in the system message. Thanks!
@larissapecis @Diet Thank you guys for your help too so far!
You're welcome.
By the way, the prompt is the most important piece of code you will develop in order to get what you want from a conversational LLM. GPT-4 is excellent at building prompts.
Try to explain what you have, and give it a fairly structured example of the outcome you are expecting and the type of data available. It will certainly give you helpful tips.
I've managed to get good results using your recommended prompt layout.
I am very happy to hear about your progress.
Based on your reply, I think weāve stumbled on another issue. You mentioned that you had the original prompt in Python. This tells me that you may not have spent a lot of time iterating different attempts/variations.
If you don't get the answer you expect in the first ten prompt attempts, that's nothing to get discouraged about. Remember that LLMs (and generative ML) do not work on precise, deterministic logic. Also remember the formula I provided, showing how prompts become more and more difficult as certain components are added.
For that reason, it's important to feel okay about iterating different prompts, sometimes for hours, in situations where you use embedded data and very strict rules that expect a more deterministic output.
I also suspect that my other advice may not have helped because you were not at a point where it made sense (yet). Therefore, I'd like to make one final suggestion:
Take each of my posts above and save them. As you get further along, re-read them and see if they make more sense. When they do, they will save you a lot of time learning.
Also remember that LLMs are not good at simple computational/logical (Boolean) functions, because they don't operate in this manner. Therefore, when you ask how many words are in a text, etc., it's perfectly normal to get a wrong answer.
This is the reason many of us blend code with LLMs, rather than depending on LLMs alone. In simple terms, LLMs are a language interface/middleware: good at semantic interpretation, but not good at what programming languages are intended to accomplish.
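To make the blending idea concrete: counting and listing chapters is a job for plain code, not the model. A small sketch, assuming your study guide uses headings like "Chapter 3: Some Title" (the regex is an assumption; adjust the pattern to match your file):

```python
import re

def list_chapters(text):
    """Deterministically find chapter headings instead of asking the model.

    Assumes headings look like 'Chapter 3: Some Title' at the start of a line.
    Returns a list of (number, title) pairs.
    """
    return re.findall(r"^Chapter\s+(\d+)[.:]?\s*(.*)$", text, flags=re.MULTILINE)

guide = "Chapter 1: Basics\nsome text\nChapter 2: Advanced Topics\nmore text\n"
chapters = list_chapters(guide)
print(f"{len(chapters)} chapters:", [num for num, _ in chapters])
# → 2 chapters: ['1', '2']
```

Code like this answers the counting question exactly, and you can then pass the verified chapter list to the LLM as context if needed.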
Regarding your email request: I will be happy to email you, take a look at your script, and give you additional ideas (again, no charge; I'm only interested in helping).
Hi DevGirl, I have had decent success with using the Playground with a prompt that would instantly get the GPT to start producing the content I need. However, I have a few problems:
The maximum output is only 4,096 tokens, so I cannot just make one prompt for 100 sets of quizzes, walk away, and come back later to process the output. It can only produce about 15 sets of quizzes per response, which means I have to remain seated for about 1-2 minutes while each response completes, before entering "Continue" to ask GPT to make more quizzes. This is VERY time-consuming for me. Is there a neat way to automate this, such as a Python script that sends the prompts, checks the responses, and saves the output to files, instead of having to copy/paste from the webpage (Playground)? The Python script setup I previously had did not allow me to iterate at all; the GPT in that setup could not continue conversations, as if every prompt were an individual one with no relation to the previous one.
Sometimes, if I didn't notice a problem with GPT's response, telling it to "Continue" means it will repeat those problems in subsequent quizzes. I've noticed that the more times I ask it to continue, at times adding some feedback to fine-tune its response, the more the GPT seems to start forgetting some of the original rules and constraints. This leads to a vicious cycle of having to remind GPT of the constraints, which in turn causes it to forget some of the others.
I'm quite surprised that the "turbo" version of the latest GPT-4 can be so forgetful, or silly, such as mistaking Section 3.5.1.6 for 3.15.6, or erroneously giving the answer as "ACE" instead of "ACD" (there is no option E!). Is there something I have done wrongly?
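On the automation question above: one possible shape for such a script keeps a single growing message history (so each "Continue" has full context, unlike a setup that sends isolated prompts) and restates the key rules on every turn to fight the forgetting you describe. This is only a sketch under assumptions: it uses the standard Chat Completions REST endpoint, the model name is a placeholder you would swap for whichever GPT-4 variant you have access to, and batch sizes are illustrative:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def ask(messages, model="gpt-4-turbo"):
    """Send the full message history to the API and return the reply text.

    Assumes OPENAI_API_KEY is set in the environment; the model name is a
    placeholder for whatever GPT-4 variant you are using.
    """
    body = json.dumps({"model": model, "messages": messages}).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def collect_quizzes(system_prompt, batches, outfile="quizzes.txt", send=ask):
    """Request question sets batch by batch, unattended.

    Keeps one growing history so each 'Continue' has full context, restates
    the key rules every turn so they are not forgotten, and appends each
    batch to a file so nothing needs to be copy/pasted from the Playground.
    """
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Generate the first 15 question sets."},
    ]
    all_lines = []
    for _ in range(batches):
        reply = send(messages)
        lines = [l for l in reply.splitlines() if l.strip()]
        all_lines.extend(lines)
        with open(outfile, "a", encoding="utf-8") as f:
            f.write("\n".join(lines) + "\n")
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content":
            "Continue with the next 15 sets. Remember: exactly one answer "
            "(A, B, C, or D) per question, pipe-delimited, one set per line."})
    return all_lines
```

Because the rule reminder is re-sent on every turn, drift like "ACE" answers tends to show up less often; pairing this loop with a per-line format check lets you regenerate a bad batch automatically instead of noticing it later.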
Hi, I have been facing the same issue recently with regards to optimizing my documents for use in LLMs.
Following your ideas, I have hit a roadblock:
Are there tools or pointers for how to format/modify the original file? Where should I go next?
Thank you