Need Help With Prompts? Ask me*

Yes, that’s exactly how it works.

1 Like

Josh, what would be the best fine-tuning and prompt-input training to get real answers for a personal FAQ? For example, knowing how to answer questions about my age and other personal details?
It looks like you are quite advanced at this. Can you help?

I’ve added more and more examples to this prompt, but the AI is still stubborn. What did I do wrong?

Evaluate the root cause in the completion field for a notebook computer bug as "NG", "Poor", "Average", "OK", or "Good".
Example:{"prompt":"BV-driver cannot be uninstalled","completion":" VNP on 0.5.22 BIOS"}
Output:("Poor","""Solution given "BIOS 0.5.22" but no description.""")
Example:{"prompt":"USB device on GC Board will lost while in use.","completion":" fixed on TBT FW rev 10"}
Output:("Poor","""No details in "TBT FW rev 10". """)
Example:{"prompt":"SUT show BSOD with stop code \"KMODE_EXCEPTION_NOT_HANDLED\"(DPPM: 600)","completion":" Fixed on QS sample"}
Output:("Poor","""Indicated "Fixed on QS sample" but no details.""")
Example:{"prompt":"CapsLock's LED always keep on when put  SUT into MS","completion":" Verify DVT1 + BIOS v0.2.6 + Driver v06, issue cannot duplicated."}
Output:("Poor","""Only "Verify DVT1 + BIOS v0.2.6 + Driver v06" does not describe any reason.""")

{"prompt":"[ATS P1] SP13_LR_S4:SUT hang Dell logo w\/ circle (B1InitializeLibrary failed 0xc0000185) (DPPM:160)(VP 20H1 LR DDPM:100) -->","completion":" Fixed by BIOS updated."}

AI:("Good","""Solution given "BIOS updated" and issue was fixed.""")
Human: I gave you all the show-and-tell, but you’re still stubborn!
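For anyone hitting the same wall: one thing that often helps is making the final case end with the exact same cue the examples use, so the model has to complete the `Output:` line itself. A minimal sketch of assembling such a few-shot prompt programmatically (the helper and the trimmed example data are illustrative, not from any official API):

```python
# Sketch: build the few-shot evaluation prompt from example tuples so every
# case, including the one to grade, follows an identical layout. The data
# below is a trimmed version of the examples in the post.

def build_prompt(examples, new_case):
    """examples: list of (bug, root_cause, rating, reason) tuples."""
    lines = ['Evaluate the root cause in the completion field for a '
             'notebook computer bug as "NG", "Poor", "Average", "OK", or "Good".']
    for bug, cause, rating, reason in examples:
        lines.append(f'Example:{{"prompt":"{bug}","completion":" {cause}"}}')
        lines.append(f'Output:("{rating}","{reason}")')
    bug, cause = new_case
    lines.append(f'{{"prompt":"{bug}","completion":" {cause}"}}')
    lines.append('Output:')  # end on the cue the model must complete
    return "\n".join(lines)

prompt = build_prompt(
    [("BV-driver cannot be uninstalled", "VNP on 0.5.22 BIOS",
      "Poor", "Solution given but no description.")],
    ("SUT hang at Dell logo", "Fixed by BIOS updated."),
)
```

Ending on `Output:` tends to keep the completion in the same tuple format as the examples. Separately, all four examples are rated `Poor`, so the model has no contrast to learn the scale from; adding one or two genuinely good root-cause examples may matter more than adding more `Poor` ones.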

Sorry, I had to translate from Portuguese. If you can communicate in English I can help you here, or email me at

Hi, I would not use fine-tuning for this.

Few-shot prompts on text-davinci-003 can be quite effective when written correctly.

I’d be happy to consult for you on this; email me or ask here.

Josh, what would be the best fine-tuning and prompt-input training to get real answers in a personal FAQ? For example, knowing how to answer my age and personal questions? Looks like you’re pretty advanced on this. Can you help?

Sure, feel free to give examples here, or email me for more detailed help.

I wanted to build a Q&A chatbot. I took OpenAI’s advice and made embeddings of my documents, and followed the exact steps of their Q&A-using-embeddings tutorial ( ).
Unfortunately, the header of the prompt, which is:
“Answer the question as truthfully as possible using the provided context, and if the answer is not contained within the text below, say ‘Sorry, I don’t know’”
is not working as expected: I receive a lot of “Sorry, I don’t know” answers, even though the search finds the right document.
Can you please give me some suggestions?
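For context, the tutorial’s pattern boils down to pasting the retrieved passages into the prompt between the header and the question. A minimal sketch of that assembly step, with the retrieval itself omitted and the helper name made up (`passages` stands in for whatever your embedding search returned):

```python
# Sketch of assembling the Q&A prompt from retrieved context, following the
# pattern described in the post above. Not the tutorial's exact code.

HEADER = ("Answer the question as truthfully as possible using the provided "
          "context, and if the answer is not contained within the text below, "
          "say \"Sorry, I don't know\"")

def build_qa_prompt(passages, question, max_chars=4000):
    # Concatenate passages until a rough size budget is hit, so the most
    # relevant text actually makes it into the prompt.
    context, used = [], 0
    for p in passages:
        if used + len(p) > max_chars:
            break
        context.append(p)
        used += len(p)
    return (f"{HEADER}\n\nContext:\n" + "\n\n".join(context)
            + f"\n\nQ: {question}\nA:")

prompt = build_qa_prompt(
    ["Our office opened in 2015 and employs 40 people."],
    "When did the office open?",
)
```

If the right passage really is present in the prompt and you still get “Sorry, I don’t know”, the usual suspects are a too-strict header wording, a temperature of 0 combined with ambiguous context, or the passage being cut off by the size budget before the relevant sentence.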

Not sure. Perhaps you don’t have enough examples.

Sorry, I only work with few-shot prompts and fine-tuning; I’ve never played with embeddings.

I assume you are actually putting the text you found into the prompt? Your post doesn’t make that clear.

If you are, can you give us an example of one prompt that comes back with the “Sorry, I don’t know” response?

Hi Josh, do you have tips on how best to include tones in prompts? (If I want to generate text in 3 different tones all together, as an example.) In my experience, multiple tones generate worse outputs than a single tone…thanks in advance.

Can you give me an example of what you mean by tone?

Short version: you likely need to put the 3 tones in 3 separate prompts to make it work best.

Hi Josh.
I’m trying to program a little game with ChatGPT.
Here’s my prompt. It kind of works, but after the first suggestion it makes, it sometimes starts forgetting that it is not allowed to use more than 2 words and uses 3 or more. Also, sometimes instead of saying “red fruit”, for example, it will say:
“Adjective: Red
Noun: Fruit”
And, in general, it’s not very creative in its suggestions.
Any idea on how to fix that? :slight_smile:

Let’s play a game. Here are the rules.

  • You have to make me guess a word. You are allowed to say only 2 words, one noun and one adjective, to describe the word I have to guess. For example: “red fruit” to make me guess “strawberry”, or “air vehicle” to make me guess “helicopter”.
  • The noun and adjective you use cannot share the same etymology as the word you want me to guess.
  • If I don’t give the right answer, you will say another combination of 2 words, one noun and one adjective, provided that they are different from the previous ones, but you still have to make me guess the same word as before.
  • If I say “level 2” then from now on you will use harder words to guess (less common words).
  • If I say “no animals” then from now on you will not be allowed to make me guess animals.

You can’t let the prompt grow in size; you have to restart every time with every new question.

Then it’s statistically probable that it will usually follow your directions.

Otherwise, a transformer is not made to keep following rules like this over a long conversation.
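A minimal sketch of that restart-every-turn approach for the game, assuming the chat-completions message format; the rules text is abbreviated and the actual model call is only indicated in a comment:

```python
# Sketch of the "restart every time" advice: rebuild the message list from
# scratch each turn, so the rules are always freshly in front of the model
# and accumulated history cannot dilute them.

RULES = (
    "Let's play a game. You must make me guess a word using exactly two "
    "words: one adjective and one noun, e.g. 'red fruit' for 'strawberry'."
)

def messages_for_turn(last_guess=None):
    msgs = [{"role": "system", "content": RULES}]
    if last_guess is not None:
        # Keep only the single most recent exchange instead of full history.
        msgs.append({"role": "user", "content":
                     f"My guess was '{last_guess}' and it was wrong. "
                     "Give another two-word clue for the same word."})
    return msgs

# Each turn would then be something like:
# openai.ChatCompletion.create(model="gpt-3.5-turbo",
#                              messages=messages_for_turn("apple"))
```

The trade-off is that the model loses memory of its earlier clues, so if the “different from the previous ones” rule matters, you would also need to list the already-used clues inside the user message.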

This is completely, and I mean completely, new to me. Where do I start to get the foundation right, so I can move forward but not in ignorance?
Thanks for offering your help to all. Jason

Hi Josh, I have to submit many prompts in order to obtain the paragraphs that will form a document. However, there is no way to keep a history of the responses, so I have to pass the previous context into the prompt every time. All the prompts must also be specific, since we are talking about producing a marketing document. Do you have any tips on how to write the prompts, both to get consistent answers and to avoid exceeding the token limit? Thanks in advance!

I would need a closer look at what you are doing to say for sure. In general, I would say use 3.5 or 4; it is smarter and might have the fine-tuning to skew the completions in the direction you need.

Let me try to explain myself as concisely as possible: through the prompt I ask the AI for an answer (which will be used as a paragraph in a document I will assemble at the end) based on the context and information I give in the prompt.
A simple example: knowing that company X is positioned in sector X, tell me who its possible competitors (or possible customers) are.
For the next prompt I might need to refer back to the response generated in that example. By doing so, however, I risk running out of tokens and getting truncated answers.
So I’d like some advice on how to reduce tokens. Any suggestions on how to get the most consistent and accurate answers would be great too.

Are you referring to the GPT-3.5 and GPT-4 models, or to the temperature?

Anyway, thank you for the quick answer and for the tips!

Again, I would need to see exactly what you are doing; private-message me here if you can, or search me out.

By 3.5 or 4 I am referring to the GPT model version, yes. And never set the temperature to 3.5 or 4 :slight_smile:

There are limited options if you want to preserve history and minimize tokens:

  1. Drop the oldest messages
  2. Use GPT to summarize old messages
  3. Use embeddings/vectors to only include relevant parts of history
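A minimal sketch of option 1, assuming the standard chat message format; the token count here is a rough word-based estimate (in practice you’d use a real tokenizer such as tiktoken):

```python
# Sketch of option 1: drop the oldest non-system messages until the history
# fits a rough token budget. The estimate below (words * 4 // 3) is a crude
# stand-in for a real tokenizer.

def trim_history(messages, budget=3000):
    def est_tokens(msg):
        return len(msg["content"].split()) * 4 // 3
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(map(est_tokens, system + rest)) > budget:
        rest.pop(0)  # drop the oldest message first
    return system + rest

history = [{"role": "system", "content": "You are helpful."}] + [
    {"role": "user", "content": "word " * 500} for _ in range(10)
]
trimmed = trim_history(history, budget=2000)
```

Option 2 keeps more meaning per token: instead of discarding the dropped messages outright, periodically replace them with a single message containing a model-written summary.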
1 Like