Hi Josh. I am trying to correct sentences into simple English using the command "correct to simple English". I use temperature 0 and top_p 1 so there is no variation. It works well, with each sentence coming out exactly how I want. But when I pass all the sentences together, to save on sending multiple API requests, it changes some of them in undesirable ways, taking the previous sentences into account, I reckon. Is there a way to batch-process the sentences without starting a new prompt for each one?
No, not really
It will always take into account the previous tokens
But at the same time, doing one request after another should, to my knowledge, cost the same or less in API fees
Or at least not more expensive
It might cost you in terms of compute cycles and network slowdown though
Thanks. What about forking the process at my end and sending several prompts at once? Would that violate any rules? Secondly, in the documentation it looks like you can send an array of strings (at least in the Linux examples), but I guess that would not change anything, I reckon?
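If I were to try the array form, I imagine it would look roughly like this (just a sketch against the Completions endpoint; the key and sentences are placeholders, and each prompt in the list should be completed independently, so the sentences would not see each other):

```python
# Sketch only: passing a list of strings as the "prompt" of a single
# Completions request, so each sentence is completed on its own.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

sentences = [
    "Correct to simple English: The feline reposed upon the rug.",
    "Correct to simple English: He perambulated towards the emporium.",
]

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=sentences,   # one HTTP request, several independent prompts
    temperature=0,
    top_p=1,
    max_tokens=64,
)

# Each returned choice carries an "index" pointing back at its prompt.
for choice in sorted(response["choices"], key=lambda c: c["index"]):
    print(sentences[choice["index"]], "->", choice["text"].strip())
```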
What would be a good prompt for solving a coding problem? I wanted to try fine-tuning it on Stack Overflow questions and answers, but I am not sure that would work well, as Stack Overflow data is very lengthy and it looks like it would use more tokens than my budget allows.
I want it to output answers the way ChatGPT solves coding problems: well-commented code followed by a short explanation. I have experimented a lot but it doesn't work. I've tried Codex, but it can't write the explanation.
Thank you.
as long as that is not one prompt it will work the way you want it to
Why not use ChatGPT?
i want to do it programmatically
Have you tried text-davinci-003?
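something along these lines can work as a plain prompt, no fine-tuning needed (just a sketch, the wording and the example problem are mine):

```
You are solving a coding problem. Reply with well-commented code,
then a short explanation after the code.

Problem: Write a Python function that reverses a string.

Answer:
```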
Hello Josh, I couldn't find a way to prompt batch-wise via the API. The playground answered that I could insert up to 10 prompts per batch. I assume it should be done with a JSON object. Can you give me a hint?
i would not trust what the model says, it makes things up, until you build a bubbling cascade informational system (i.e. self-awareness) to protect against this
yes i can pass you off to my programmer who knows how to do this, he does not speak english well but he is proficient in the openAI api
Or go on upwork and find one who knows the openAI api, bid high, some say they know it and really do not
My guess is that GPT-3 (i.e. text-davinci-003) will forget anything that appears more than ~4096 tokens ago.
For example, if I give GPT-3 a movie script that is too long and ask it to write the next line of dialogue, it will forget that the very beginning of the script introduced Ferdinand as having lost his arms and legs in World War 1, and GPT-3 might suggest “Ferdinand walks to the doorway, and grabs Pedro by the collar”.
I am hoping that someone will say I am wrong about this. I am often wrong, so maybe there is hope!
Yeah, this is how it works with the small 4096-token context window… What many do is summarize the scene, or whatever you're writing so far, as part of the prompt, to give the model more information on what to write and keep it from hallucinating new characters, etc. Sometimes lowering or raising the temperature can help too. Hope this helps… and welcome to the community!
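For example, a rough sketch of that pattern, reusing the Ferdinand script from above (the summary line carries the fact the model would otherwise have lost from its context):

```
Summary so far: Ferdinand lost his arms and legs in World War 1.
Pedro has just insulted him.

Recent script:
PEDRO: You never came back the same, old friend.

Write the next line of the script:
```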
Yes that’s exactly how it works
Josh, what would be the best fine-tuning and prompt-input approach to get real answers in a personal FAQ? For example, knowing how to answer my age and personal questions?
It seems you are quite advanced at this. Can you help?
I kept adding more examples to the prompt below, but the AI is still stubborn. What did I do wrong?
Evaluate the root cause in the completion field for a notebook computer bug as "NG", "Poor", "Average", "OK", or "Good".
Example:{"prompt":"BV-driver cannot be uninstalled","completion":" VNP on 0.5.22 BIOS"}
Output:("Poor","""Solution given "BIOS 0.5.22" but no description.""")
Example:{"prompt":"USB device on GC Board will lost while in use.","completion":" fixed on TBT FW rev 10"}
Output:("Poor","""No details in "TBT FW rev 10". """)
Example:{"prompt":"SUT show BSOD with stop code \"KMODE_EXCEPTION_NOT_HANDLED\"(DPPM: 600)","completion":" Fixed on QS sample"}
Output:("Poor","""Indicated "Fixed on QS sample" but no details.""")
Example:{"prompt":"CapsLock's LED always keep on when put SUT into MS","completion":" Verify DVT1 + BIOS v0.2.6 + Driver v06, issue cannot duplicated."}
Output:("Poor","""Only "Verify DVT1 + BIOS v0.2.6 + Driver v06" does not describe any reason.""")
{"prompt":"[ATS P1] SP13_LR_S4:SUT hang Dell logo w\/ circle (B1InitializeLibrary failed 0xc0000185) (DPPM:160)(VP 20H1 LR DDPM:100) -->","completion":" Fixed by BIOS updated."}
AI:("Good","""Solution given "BIOS updated" and issue was fixed.""")
Human: I give you all the show and tell but you're still stubborn!
Sorry i had to translate from Portuguese. If you can communicate in english i can help you here, or email me at joshbachynski@gmail.com
Hi i would not use fine tuning for this
few-shot prompts on text-davinci-003 can be quite intuitive when written correctly
i’d be happy to consult for you on this, email me or ask here joshbachynski@gmail.com
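for example, something like this (just a sketch, the facts and questions are placeholders you would swap for your own):

```
Answer questions about me using only the facts below.
If the answer is not in the facts, say "I don't know."

Facts:
- My name is Maria.
- I am 34 years old.
- I live in São Paulo.

Q: How old are you?
A: I am 34 years old.

Q: What is your favorite food?
A: I don't know.

Q: Where do you live?
A:
```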
Josh, what would be the best fine-tuning and prompt input training to get real answers in a personal FAQ? Example: knowing how to answer my age and personal questions? Looks like you're pretty advanced on this. Can you help?
Hello!
I wanted to build a Q&A chatbot. I took OpenAI's advice and made embeddings of my documents, and followed their Q&A-using-embeddings tutorial ( https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb ) step by step.
Unfortunately the header of the prompt, which is:
"""Answer the question as truthfully as possible using the provided context, and if the answer is not contained within the text below, say "Sorry, I don't know"

Context:
"""
is not working as expected: I receive a lot of "Sorry, I don't know" answers, even though the search finds the right document.
Can you please give me some suggestions?
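For reference, here is roughly how I am constructing the prompt, following the notebook (a simplified sketch; `retrieved_sections` stands in for the sections my embedding search returns, most relevant first):

```python
# Simplified sketch of the cookbook pattern: retrieved sections are joined
# into the "Context" block under the header, then the question is appended.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

HEADER = (
    "Answer the question as truthfully as possible using the provided context, "
    "and if the answer is not contained within the text below, say "
    '"Sorry, I don\'t know"\n\nContext:\n'
)

def answer(question: str, retrieved_sections: list[str]) -> str:
    # Each retrieved section becomes a bullet in the context block.
    context = "".join(f"\n* {section}" for section in retrieved_sections)
    prompt = f"{HEADER}{context}\n\nQ: {question}\nA:"
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,  # deterministic answers
        max_tokens=300,
    )
    return response["choices"][0]["text"].strip()
```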