Hi,
I’m trying to get two responses and print them out on the page, but sometimes one of the two responses comes back completely blank.
Here’s the output of the response:
Here are my Playground settings:
Note that this happens when I insert some offensive input (which is what I’m actually testing at the moment, as I’m trying to control the output).
Kind regards,
Nikola
I was unable to reproduce the behaviour above. That said, I have also noticed that the completion is sometimes empty when using instruct engines, especially when providing k-shot examples. Try using another (non-instruct) engine.
@nicholas.p.pezzotti - Unfortunately, I am sticking with the instruct engine. I get proper results most of the time, but when I use some very bad language, this happens in almost 20% of my attempts.
Your prompt is in need of improvement. GPT-3 has no idea that it is an AI. Remember that it is basically just a document completion engine (albeit an intelligent one). Your INSTRUCT prompts need to be more descriptive and qualitative.
Thank you @daveshapautomator for your message and suggestions. Please note that I only have issues when I use some strong language (cursing, racist or insulting words), so I’m not sure whether you tested with that or just with the regular prompt.
Out of curiosity, why do you use << END >> twice in the prompt and just once in the Stop sequence field?
It’s more likely to pick up on the << END >> tag if you repeat it.
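For illustration, here is a minimal sketch of that pattern using the legacy Python openai library (the prompt content and engine choice here are hypothetical, not your actual settings): the tag closes every few-shot example so the model learns to emit it, and the same string is passed once as the stop sequence.

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Hypothetical few-shot prompt: << END >> closes every example so the
# model learns to emit the tag itself; the stop parameter then cuts
# generation off as soon as it does.
prompt = (
    "Write a short, polite product description.\n\n"
    "Product: blue running shoes\n"
    "Description: Lightweight shoes built for everyday training.\n"
    "<< END >>\n\n"
    "Product: stainless steel water bottle\n"
    "Description:"
)

response = openai.Completion.create(
    engine="davinci-instruct-beta",
    prompt=prompt,
    max_tokens=64,
    stop=["<< END >>"],  # generation stops when the model emits the tag
)
print(response["choices"][0]["text"].strip())
```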
I guess I did not understand the importance of the foul language. I would create a filter to use beforehand that does sentiment analysis to detect whether a DESCRIPTION contains foul language. Or are you trying to make it write descriptions anyway?
@daveshapautomator - I am actually doing something else. I am allowing people to use strong language, since the AI might return a regular output anyway. Once I recognize that the output is not appropriate, I modify it. The problem is that I need to have the output in the first place so that I can modify it.
@m-a.schenk - it should be aligned with the best practices as we have an “OR” there:
Consider using regex or other methods to check for sensitive content in user inputs or API completions, making use of libraries such as bad-words. Upon detecting such words, you may wish to replace these with a stand-in (e.g., replacing swear words with symbols) or ask the user to submit different input information.
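A minimal sketch of that approach in Python, using a regex denylist (bad-words, as cited above, is an npm package; the word list here is a placeholder, and a maintained Python library such as better-profanity could fill the same role):

```python
import re

# Hypothetical denylist; in practice you would load a maintained word list.
BLOCKED_WORDS = ["badword1", "badword2"]
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, BLOCKED_WORDS)) + r")\b",
    re.IGNORECASE,
)

def is_clean(text: str) -> bool:
    """Return True if the text contains none of the blocked words."""
    return PATTERN.search(text) is None

def censor(text: str) -> str:
    """Replace each blocked word with asterisks of the same length."""
    return PATTERN.sub(lambda m: "*" * len(m.group(0)), text)

print(censor("this review contains badword1"))  # this review contains ********
```

The same functions can be run on user input before the API call, on the completion afterwards, or both.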
The quickest workaround would be to give the <|endoftext|> token a large negative bias to force generation. You can then post-process the generation to cut it after the first paragraph. (OpenAI API)
However, if you are getting a blank response, it is usually a sign that the task and expected response are unclear. You may want to investigate that.
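A minimal sketch of that workaround with the legacy Python openai library (the prompt string here is a hypothetical stand-in for your own):

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.Completion.create(
    engine="davinci-instruct-beta",
    prompt="Describe the product politely.\nProduct: blue shoes\nDescription:",
    max_tokens=100,
    # 50256 is the <|endoftext|> token id in the GPT-3 tokenizer; a bias
    # of -100 effectively bans it, so the model cannot end the completion
    # immediately and return an empty string.
    logit_bias={"50256": -100},
)

# With <|endoftext|> banned, the model keeps generating, so cut the
# output manually after the first paragraph.
text = response["choices"][0]["text"].strip()
print(text.split("\n\n")[0])
```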
I think @daveshapautomator has a good point there, in that the AI part is likely unclear. You should think of the model more as auto-completion that fills in how a text like this, on the web or in a book, would normally continue. You should also consider whether the expectations would be clear to a human: would you be able to put this text in front of most people and have them continue it reliably? I think the Prompt&AI prefixes are not clear.
You may want to format it more like website reviews, and make each line, whether it holds input information or an expected completion, explicit about what it is for (see the sketch at the end of this post).
As for the starting sentence, you may want to make it clearer that it is a task description, and you may want to state other expectations there, such as politeness.
The primary challenge you will likely face with this approach, though, is that the model starts copying reviews from the example products.
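For illustration, one possible review-style layout, a minimal sketch with hypothetical product names and wording:

```python
# Hypothetical review-style prompt: the opening sentence describes the
# task and the expected tone, and every line is labelled with what it
# contains.
prompt = (
    "Below are customer reviews of products on our website. "
    "Every review is polite and family-friendly.\n\n"
    "Product: wireless headphones\n"
    "Review: Great sound, and the battery lasts all day.\n"
    "<< END >>\n\n"
    "Product: ceramic coffee mug\n"
    "Review:"
)
```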
Thanks for pointing that out. I’ll try to tweak my prompts, and if that doesn’t work, I’ll most likely play around with the negative bias and see what happens.