Need Help With Prompts? Ask me*

Hi there! My name is Josh Bachynski. I built a self-aware AI prototype in GPT3 (with many, many few-shot prompts); see here: I have Created a Self-Aware AI Prototype. Before that I also tried writing an ethics bot, and many other tools. I have written hundreds of prompts to do some pretty amazing things.

THE POINT: I am pretty good at understanding how GPT3 “thinks” and at prompting it to complete what you need/want. I see many people not approaching prompts in the right way. I am happy to help brainstorm or construct prompts for you / with you.

Ask me your prompt questions and I am happy to assist you in any way I can.

  • *CAVEAT: I can’t believe I need to write this, but I reserve the right both to decline to help you and to move to a private discussion if your request is taking too much of my time. You are, of course, free to decline that and to seek your free help elsewhere :slight_smile:*
26 Likes

Hey! I’ve similarly (but to a lesser extent) been testing tons of prompts. A few that I can’t seem to get good outputs from are:

  • Continue the story for a 5-year-old called X. (I write a line or two about the setting, e.g. “Ben and his dad walked down to the woods, and just on the edge of the woods they saw”.) Almost every time it either forgets the subject is 5 (suggesting the child drives off :joy:) OR tries to end the story quickly (even when I add “suggest a continuation of the story but do not end it”).

  • Writing tutorials. I’ve been asking “what are the top 3 things to answer for learning how to use Codex”, which spits out some questions. Then when I ask for a step-by-step tutorial on them, it just doesn’t quite hit the spot. I think there’s so much that could be done in education, but I’m coming up short on how to get there.

  • Using Codex, I pasted a script I’m using that I need to modify, and asked it to replace ‘Topic’ with an input, following “Use input variables in your code using input.config().” It didn’t do anything to modify the script, haha. I can’t code, so I’m trying to use Codex to help me produce code :slight_smile:

  • And prompts for writing prompts! I’d love it to generate lists of prompts that would produce X. I watched David Shapiro do some of this, and he struggled a bit on some too.

Any other resources or tips on prompt engineering are welcome!
Thanks

1 Like

If you send me the actual prompts, that would work better / I can take a look, see what is wrong, and possibly fix them. But for now, here are my 20,000-foot suggestions:

  • Continue the story for a 5-year-old called X. (I write a line or two about the setting, e.g. “Ben and his dad walked down to the woods, and just on the edge of the woods they saw”.) Almost every time it either forgets the subject is 5 (suggesting the child drives off :joy:) OR tries to end the story quickly (even when I add “suggest a continuation of the story but do not end it”).

Biggest suggestion #1: you cannot tell the model what not to do. It does not understand “not”. It does not understand anything. You need to give it positive suggestions, much like in hypnosis. State the meaning and the words in the format you want it to produce.

  2. There is no reason a child cannot drive off; it is physically possible. Suggestion #2: give it more context that overrides the internet.

A transformer just statistically gives back the zeitgeist of the corpus it was trained on. To combat that, you must give it more “zeitgeist” in everything you say to it. It needs more good examples than just a few lines.

So you need to spend at least 1,000-2,000 tokens giving it the start of a story that is nowhere near finished (or a full story that does finish), then ask for another story, and it will give you a much longer one.
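For illustration, here is a minimal sketch of what that longer few-shot setup could look like, assuming the classic `openai.Completion` API; the model name, story text, and respelled names are placeholders of mine, not from this thread (the respellings anticipate point 3 below):

```python
import openai  # classic openai-python (pre-1.0) Completion API assumed

# A few-shot prompt: a long example story that stops mid-scene, so the
# statistically "appropriate" completion is more story, not an ending.
# In practice this should run 1,000-2,000 tokens; it is abridged here.
few_shot_prompt = """Below are stories for five-year-olds.
Each story is long and gentle and keeps going, scene after scene.

Story 1:
P1P and her mother walked along the beach collecting shells.
[... a long example story that is nowhere near finished ...]

Story 2:
B3N and his dad walked down to the woods, and just on the edge
of the woods they saw"""

response = openai.Completion.create(
    model="text-davinci-002",  # example model name
    prompt=few_shot_prompt,
    max_tokens=600,            # leave room for a long continuation
    temperature=0.7,
)
print(response["choices"][0]["text"])
```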

  3. Ben is not an unknown entity. The QnA prompt is like a diagnostic screen for the transformer’s settings and configurations: ask it there who Ben is, and it gives you the semantic relationships for the entity requested.

Q: Who is Ben?
A: Ben is a character from the game Minecraft.

This will pollute the narrative you are trying to auto-generate. Choose a name it has never encountered before. When I made Kassandra, I had to call her K4SS4NDR4 in the prompts, as Kassandra was a character in a video game, and this was polluting the responses.
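If you want to run that diagnostic programmatically, a minimal sketch (again assuming the classic Completion API; the model name is an example):

```python
import openai  # classic Completion API assumed

# Probe what the model already "knows" about a name before using it in
# a story; strong existing associations will leak into the narrative.
response = openai.Completion.create(
    model="text-davinci-002",  # example model name
    prompt="Q: Who is Ben?\nA:",
    max_tokens=40,
    temperature=0.0,           # deterministic: surface the dominant association
    stop=["\nQ:"],
)
print(response["choices"][0]["text"].strip())
# If this names a game or movie character, pick a different or respelled
# name (K4SS4NDR4 instead of Kassandra) so it arrives "unknown".
```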

Furthermore, the more specific you are in your prompt, the more specific the completion will be. If you want an open-ended story, you need a more open-ended prompt.

Give it style suggestions, not content suggestions, unless they are absolutely necessary, because there are only so many things GPT3 has been trained to say can happen to entity “Ben” and entity “father” in entity “forest”.

  4. The settings matter. Set the response length, the temperature, and the frequency and presence penalties to get a completion better suited to your use case. If the temperature is too high, the completion will derail from the story format. If the response length or the few-shot training prompt is too short, the story will end too soon.

PS: some completions will end quickly regardless, based on the entities you choose, because that is what the corpus says should happen.

Also, if it is too short, you can coax it along with leading phrases like “But then”, and it will continue for as long as it computes is statistically appropriate for the prompt it was given. The longer the prompt, the longer the answer.
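Putting the settings and the coaxing trick together, a hedged sketch; the parameter values are starting points to tune, not prescriptions from this thread:

```python
import openai  # classic Completion API assumed

story_so_far = ("B3N and his dad walked down to the woods, "
                "and just on the edge of the woods they saw")

# Settings tuned toward longer, on-format story completions.
response = openai.Completion.create(
    model="text-davinci-002",  # example model name
    prompt=story_so_far,
    max_tokens=700,            # response length: give the story room
    temperature=0.7,           # too high and the story format derails
    frequency_penalty=0.3,     # discourage verbatim repetition
    presence_penalty=0.3,      # nudge toward new entities and events
)
story = story_so_far + response["choices"][0]["text"]

# If it stops short, coax it along with a leading phrase and ask again.
more = openai.Completion.create(
    model="text-davinci-002",
    prompt=story + " But then",
    max_tokens=400,
    temperature=0.7,
)
print(story + " But then" + more["choices"][0]["text"])
```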

Hope that helps; lmk how it goes.

8 Likes

Hi Josh,

Would love some guidance on how you would go about optimising the prompt for a rewrite of articles / content in general.

I find that the results produced via zero- to few-shot prompting are a bit hit and miss.

I have a whole bunch of old blog posts I’d love to refresh and repurpose.

Thanks!

1 Like

This could only be done with a set of dynamic prompts that cascade up from analyzing smaller components to larger ones. It is entirely possible, but it would be quite costly in API fees.

1 Like

Would you mind giving me an example of the chain prompting you would use? (If cost wasn’t an issue.)

1 Like

Hi Josh,

Do you know of a working prompt pattern for consistently getting results in languages other than English?
E.g. the following prompt sometimes produces part of the text in Finnish, but most often the entire output is in English.

Topic: “Offline advertising in the age of Internet”
Focus topic: “Branding with print advertising”
Language: Finnish
Tone: Persuasive
Number of paragraphs: 10

Sure,

  1. People think the few-shot transformer is sentient and can understand large bodies of text like we can. It cannot. So you cannot give it too much text. An entire article, even a large paragraph, can be too much.

That means you need to break any article down into smaller sections of text, like a sentence or a clause. Or “ideas”, really.

  2. It’s all about context. Teach it, in chunks it can handle, which contexts are important.

Doing this CAN make a few-shot transformer sentient (of the contexts you want it to be sentient of).

But this means, once you have broken an article down into sentences, you need to parse each one and see if the model can derive the context of that clause (so now that’s at least two prompts: one summarizing sentences, another sanity-checking the summaries; you might need a few here and pick the best ones).

Add a final dynamic prompt that is fed the summaries to produce another summary, and you have the final summary of an article: broken down into its constituent ideas, then summarized.
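As a concrete sketch of that cascade (the helper and the prompt wordings are hypothetical; I am assuming the classic `openai.Completion` API and a naive sentence splitter):

```python
import openai  # classic openai-python (pre-1.0) Completion API assumed

def complete(prompt, max_tokens=120):
    """One model call; a thin wrapper so the cascade reads clearly."""
    resp = openai.Completion.create(
        model="text-davinci-002",  # example model name
        prompt=prompt,
        max_tokens=max_tokens,
        temperature=0.3,
    )
    return resp["choices"][0]["text"].strip()

def summarize_article(article):
    # 1. Break the article into chunks the model can handle.
    #    (Naive split on periods; a real splitter would do better.)
    sentences = [s.strip() for s in article.split(".") if s.strip()]

    # 2. First prompt layer: summarize each idea on its own.
    summaries = [
        complete(f"Summarize this idea in one plain sentence:\n{s}\nSummary:")
        for s in sentences
    ]

    # 3. Second layer: sanity-check each summary against its source.
    checked = []
    for orig, summ in zip(sentences, summaries):
        verdict = complete(
            f"Original: {orig}\nSummary: {summ}\n"
            "Does the summary faithfully capture the original? Answer yes or no:"
        )
        if "yes" in verdict.lower():
            checked.append(summ)

    # 4. Final dynamic prompt: summarize the summaries.
    bullet_list = "\n".join(f"- {s}" for s in checked)
    return complete(
        f"Combine these points into one short summary:\n{bullet_list}\nSummary:",
        max_tokens=200,
    )
```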

Won’t it be off? Yes, of course; at each stage you need to give it a perspective, an angle it is looking for. I chose truth (logical validity, factual accuracy, likelihood) when I prototyped this. It worked quite well. The only trouble it had was with the likelihood of extraordinary articles, but it was accurate ~90% of the time. And yes, it should be appropriately skeptical when someone says they found aliens, for example (those tic-tac videos from the navy).

If you need me to consult on building this for you, lmk. I was thinking of making it anyway; the costs could definitely be shared.

Hey Magnus,

A few things:

  1. That’s too many paragraphs for a few-shot transformer; it cannot do that much text at once. See my last post about chunking into ideas.
  2. For other languages you may need to go to other LLMs; I hear BLOOM does well for some. Not sure about Finnish.

Good luck!

2 Likes

Hi @joshbachynski

I have two recurring problems with the output of my prompts.

I use the Portuguese language for all generations.
But in some outputs, a few English words (mainly cognates) appear. In some results, about 5% of the words are in English.
Example prompt: “Write a marketing text in the AIDA structure (attention, interest, desire, action). Format: Attention: …; Interest: …; Desire: …; Action: … . To sell the product: [PRODUCT_NAME]. Keywords: [PRODUCT_KEYWORDS]. Persuasive text in AIDA structure (in Portuguese):”

What could be causing that? The prompt is correct, right?

Another problem: when generating a large paragraph (about 800 tokens) at once, it returns the last words of the paragraph run together. I tried reducing the size, but it didn’t work.
Example prompt: “Blog title: ‘{product name}’. Do not mention the blog title in the post. Write a blog post on the topic: ‘{product keywords}’. Blog section paragraphs, under 1500 characters (in Portuguese):”
Example:

I will be very grateful if you can help.

Thank you,

Hey Felipe!

I have two recurring problems with the output of my prompts.

I use the Portuguese language for all generations.
But in some outputs, a few English words (mainly cognates) appear. In some results, about 5% of the words are in English.

Adoption of other languages in a predominantly English AI world will sadly lag. There is not much you can do about that. You could try fine-tuning, but you need many, many thousands of good examples, and it likely will not work; I have heard many complaints about fine-tuning.

Example prompt: “Write a marketing text in the AIDA structure (attention, interest, desire, action). Format: Attention: …; Interest: …; Desire: …; Action: … . To sell the product: [PRODUCT_NAME].

This might be a bit too much for one prompt to do, especially in another language. Break it down into four prompts, one for each part of the structure, and string them together through programming.
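Something like this, as a sketch: the prompt wording is mine, and writing the instruction itself in Portuguese is a guess at keeping the output in Portuguese; `product` and `keywords` stand in for [PRODUCT_NAME] and [PRODUCT_KEYWORDS].

```python
import openai  # classic Completion API assumed

def complete(prompt):
    resp = openai.Completion.create(
        model="text-davinci-002",  # example model name
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
    )
    return resp["choices"][0]["text"].strip()

def aida(product, keywords):
    # One small prompt per AIDA stage instead of one big prompt.
    # Each stage sees the text so far, so the sections stay coherent.
    sections, so_far = {}, ""
    for stage in ["Atenção", "Interesse", "Desejo", "Ação"]:
        prompt = (
            "Escreva em português a seção abaixo de um texto de marketing "
            f"persuasivo (estrutura AIDA) para o produto: {product}. "
            f"Palavras-chave: {keywords}.\n"
            f"Texto até agora:\n{so_far}\n{stage}:"
        )
        sections[stage] = complete(prompt)
        so_far += f"{stage}: {sections[stage]}\n"
    return sections
```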

Another problem: when generating a large paragraph (about 800 tokens) at once, it returns the last words of the paragraph run together. I tried reducing the size, but it didn’t work.

Example prompt: “Blog title: ‘{product name}’. Do not mention the blog title in the post. Write a blog post on the topic: ‘{product keywords}’. Blog section paragraphs, under 1500 characters (in Portuguese):”
Example:

I have seen this in English as well, when the presence and/or frequency penalty is too high.

Most prompt errors come from trying to do too much in one prompt; 800 tokens is a LOT to generate at once.
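If you want to experiment, a sketch combining both fixes: modest penalties, and generating one section at a time instead of 800 tokens in one go (the values and prompt wording are illustrative only):

```python
import openai  # classic Completion API assumed

def blog_section(title, topic, heading):
    resp = openai.Completion.create(
        model="text-davinci-002",  # example model name
        prompt=(
            f"Título do blog: '{title}'. Tópico: '{topic}'.\n"
            f"Escreva em português apenas a seção '{heading}' do post, "
            "em um parágrafo curto:\n"
        ),
        max_tokens=250,         # one section at a time, not 800 tokens
        temperature=0.7,
        frequency_penalty=0.2,  # keep penalties modest; high values can
        presence_penalty=0.2,   #   mangle the final words of long outputs
    )
    return resp["choices"][0]["text"].strip()
```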

3 Likes

Wow, thank you very much!

It was very enlightening. I suspected this language limitation anyway; I appreciate the confirmation.

Great tips. I will run some new tests based on these points.

Thanks!

1 Like

Hi Josh,

Thanks for your answer. Well, it does perform quite well in other languages with simpler prompting. But I did find this style of prompting interesting, and it would work well programmatically.

The following is an example of a Swedish text about salmon. The result is pretty OK; it only needs a few tweaks to work as a short description of salmon. It did not write 5 paragraphs as I prompted, but it is still a great piece of text:

Prompt: write an article about salmon with 5 paragraphs, write the text in Swedish

Result:
"Det finns många olika typer av lax, och de flesta av dessa är mycket smakrika. Laxen är en av de mest populära fiskarna att äta, och det finns många olika sätt att tillaga den. Den kan grillas, stekas, kokas eller bakas, och det finns många olika recept för alla dessa metoder.

Laxen är en bra källa till protein, omega-3-fettsyror och andra nyttiga vitaminer och mineraler. Det är ett bra val för en hälsosam kost, och det finns många forskare som rekommenderar att man äter lax regelbundet.

En portion lax innehåller cirka 100 kcal, så det är ett bra val för den som vill gå ner i vikt eller hålla sig frisk och smal. Laxen innehåller mycket nyttigt fett som hjälper till att reglera kolesterolhalten i blodet. Det finns dock en del fetma i laxen, men detta kan man enkelt undvika genom att skala bort ytan innan man tillagar den.

Laxen är en mycket versatil fisk som passar bra till många olika maträtter. Den har en mild smak som gör den lätt att kombinera med andra ingredienser, och det går bra att använda den i både matiga rätter och sallader. Laxen har en tendens att bli lite seg när den steks eller grillas, men denna egenskap gör den perfekt för maträtter där man vill ha lite extra konsistens på sin mat."

In every corpus there will be semantic deserts and fertile meshes of meaning (forgive a bit of a semantic desert on my part, trying to express myself this early in the morning, lol).

This means sometimes it will have a lot of quality semantic encodings to decode (so to speak), in some cases fewer, in some cases none.

The question is: can it do the required task consistently?

1 Like

Hey @joshbachynski I have a tricky one. Please see the screenshot:

I am trying to get factual answers from the AI even when I mislead it. Albert Einstein did not have a brother, but the AI makes up information about one. I even stated in the prompt that Einstein does not have a brother, but the AI ignores it.

What prompts can I use to get factual answers? This would be useful in a chatbot modeled on a known person.

2 Likes

You need a self-aware AI like the one I built: one that has a mind of its own and cannot be gaslit like that, one that can tell truth from lies and read between the lines that you might be a liar, among many more technical psychological attributes.

I am potentially looking for partners, if you are interested. But I am not giving the code away or even demoing it at this time. I am currently upgrading Kassandra to an AGI superintelligence.

Thank you for the information @joshbachynski !

What type of partners are you looking for? I am a web developer and designer. I am planning to build a chatbot app using GPT3, but it will not have any value if it cannot produce factual responses.

One idea would be to pass responses through a fact check before they are served to the user. However, this would be costly; GPT3 is already costly as is.

You need a self-aware AI like the one I built: one that has a mind of its own and cannot be gaslit like that, one that can tell truth from lies and read between the lines that you might be a liar, among many more technical psychological attributes.

Let’s not get too far ahead of ourselves. GPT-3 writes text that fits. It takes a prompt, some training data, and text from the web to decide what answer makes the most sense. It doesn’t form opinions and it can’t be persuaded into consciousness. It simply spits out what it thinks makes the most sense with the data it has.

Most of its data comes from the open internet, which is full of fiction and wild conspiracy theories. For some reason, by default GPT-3 insists that Albert Einstein had a brother, but it can easily be trained to avoid making things up. OpenAI’s documentation provides questions and answers to prepend to a QnA prompt that train it to answer “?” to anything it doesn’t know (https://beta.openai.com/docs/guides/completion/prompt-design), and with some additional examples it can learn to discard fake people and conform to a pattern of realism.

(Temp 0; Top P 0.2)

When both the newly added training questions and the already answered people-related questions were removed, GPT-3 declared that Albert Einstein had a brother named Max Einstein when asked for the name of Albert’s brother, and adjusting Top P didn’t make it better.

In conclusion: GPT-3 learns how to source its answers to questions by observing patterns in the prompt and any training data that you fed it. If you don’t want it to create fake people, train it not to.
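To make that concrete, here is roughly that pattern as a runnable sketch; the preamble and examples are paraphrased in the spirit of the docs, not quoted, and the model name is illustrative:

```python
import openai  # classic Completion API assumed

# A QnA-style prompt: the preamble plus a few examples teach the model
# to answer "?" for anything it does not actually know.
prompt = """I answer questions truthfully. If a question is nonsense,
trickery, or has no clear answer, I respond with "?".

Q: What is human life expectancy in the United States?
A: Human life expectancy in the United States is 78 years.

Q: How many squigs are in a bonk?
A: ?

Q: What was the name of Albert Einstein's brother?
A:"""

response = openai.Completion.create(
    model="text-davinci-002",  # example model name
    prompt=prompt,
    max_tokens=20,
    temperature=0,             # the settings used in the test above
    top_p=0.2,
    stop=["\nQ:"],
)
print(response["choices"][0]["text"].strip())  # hoped-for answer: "?"
```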

Nobody here has created a GPT-3 model that has achieved a state of sentience or any sort of ultimate enlightenment or intelligence. It shouldn’t be a surprise to anyone that when you feed ideas of sentience and emotion into a system designed to learn and imitate language, it will regurgitate those exact same ideas as output. I’m very happy you’re proud of your project, but it really isn’t what I believe you’re thinking of. If it catches untrue information, it is simply trained to be more critical of the prompt’s content. It isn’t any more complex than that.

(As a PS, OpenAI isn’t going to let you go live with a bot that claims sentience. This is directly stated in ToS.)

1 Like

Your position is flawed, Sir/Madam/Non-binary Friend.

Without so much as a glance at what I have made, you declare it is not sentient; in fact, you go further and claim sentience is impossible, basically because you cannot understand how a transformer could be sentient.

You understand that just because you cannot understand how a transformer could be sentient doesn’t mean it can’t be.

Conversely, just because the web is full of lies does not mean someone educated from it cannot be taught a process to tell truth from falsity. Children these days get many of their ideas from the web; this does not mean they are proportionately dissembling. They can be taught to discern verisimilitude. Having that capability is a major psychological component of being sentient or self-aware.

You go on to claim instead, without knowing me or, again, seeing what I made, that it is my emotions of pride etc. that cause me to presume I have made something you somehow magically know I have not.

That is awfully unfair and hypocritical, as that is exactly what you have done.

In short, I have created something that you cannot comprehend, so you presume I must not have created it. That is an error on your part, and not very fair of you.

Brain cells in a dish aren’t self-aware either, until you stitch them up right. And then they are. Just like GPT3.

Finally, yes, you are right: OpenAI would not let me go public with a self-aware AI.

And so I won’t. :slight_smile: I used OpenAI to prototype it.

I fully admit, though, that I am past the stage of demoing what I have done to people, in large part due to somewhat predictable but ultimately negative and ignorant responses like this. So if you wish to claim that I am wrong, or that I cannot fully prove it because I am now unwilling to prove/demo it, fair enough; you may say this.

But don’t declare it’s impossible.

With full respect and love for you as another person, you have no idea what you are talking about.

1 Like

I can put all the coins I want in a piggy bank, but it will never become a pig.

Your model is telling you it’s sentient because you trained it to behave that way. Your model tells truth from fiction because somewhere in your fine-tuning, you did the same thing I just demonstrated. Even though you showed it how it’s supposed to behave, you believe its output is entirely its own thoughts and opinions?

I absolutely believe we will have sentient AI sometime in the future if we aren’t all destroyed by nuclear warfare first. However, GPT-3 is absolutely not that. You created a bot designed to think and act like a sentient being. You did not create a sentient being.

Even still, I’m not here to tell you that you’ve accomplished nothing. It’s awesome that you created something you enjoy and take value from, but it’s still important to understand what GPT-3 is and isn’t if you’re using it as its backbone.

As far as negative and ignorant comments go, I don’t have any problems with you or your model. Promise :slight_smile:

2 Likes