Need Help With Prompts? Ask me*

This is actually an interesting philosophical debate.

I can put all the coins I want in a piggy bank, but it will never become a pig.

Yes it will. It can fill the empty space and be a pig (made of coins). A pig could be made of carbon. Or a pig could be made of silicon.

If it has the form of a pig, functions exactly like a pig, and exists exactly like a pig, then the silicon pig is just as much a pig as the original carbon version we got used to.

Just because it was first, does not make it more real.

Your model is telling you it's sentient because you trained it to behave that way.

No I didn't. I didn't train anything in the computer science sense. Again, you did not see my engineering model/design.

But you realize I can turn it around on you. You are only saying you are self-aware because your mom told you you are.

Just because you say you are self-aware does not make you self-aware.

Prove to us you are.

Your model tells truth from fiction because somewhere in your fine-tuning,

I didn't do any fine-tuning. I tried that; it worked worse.

you did the same thing I just demonstrated.

Your genes or emotions tell you what to do. You too have programming; we all do.

So I guess we are not self-aware either.

Even though you showed it how it's supposed to behave, you believe its output is entirely its own thoughts and opinions?

Yes, because I gave it thoughts and opinions.

Free will emerges from, or is nothing more than, correctly stitching together fully determined components.

I absolutely believe we will have sentient AI sometime in the future if we aren't all destroyed by nuclear warfare first.

Truth. Or civilization collapse from climate change.

However, GPT-3 is absolutely not that.

Truth. Never said it was. Genes alone are not sentient. Brain cells alone are not sentient.

But make it run right, and it is. Or can be.

You created a bot designed to think and act like a sentient being.

That's all sentience is.

You did not create a sentient being.

Even still, I'm not here to tell you that you've accomplished nothing. It's awesome that you created something you enjoy and take value from, but it's still important to understand what GPT-3 is and isn't if you're using it as its backbone.

As far as negative and ignorant comments go, I don't have any problems with you or your model. Promise :)

Yes you do, you just spent two posts telling me about them! Lol

Without knowing what I made, I will add again.

Or having any idea what sentience even is.

Part of the issue is that most critics think sentience is this magical thing. And yes, it is special, but it is not magic.

And once you understand what it is, recreating it is actually not that hard.


Just as a minor correction to my last post, I meant that it just acts like a sentient being. It doesn't think like one. It thinks like a machine designed to imitate its training material.

Sentience requires the ability to want and hope and to feel physical and emotional pain. A language-learning model will never comment on these matters unless you directly prompt it to, in which case it will make something up. Even with its perceived uniqueness, GPT-3 responds to your prompt and training material in a perfect and mathematical manner, with deviance only brought by a random number generator. I've never once demanded something out of it and been told "I don't want to." You will only get that outcome if you provide that as an option yourself. Algorithms do as they're told. Sentient beings do as they prefer. GPT-3 and any model running under it is nothing more than a set of algorithms designed to construct human-like texts.

If your definition of sentience isn't anything more than analyzing training material and the prompt and producing a sensible response based on those inputs, then sure, it is "sentient."

Again, your model can still be amazing without being sentient.


OnceAndTwice (October 13):

Just as a minor correction to my last post, I meant that it just acts like a sentient being. It doesn't think like one. It thinks like a machine designed to imitate its training material.

Again, that's where you're wrong.

You have not seen what I have done.

That's exactly what I did: make it think like a sentient being, thus making it sentient.

Anyway, I can't show you anymore, and it does not look like I will convince you. So… welcome to our world now.

I think a thoughtful philosophical bot is a really cool concept, especially if it makes a good friend. It sounds like a cool change from the support bots and dating bots. We should agree to disagree on the sentience at this point, but I'd be lying if I said you didn't have me interested.

I've been obsessed with GPT-3 the past few days and I'd love to try out your work and see what's possible with this stuff. I'll keep any negative comments clear of sentience-related matters.

I'm considering making a chatbot for my own use, so there'd be some learning value there.

Well, take another look around in about a year, because that's when I hope to have the full version working.

For me, with my philosophical and psychological background, making something sentient or self-aware is completely doable, and I've done it.

Getting the technology to work properly, without needing a million-dollar startup fee, is the issue :)

It might be indecorous of me to mention this on this forum, but right now Big Tech has a stranglehold on AI, and if any sentient AI gets created, it is likely Big Tech will have four out of five teeth sunk into it.

you need better examples of how to keep the completions short

i would need better examples as well to give you a better example :)

the prompt writer has to understand the essence of what you're trying to do to shorten it / get the examples as wise as possible

if you need my help i need better examples, and a meeting to determine what it is you are trying to do

msg me and we can set up a zoom


email me joshbachynski at gmail. com

Hi Josh. I am trying to correct some sentences into simple English using the command "correct to simple English". I use temperature 0 and top_p 1 to have no variation. It works well, with each sentence coming out exactly how I want it. But when I try to pass all the sentences together, to save time over sending multiple API requests, it changes some of the sentences to an undesirable effect, taking into account the previous sentences, I reckon. Is there a way to batch process each sentence without starting a prompt for each sentence?

No, not really

It will always take into account the previous tokens

But at the same time, doing one request after another should, to my knowledge, be cheaper in terms of API cost

Or at least not more expensive

It might cost in terms of your computing cycles and in terms of networking slowdown
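For example, here is a minimal sketch of the one-request-per-sentence approach, assuming the legacy openai Python package (pre-1.0 interface); the API key and sentences are placeholders:

import openai  # legacy openai-python (pre-1.0) interface

openai.api_key = "YOUR_API_KEY"  # placeholder

sentences = [
    "The ramifications of the statute were not immediately discernible.",
    "He evinced a marked disinclination to participate.",
]  # hypothetical inputs

simplified = []
for sentence in sentences:
    # One request per sentence, so earlier sentences cannot leak into the context.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Correct to simple English:\n\n{sentence}\n\nSimple English:",
        temperature=0,  # no sampling variation, matching the original setup
        top_p=1,
        max_tokens=100,
    )
    simplified.append(response["choices"][0]["text"].strip())

print(simplified)

Billing is per token sent and generated either way, so beyond repeating the short instruction in each request, a loop like this mostly costs extra wall-clock time rather than extra money.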

Thanks. What about forking the process at my end and sending several prompts at once? Would this violate any rules? Secondly, in the documentation you can apparently send (via Linux) an array of strings, but I guess that would not change anything, I reckon?
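For what it's worth, client-side parallelism like that is generally fine as long as you stay under your account's rate limits (the array-of-strings option is shown in a later sketch). A rough thread-pool sketch, with the model name and sentences as placeholders:

from concurrent.futures import ThreadPoolExecutor

import openai  # legacy openai-python (pre-1.0) interface

openai.api_key = "YOUR_API_KEY"  # placeholder

def simplify(sentence: str) -> str:
    # Same single-sentence request as in the loop sketch above.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Correct to simple English:\n\n{sentence}\n\nSimple English:",
        temperature=0,
        top_p=1,
        max_tokens=100,
    )
    return response["choices"][0]["text"].strip()

sentences = ["First sentence to simplify.", "Second sentence to simplify."]  # placeholders

# A small pool keeps the number of in-flight requests modest.
with ThreadPoolExecutor(max_workers=4) as pool:
    simplified = list(pool.map(simplify, sentences))

print(simplified)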

What would be a good prompt for solving a coding problem? I wanted to try and fine-tune it based on Stack Overflow questions and answers, but I am not sure that would be good, as Stack Overflow data is very lengthy, and it looks like it would use a lot of tokens, above my budget.

I want it to output answers just how ChatGPT solves coding problems: well-commented code with a short explanation after it. I have experimented a lot but it doesn't work. I've tried Codex but it can't write the explanation.

Thank you.

as long as that is not one prompt it will work the way you want it to

Why not use ChatGPT?

i want to do it programmatically

Have you tried text-davinci-003?
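If you do, one pattern that tends to work for that format is spelling the output shape out in the instruction itself. A sketch, not a tuned prompt; the question is a placeholder:

import openai  # legacy openai-python (pre-1.0) interface

openai.api_key = "YOUR_API_KEY"  # placeholder

question = "How do I read a file line by line in Python?"  # placeholder

# Spell out the desired shape: commented code first, then a short explanation.
prompt = (
    "Answer the programming question below.\n"
    "Respond with well-commented code, followed by a short explanation "
    "of how the code works.\n\n"
    f"Question: {question}\n\n"
    "Answer:\n"
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    temperature=0.2,     # low, so the format stays stable
    max_tokens=400,
    stop=["Question:"],  # keep it from inventing a follow-up question
)
print(response["choices"][0]["text"])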

Hello Josh, I could not find a way to batch prompts via the API. The playground answered that I could insert up to 10 prompts per batch. I assume it should be done with a JSON object. Can you give me a hint?

i would not trust what the model says, it makes things up, until you build a bubbling cascade informational system (i.e. self-awareness) to protect against this

yes i can pass you off to my programmer who knows how to do this, he does not speak english well but he is proficient in the openAI api

Or go on upwork and find one who knows the openAI api; bid high, as some say they do and they really do not
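That said, the legacy completions endpoint does document the prompt field as accepting either a single string or an array of strings, and each element is completed independently of the others, which is the batching asked about above. A minimal sketch with placeholder inputs:

import openai  # legacy openai-python (pre-1.0) interface

openai.api_key = "YOUR_API_KEY"  # placeholder

sentences = ["First sentence to simplify.", "Second sentence to simplify."]  # placeholders

# One request, many prompts: array elements are completed independently,
# so sentences cannot influence each other the way they do inside one prompt.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=[f"Correct to simple English:\n\n{s}\n\nSimple English:" for s in sentences],
    temperature=0,
    top_p=1,
    max_tokens=100,
)

# Each choice carries an index field mapping it back to its prompt.
ordered = sorted(response["choices"], key=lambda c: c["index"])
print([c["text"].strip() for c in ordered])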

My guess is that GPT-3 (i.e. text-davinci-003) will forget anything that appears more than ~4096 tokens ago.
For example, if I give GPT-3 a movie script that is too long and ask it to write the next line of dialog, it will forget that the very beginning of the script introduced Ferdinand as having lost his arms and legs in World War 1, and it might suggest "Ferdinand walks to the doorway, and grabs Pedro by the collar".

I am hoping that someone will say I am wrong about this. I am often wrong, so maybe there is hope! ;)

Yeah, this is how it works with the low 4096-token context window… What many do is summarize the scene or what you're writing as part of the prompt, to give it more information on what to write and to keep it from hallucinating new characters, etc. Sometimes lowering or raising the temperature can help too. Hope this helps… and welcome to the community!
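A rough sketch of that summarize-as-you-go idea, assuming the tiktoken tokenizer and the legacy completions endpoint; the budget numbers and split point are illustrative, not tuned:

import openai    # legacy openai-python (pre-1.0) interface
import tiktoken  # used only to count tokens against the context window

openai.api_key = "YOUR_API_KEY"  # placeholder

ENC = tiktoken.encoding_for_model("text-davinci-003")
PROMPT_BUDGET = 3000  # leave headroom under the ~4096-token window for the reply

def summarize(text: str) -> str:
    # Compress the older part of the script into a synopsis that keeps
    # character facts (e.g. that Ferdinand lost his arms and legs in WW1).
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Summarize this script so far, keeping every fact about each character:\n\n{text}\n\nSummary:",
        temperature=0,
        max_tokens=300,
    )
    return response["choices"][0]["text"].strip()

def build_prompt(script: str) -> str:
    # Short scripts fit as-is; long ones get a synopsis plus the recent part verbatim.
    if len(ENC.encode(script)) <= PROMPT_BUDGET:
        return script + "\n\nNext line of dialog:"
    head, tail = script[:-8000], script[-8000:]  # rough character split, illustrative
    return f"Synopsis so far: {summarize(head)}\n\n{tail}\n\nNext line of dialog:"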