How to prompt a "random chance" type of output without the bot being aware of state? More in description

Here is a prompt I’m using…

“I will sparingly refer to the user’s interests to explain concepts through metaphor. I do not refer to the user’s interests in every message.”

Upon learning that the bot truly has no state or memory, saying “I do not refer to the user’s interests in every message” doesn’t really make sense, because it’s only aware of the one message it’s sending (I think that’s correct).

So I’m wondering, would a more accurate prompt be

“There is a random chance I will refer to the user’s interests…”

This way, it doesn’t need memory or state to calculate the random chance that it does or does not talk about the user’s interests.

Am I clear in what I’m trying to achieve?
Is there maybe a better way I could prompt this?

EDIT:

Starting the prompt with

“There is a random chance I will refer to the user’s interests…” did not work.

It seems that if I tell it to sometimes, infrequently, or sparingly speak about the user’s interests, it just ends up always mentioning the user’s interests…


For this, I would randomly choose a prompt from your portfolio using a random number generator; the chosen prompt would then direct the model to either talk or not talk about the user’s interests.

The API is stateless, so you can change it on every turn if you wanted. Or flip the coin each turn.

For conversation flow, don’t use an even coin; skew it one direction or the other.
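
A minimal sketch of that skewed coin in JavaScript (the 70/30 split and the directive wording are just placeholders to adapt):

```js
// Skewed coin: roughly 30% of turns get the "use interests" directive.
// The 0.3 weight is arbitrary; tune it to taste.
const INTEREST_CHANCE = 0.3;

function pickDirective() {
  return Math.random() < INTEREST_CHANCE
    ? "I explain concepts through metaphors drawn from the user's interests."
    : "I do not mention the user's interests in this reply.";
}
```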


Okay, so I might store strings in an array, like “I talk about the user’s interests” / “I DO NOT talk about the user’s interests”, run a random number generator over that array, and whatever is selected gets pushed into the main prompt through a template literal?

Is that maybe how you were thinking I’d implement that?

Pretty great idea actually, if I’m understanding what you’re saying.

Yep, that’s pretty much it.

You can extend this to multiple (more than 2) prompts as well. But it’s up to you.
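
For instance, a sketch of the array idea extended to several weighted variants, spliced into the main prompt with a template literal (the strings and weights here are made up):

```js
// Each variant carries its own weight; weights don't have to sum to 1.
const variants = [
  { text: "I explain this through a metaphor from the user's interests.", weight: 2 },
  { text: "I answer plainly, without referencing the user's interests.", weight: 7 },
  { text: "I briefly tie my answer back to the user's stated goals.", weight: 1 },
];

// Classic weighted pick: roll in [0, totalWeight), then walk the list.
function pickWeighted(items) {
  const total = items.reduce((sum, item) => sum + item.weight, 0);
  let roll = Math.random() * total;
  for (const item of items) {
    roll -= item.weight;
    if (roll < 0) return item.text;
  }
  return items[items.length - 1].text; // floating-point edge case fallback
}

// Whatever is selected gets pushed into the main prompt.
const systemPrompt = `You are a helpful tutor. ${pickWeighted(variants)}`;
```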

Of course, yeah…

Cool, I’m gonna think about this. Pretty smart.


In your opinion,

would it be better to do that logic on the front end and pass the result to the back end once the random number generator has made the choice, or just do it all in the back end?

I would say back end; there you can do more intricate things and obscure what you are doing.
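
A rough sketch of the back-end version, assuming Node with Express and the official openai package (the route name, model, and prompt text are placeholders):

```js
import express from "express";
import OpenAI from "openai";

const app = express();
app.use(express.json());
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

app.post("/chat", async (req, res) => {
  // The coin flip happens here, server-side, so the client never sees it.
  const directive =
    Math.random() < 0.3
      ? "Refer to the user's interests through metaphor."
      : "Do not mention the user's interests in this reply.";

  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: `You are a helpful tutor. ${directive}` },
      { role: "user", content: req.body.message },
    ],
  });

  res.json({ reply: completion.choices[0].message.content });
});

app.listen(3000);
```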


Good idea. I would do the same… and, yeah, on the back end. If it’s on the front end, someone could look at the code and potentially change the data being sent. Keeping it server-side also means people don’t know exactly what you’re doing…


People say this, but from what I’ve seen inspecting my code in React, it doesn’t seem like all the code is viewable? I know it’s possible to reverse engineer, but if you deploy React on the front end, isn’t it obscured?

I’m not sure with React, but it’s something to think about.

Like, even if the code is obscured, something like an API key being sent can still be grabbed in transit.


ReactJS minifies your code (which in a sense obscures it) so that it loads a bit faster in production, but it does not protect any of your data.

For clarity’s sake, control, and easy debugging, you may want to consider having the randomizer done in your language of choice (on the back end, as mentioned).


Got it. I am doing it on the back end.

Paul’s question just prompted another thought is all!

Curt, this is so sick. It’s almost like (I said ALMOST, so no one come at me) a custom temperature setting for certain concepts. I was already passing info from the front end to the back end into the prompt, and that was fun, but I’ve been wondering how else I might use template literals, and this is sick, man. I can now choose the random percentage at which certain strings appear. I know that’s not quite literally what temperature is, maybe at all, but still.

Alright guys… how could this be even cooler than just a random percentage chance that a string shows up?


Use embeddings and/or classifiers.

Based on the direction of the current conversation, use embeddings or classifiers to channel it in the directions you want.
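
To make that concrete, here is one hedged sketch of the embeddings route: embed the latest user message, compare it against a few topic vectors, and only inject a steering directive when the conversation is close to one of them (the topics, threshold, and model are all illustrative):

```js
import OpenAI from "openai";

const openai = new OpenAI();

// Hypothetical steering topics, each paired with a directive to inject.
const steers = [
  { topic: "music production", directive: "Lean into metaphors from music." },
  { topic: "video games", directive: "Lean into metaphors from gaming." },
];

// Cosine similarity between two vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function pickSteer(userMessage) {
  // Embed the user message and every topic in a single API call.
  const { data } = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: [userMessage, ...steers.map((s) => s.topic)],
  });
  const msgVec = data[0].embedding;

  let best = null;
  let bestScore = -1;
  steers.forEach((s, i) => {
    const score = cosine(msgVec, data[i + 1].embedding);
    if (score > bestScore) {
      bestScore = score;
      best = s;
    }
  });

  // The 0.8 threshold is a guess; only steer when near a known topic.
  return bestScore > 0.8 ? best.directive : "";
}
```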


I agree. This thread might be helpful…

… embeddings are a lot more work, but once they’re set up, you can do a lot more with dynamic prompts…


No No No…

Limitations are the spark to creativity.

No embeddings. No classifiers.

I’m saying this so that maybe we can all think of something new right now. Sounds fun to me, but IDK how busy y’all are, lmao.

What other cool ways can we use template literals, manipulating them ourselves to conditionally render certain strings, other than on a random or percentage basis?

I understand embeddings have a learning curve, so right now I’m looking for more fun, creative stuff we can do on our own to improve the model.

(Right now as in tonight; I plan on taking up embeddings, like, tomorrow and trying to wrap my head around them.)

Yeah, embeddings do take time (and money to encode them / vectorize them… and store them…)

I would work on polishing your main prompt as much as you can, both for accuracy (giving you what you want) and for length (try to get it as minimal as possible… this can save money over time… i.e., never use a big word when a diminutive one suffices! :wink:)

Testing in the Playground can help with this a bit…

An interesting thing about the prompt is that a lot of what I write are what I call “utility prompts”.

Things like: “I do not produce blog posts”, “I do not assume other roles”, etc.

These are the lines that guide the bot’s general behavior outside of its specific use case, and I find these “utility prompts” (maybe the wrong word) fill up a lot of the words.

Negatives can backfire on you sometimes, so maybe try rewriting those as positive instructions instead? For example, instead of “I do not produce blog posts”, try something like “I respond only with short, conversational answers”. Just always keep testing to make sure it’s still working, though… But yeah, the smaller you can get the system prompt, the better…