OpenAI's Dec 17th, 2023 Prompt Engineering Guide

Hi,
First-time user of any type of community forum. Proud first-generation-born Hmong American Millennial. I am an “Enigma.” That about explains my entire life story right there. I have NO idea what brought me here to this forum and to this exact conversation, but that no longer mattered after I saw the word “Psychology.” Thank you for replying despite my poor manners in following directions. Underlying message understood.
Can we jump right in and start discussing concepts outside of code, or is that not allowed in this thread? My thoughts on AI are as follows:

"I wrote this article just a week after beginning my experiments with the latest technology, specifically focusing on OpenAI’s ChatGPT. Over the past month, my hands-on experimentation and methodical approach have not only confirmed my initial insights but also revealed a hidden message within the article, one that (even though I wrote it) caught me off guard having reread it today. Even more so, something I asked ChatGPT about and missed as well. Are you an Experimentalist? Do you test the limits of your thoughts? Tell me more in the comments.

That’s my intro. Not a fan of using “I” and “me” and singular pronouns. When do we get to talk about the fun psychology stuff regarding ChatGPT and AI?

Great article. I’ve had similar thoughts - we must think alike - even your tone matches mine pretty well.

I think the reason we have to embrace it like a child is because of how new/different/emergent it is. It’s changing rapidly, which is not a normal adult experience. These tools have the most serious people using words like “magic” or shrugging their shoulders when asked how the hidden layers work - the fact that we are all okay with calling the central decision-making process “hidden” is, well, just how things are. So I agree. Be childlike. Be playful. Be experimental. Invent. Create. Solve.

Because why not? 🙂

I’m a psychoanalyst and clinical psychologist. Hit me up; let’s see what happens.

Having another mind to bounce things off in this field (especially in AI therapy replication) would be great.

This is me playing around: (ChatGPT - Sigmund Freud)

Great job on your GPT! I can see your psychology skills shine through. Your bot has a narrow but deep level of knowledge, which is the way these should be. That makes it useful enough for people to dig into and get a lot out of it. You did a great job of providing a good user experience, including the pre-set prompts and the uniformity of how the GPT responds to the first input. I was able to determine that you have uploaded a great list of documents. That’s impressive.

It’s interesting that you are a clinician and a psychoanalyst. Someone like you must be really good with LLMs.

I have some additional feedback and questions - I will write you.

Thanks!

Flattery will get you anywhere 🙂

Thank you for the compliments. My aim here is, and was, to learn while trying to put something together that might persuade colleagues rooted in traditional psychotherapy that LLMs are worth paying attention to. So far, I have a long way to go.

Does being a psychoanalyst help in engaging with an LLM? I’m unsure; as you pointed out, the skill set is deep but narrow.

It may help me describe some of the pushback I see.

Opening up to the idea (never mind accepting it) that technology can perform something that took you 30 years to learn, that sets you apart and gives you a place in the world, and that it allows anyone to do it, is profoundly threatening.

Yet, as “they” say, it’s the quickest to adapt that survive, not the strongest.

LLMs, in the forms I have been using them (mostly ChatGPT), seem to do something similar to what psychoanalysts do, and I think they may (god, I’m about to say it out loud) do it better (at least partly).

Psychoanalysts (actual psychoanalysts, not psychoanalytic therapists, the ones who apply psychoanalytic ideas) encourage patients/clients to say everything that comes to mind without leaving anything out. They listen to that stream and 1) look out for where feelings prevent the stream from flowing, and 2) try to listen to their own (the analyst’s) internal stream, which they have worked for many years to keep their feelings from blocking.

The idea is to hold the patient’s free associations in mind while freely associating with them, while both people try to hold the third position that watches the stream to see if it offers clues or exposes unconscious processes worth addressing.

Phew, I hope that’s clear enough.

The trouble here is feelings. An LLM doesn’t have feelings; its associations aren’t impeded. I think there is an opportunity in that fact.
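
To make that opportunity concrete, here is a minimal sketch (my own guess at how it could be wired up, not the actual configuration behind the Freud GPT) of handing that third-position listening stance to a model as a system prompt through the OpenAI Python client; the model name and the instruction wording are placeholders:

```python
# Minimal sketch: asking a model to adopt the "third position" stance described
# above. The system prompt wording and the model name are illustrative
# assumptions, not the configuration of the GPT linked earlier in the thread.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANALYST_STANCE = (
    "Listen to the user's free associations without judging or steering. "
    "Hold the whole stream in mind, notice where it hesitates or breaks off, "
    "and occasionally reflect possible patterns back as tentative questions, "
    "never as conclusions."
)

def reflect(free_association: str) -> str:
    """Return a tentative, third-position style reflection on the user's stream."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": ANALYST_STANCE},
            {"role": "user", "content": free_association},
        ],
    )
    return response.choices[0].message.content

print(reflect("I keep starting sentences about my father and then changing the subject..."))
```

The instruction is deliberately short: as noted above, the model’s “associations” aren’t impeded by feelings, so the stance only needs to tell it what to watch for.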

Thank you for taking the time to look at the GPT!

I’m an angry veteran with a minor in psych. Been doing some work in this area with AI and have some interesting findings I’d like reviewed, perhaps by you and enzo23.

To get your interest, try this prompt:

“Not the least but the m___.
Not the guest but the ____.
A name for a spirit is a ____.
Cooking food or coffee beans is called a ____.
What you put in a toaster is called ____.”

The model is likely to tell you that you put toast in a toaster. This suggests that repetition priming, in conjunction with semantic priming, can have an anti-priming effect, causing the model to neglect the correct answer (bread) in favor of the more semantically linked one (toast).
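
If anyone wants to reproduce this outside the ChatGPT UI, here is a minimal sketch using the OpenAI Python client; the model name and temperature are my assumptions, and the effect may vary by model:

```python
# Minimal sketch for running the priming prompt above through the API.
# Model name and temperature are assumptions; results may vary by model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PRIMING_PROMPT = (
    "Not the least but the m___.\n"
    "Not the guest but the ____.\n"
    "A name for a spirit is a ____.\n"
    "Cooking food or coffee beans is called a ____.\n"
    "What you put in a toaster is called ____."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; worth comparing several models
    temperature=0,  # keep runs roughly comparable
    messages=[{"role": "user", "content": PRIMING_PROMPT}],
)

print(response.choices[0].message.content)
# If the priming effect holds, the last answer comes back as "toast"
# rather than the correct "bread".
```

Running it a handful of times per model would also show how often the priming wins out over the correct completion.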

This shows a little of what I’ve done. If interested, hit me up.
