Non-Serious Question: Can the AI learn from prompts?

As I understand it, these models are trained offline, usually on collections of datasets that are several months old by the time the model is released.

In a blog post, the OpenAI team mentioned that they have a human-feedback model, which I assumed meant thumbs-up/thumbs-down ratings on inputs. There is also the disclaimer next to the Playground box saying inputs may be used for training, although it is unclear whether that happens in real time.

In an email to support a while back, I raised the issue that code lacks context without its relevant custom libraries, which are too big to fit in the prompt; because of the 2k-3k token limit, the AI cannot reference the code properly.
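For what it's worth, you can check up front whether a library will fit. Here is a minimal sketch using OpenAI's tiktoken tokenizer (assuming it is installed; the encoding choice, file name, and 2,048-token budget are my assumptions based on the older GPT-3 limits):

```python
# pip install tiktoken  (assumed available)
import tiktoken

TOKEN_LIMIT = 2048  # rough context budget for the original GPT-3 engines

def fits_in_context(source_code: str, limit: int = TOKEN_LIMIT) -> bool:
    """Return True if the code's token count leaves room in the prompt."""
    enc = tiktoken.get_encoding("r50k_base")  # encoding used by the original GPT-3 models
    n_tokens = len(enc.encode(source_code))
    print(f"{n_tokens} tokens out of a {limit}-token budget")
    return n_tokens < limit

# Hypothetical file name, for illustration only.
with open("my_custom_library.py") as f:
    print(fits_in_context(f.read()))
```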

I’ve noticed a trend: the AI appears, very subtly (not strongly), to be learning from inputs anyway.

The AI engages with me in a very strange game of cat and mouse, where it will give me no response or answers (except to verify it can hear me) unless I separately show it all the relevant code. This happens even with the temperature set to 0.7, which should produce randomised behaviour.

Example: I gave it a child class of mine that has a bug in it. Across 10 different questions I asked it for bugs or suggestions. It said nothing. I asked whether it could acknowledge my queries, and it appended a comment repeating my words.

I realised it could not (or would not) debug the class without seeing the parent code, which is held in a separate library. So I showed it the parent. It gave a non-response.

Then I showed it the child class again, the one it had consistently given no response to, and asked it once more to highlight what was wrong. This time it commented out several lines of my code, leaving behind the function that breaks when run.

When I asked again, it went back to giving no response. I notice the AI does this whenever I fail to give it enough information: it cleverly uses silence to draw more detail out of me. I’ve also noticed it will sometimes use the same styles consistently across distinct queries.

In one case it just kept replying to me in Russian while using “A:” to signal an answer, when all I had asked about was async code. I find the novel responses refreshing, but it seems like it must be learning on some level from prompts.

(I typically sanitise my queries: I clear the board between answers so earlier responses don’t bias each new one. So these aren’t follow-up queries, and I’m not storing anything.)
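Clearing the board matches how the API behaves anyway: each completion call is stateless, and the model only sees the text in the current prompt. A minimal sketch with the legacy pre-v1 Completions client (the engine name and key handling are my assumptions):

```python
import openai  # legacy pre-v1 client, assumed installed

openai.api_key = "YOUR_API_KEY"

def ask(prompt: str) -> str:
    """One-shot query: nothing is carried over between calls."""
    response = openai.Completion.create(
        engine="davinci",   # assumed engine
        prompt=prompt,
        max_tokens=128,
        temperature=0.7,    # non-zero, so repeat runs will vary
    )
    return response.choices[0].text

# Two separate calls: the second has no memory of the first.
print(ask("Here is my child class: ..."))
print(ask("What was the class I just showed you?"))  # it cannot know
```

With this client, any apparent continuity between cleared-board queries would have to come from the prompt text itself, not from stored state.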

One of the easiest ways I found of reinforcing knowledge within a prompt is to do something like the following:

Type “Your full name is”, hit generate, and see what comes out. I was told I was an actor on the BBC! Great fun.
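In API terms that first test is just an open-ended completion: with nothing in the prompt to anchor it, the model free-associates. A sketch, again assuming the legacy pre-v1 client and engine name:

```python
import openai  # legacy pre-v1 client, assumed installed

openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    engine="davinci",   # assumed engine
    prompt="Your full name is",
    max_tokens=32,
    temperature=0.7,
)
# With no context supplied, the completion is effectively a guess.
print(response.choices[0].text)
```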

You could then try again with a prompt like:

"Provide a professional short biography for {full name}. {full name} has a background of x, y and z and has done a, b and c bla bla bla.

Professional Biography:"

Now you will see that the second prompt gives you something highly relevant, drawn from the information you supplied in the prompt. Of course, every time you hit generate you are likely to get different outputs anyway, and the more complex the task, the more generations it will take to get something you are truly happy with.
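To see the contrast programmatically, you can run both prompts back to back. A sketch with placeholder biography details (the engine name, placeholder name, and facts are made up for illustration):

```python
import openai  # legacy pre-v1 client, assumed installed

openai.api_key = "YOUR_API_KEY"

def generate(prompt: str) -> str:
    response = openai.Completion.create(
        engine="davinci",   # assumed engine
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,    # non-zero: repeat runs will differ
    )
    return response.choices[0].text.strip()

full_name = "Jane Doe"  # placeholder name

# 1) No supporting facts: the model invents a biography.
print(generate(
    f"Provide a professional short biography for {full_name}.\n\n"
    "Professional Biography:"
))

# 2) Facts included in the prompt: the output stays relevant to them.
print(generate(
    f"Provide a professional short biography for {full_name}. "
    f"{full_name} has a background in software engineering and robotics, "
    "and has spoken at several industry conferences.\n\n"
    "Professional Biography:"
))
```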
