Pre-trained app environment and user-specific evolving environment

Hi Guys,

I’m new here and looking to see whether GPT-3 can meet our needs.
I’ve experimented quite a bit with AI Dungeon prompts to explore GPT-3’s abilities and got the interaction to a very well-informed state.
For example, I taught the AI about my venture, then discussed with it the pros and cons, the business potential, the best use cases, business models, etc., and even got it to offer creatives for our advertisements and elevator-pitch texts for our investor presentations.
I then spoke with it about another subject, later asked whether it was familiar with my venture, and it remembered it and discussed it again.

That led me to believe that I could build “a world” where the AI is familiar with certain data, so that any free-form discussion stays contextually grounded.

However, from reading the forum and going through the API, it seems that this data cannot be stored, so the AI has to be educated from scratch for every interaction. I thought about file uploads, but that just provides raw data without really training the model on it.

My goal is to enable two things:

  1. Have a pre-educated response to the first prompt (an AI primed with contextual knowledge).
  2. Afterwards, enable each user to have their own interaction, which continues from the last discussion so the AI builds up knowledge about that user.

The example use case would be a customer-service interaction. This example exists in the Playground, but there is no mention of how to train it to answer based on real data and real understanding.
The AI needs to know about the product/service so it can explain it accurately.
Later, if a user interacts again, the AI should remember the last conversation, so it will not be repetitive and will be aligned with the situation.
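
Roughly, this is the structure I have in mind; just a sketch to illustrate the idea, where the primer text, the per-user store, and the helper name are placeholders I made up, not an existing API:

```python
# Fixed, "pre-educated" context that every conversation starts from (made-up example text)
PRODUCT_PRIMER = (
    "You are a support agent for Acme Widgets. Acme Widgets sells modular "
    "shelving. Returns are accepted within 30 days of purchase.\n"
)

# Per-user memory we keep in our own storage: user_id -> list of prior conversation lines
user_histories = {}

def build_prompt(user_id, new_message):
    # Combine the fixed primer with this user's past turns, then append the new message
    history = "\n".join(user_histories.get(user_id, []))
    return f"{PRODUCT_PRIMER}\n{history}\nCustomer: {new_message}\nAgent:"
```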

Here is a visualization of the concept:

Therefore, my question is: is there a way to perform such a task, and if so, what would be the approach?

Thanks in advance.

N.


It’s really hard with the 2048-token context window, but yeah, you basically have to fit the “memory” into the prompt each and every time… at least currently.
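
Something along these lines, for example; a rough sketch using the Python openai package’s Completions endpoint, where the primer text and the history list are things your app maintains itself:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def chat_turn(primer, history, user_message):
    # Rebuild the full "memory" into the prompt on every single call
    prompt = primer + "\n" + "\n".join(history) + f"\nCustomer: {user_message}\nAgent:"

    response = openai.Completion.create(
        engine="davinci",        # GPT-3 completions engine
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
        stop=["\nCustomer:"],
    )
    reply = response.choices[0].text.strip()

    # Save this exchange so the next call can include it as memory again
    history.extend([f"Customer: {user_message}", f"Agent: {reply}"])
    return reply
```

The model itself keeps no state between calls, so whatever you don’t put back into the prompt is simply gone.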

That means the cost of creating the context for each interaction is pretty high.
If I copy-paste the 2048-token text from the training session, would the outcome be the same?
Meaning, is there anything behind the scenes that would get lost from the original thread?
I mostly talk about AI training and behavior.
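
For what it’s worth, here is a quick way to check whether a pasted context actually fits in that window; just a sketch that uses the GPT-2 tokenizer from the transformers library as a close stand-in for GPT-3’s encoding:

```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def fits_in_window(context_text, max_tokens=2048, reserve_for_completion=150):
    # Count the tokens in the pasted context and leave room for the model's reply
    n_tokens = len(tokenizer.encode(context_text))
    return n_tokens + reserve_for_completion <= max_tokens
```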

I’m writing a book about this and plan to self-publish in the coming months. I’ll let you know when I do.


Sounds great. Thank you!


Hey Dave,

Any update on the book? I’m really looking forward to reading it.


What you are aiming to achieve sounds great. I hope there are people here who can help you with your project.

Not sure I can bring anything to the table. :confused:

I should be getting the first proof tomorrow or the next day! Just a few more things to polish up and then it will be ready.

Awesome. I’d be happy to be a beta-reader for you. :slight_smile:

That’s very kind, and if you had been around a month ago I might have agreed, but at this point it’s down to formatting and line edits: making sure it looks right in print and there aren’t any typos.


I understand. Just very excited. I’m just starting to learn about all this stuff and your comments have been very insightful. I’ll be the first one to buy your book once it comes out. :slight_smile: