Fine-tune GPT Prompt With Your Own Data

I have this same type of app as a Ruby on Rails project on my desktop. However, I did not add a function to read PDF files (easy enough to do), and I only input text indirectly from completions, direct input forms, etc.

As a Rails app, it uses a database to store all the prompts, completions, vectors (embeddings), the model used to create the vector, a “topic tag”, etc.

I also use this desktop app to compare vector searches with full-text DB searches. If you run this comparison on text, you will find that vector (embedding) methods are not always optimal compared to full-text searches, especially when the text is only a few words long. For longer text, embedding-vector searches perform OK, but they fall short for short phrases and single keywords.
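For anyone who wants to reproduce this kind of comparison, here is a minimal Python sketch. It is not the Rails app's actual code: the toy 3-d vectors stand in for real embeddings, and a substring match stands in for a proper full-text DB search.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy corpus of (text, embedding) rows. Real embeddings would come from
# an embeddings API; these short vectors are stand-ins for illustration.
corpus = [
    ("hello world example", [0.9, 0.1, 0.2]),
    ("no reply from the server", [0.2, 0.8, 0.1]),
    ("a longer passage about greetings and salutations", [0.7, 0.3, 0.3]),
]

def vector_search(query_vec):
    # Rank the whole corpus by similarity to the query vector.
    return sorted(corpus, key=lambda row: cosine(query_vec, row[1]), reverse=True)

def fulltext_search(term):
    # Simple substring match, standing in for a DB full-text search.
    return [row for row in corpus if term.lower() in row[0]]

print([text for text, _ in vector_search([0.9, 0.1, 0.2])])
print([text for text, _ in fulltext_search("hello")])
```

The key difference to observe is that the vector search always returns a full ranking (everything gets *some* similarity score), while the full-text search returns only genuine matches, which is often what you want for short queries.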

2 Likes

Agreed, short phrases are not ideal. I find somewhere between 100 and 300 tokens is a good cutoff.

2 Likes

Yeah, and that’s saying it very politely.

:slight_smile:

2 Likes

To illustrate this, I have a DB with hundreds of completions. Previously, when there was an API error, I initially stored “No Reply” in the DB.

Then later, when testing, for example “Hello World” as a search term:

You can see that “No Reply” is “way up there” in similarity. So later, I changed it to:

There was no reply from OpenAI. There could be many reasons for this problem. The most common problem is that the OpenAI API cannot handle the load.

…and of course, that ranks much lower:

However, if I just do a simple “Hello World” text search in the DB, I get even better results, as we mentioned.

In other topics, we have seen people discuss using embeddings with short phrases or even a single keyword, and that is not a good idea (as @nelson agrees); but it’s easier to understand if developers test various search methods and types. Currently I test (compare and contrast) 3 search methods:

Sometimes, I test correlation (ranking) methods, just for fun:
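The post doesn’t say which correlation method is used, so as one hedged example: Spearman’s rank correlation is a common way to compare the rankings two search methods produce over the same documents.

```python
def spearman(rank_a, rank_b):
    # Spearman's rho for two rankings of the same n items (no ties):
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical ranks that two search methods assign to the same 5 documents.
vector_ranks = [1, 2, 3, 4, 5]
fulltext_ranks = [2, 1, 3, 5, 4]
print(spearman(vector_ranks, fulltext_ranks))  # 0.8: high but imperfect agreement
```

A rho near 1.0 means the two methods largely agree on ordering; a low or negative rho is a signal worth investigating before committing to one method.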

I’m not sure about others, but it seems prudent to actually test, understand, and compare search methods before “jumping all in” with embeddings for many classes of searches and use cases.

4 Likes

How did you do this? I’m paying money for something like this, and no one can figure it out.

Hi Drew,
It’s a combination of reading unstructured data from documents using different downstream tasks, creating a database of embeddings, and using the OpenAI Completions API. Feel free to message me if you are interested in testing it out.
Cheers

Hi Nelson, I tried it out and gave you some feedback via LinkedIn message. I am wondering if I missed a step, though, since it didn’t seem to me that the app was relying on my library. Is one’s library ready almost instantaneously? Mine appeared to be, but that seems unrealistic if embeddings are being created from the text.

@nelson - Can you provide me pointers to your code? I am looking for something similar.

2 Likes

@vasanth.finance
Sure. Are you looking to write code to build it yourself from scratch, or to use an integration like Google Spreadsheets with the OpenAI API?

1 Like

Found your message! That makes sense. One of the hardest parts of using large language models is fact-checking the answers. I’ll take a deeper look at your use case and see how we can provide better results; stay tuned.

1 Like

Hi Nelson,

Great approach. Really interested; maybe you could reach out to philipp.hoellermann@dwg.de. We are thinking about “fine-tuning” the system with a lot of study materials (much of it in PDF format) to create an online learning tutor. Do you think your solution might work in this direction?

Best, Philipp

1 Like

Hi Philipp

We could test it out and see if it will work for your use cases. Let’s chat.
Cheers
Nelson

1 Like

Any github code? This sounds amazing!

2 Likes

Is there a link so we can test ride?

Hi Raul,
Since it is fairly new and not completely polished, I’m inviting one person at a time and guiding them through the onboarding process. You do need to have an active OpenAI account set up to use it. When you’re ready, please message me and I will walk you through the process. Thanks.

How are you using the completions and embeddings together? I can’t see a way in the API to pass the embeddings to the completions endpoint, and none of OpenAI’s examples show anything similar. At the moment I’m only able to pass one page’s worth of content from my site via the prompt to answer questions about that specific page (before I run out of tokens due to input length). I would love to be able to do my entire site in an easier way, so I’d love to hear how you’re doing it!

1 Like

Hi Dale,

That makes sense. The completions API only accepts text and returns text; the embeddings API accepts text and returns an embedding. You can look at this thread of mine, Google Spreadsheet + GPT3 - #28 by notifications; the gsheet in there demonstrates how to do this, and the source code is also embedded in the App Script.

What you need to do is compare the user input’s embedding (computed on the fly) to your database of embeddings using cosine similarity. Based on the similarity scores, select the most relevant embeddings. Then go back and retrieve the original text corresponding to each of those embeddings, and add that text to your prompt.
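The steps just described can be sketched in a few lines of Python. This is not the gsheet’s actual App Script: the `embed()` stub and the toy vectors are placeholders for a real embeddings API call, and only the retrieve-then-prompt flow comes from the post.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# In a real app, embed() would call an embeddings API; here it looks up
# precomputed toy vectors so the sketch runs on its own.
TOY_VECTORS = {
    "How do I reset my password?": [0.9, 0.1, 0.0],
    "Password resets are done from the account settings page.": [0.8, 0.2, 0.1],
    "Our office is open Monday to Friday.": [0.1, 0.9, 0.2],
}

def embed(text):
    return TOY_VECTORS[text]

# "Database" of (original_text, embedding) rows, keeping the source text
# alongside each vector so it can be retrieved later.
db = [
    ("Password resets are done from the account settings page.",
     TOY_VECTORS["Password resets are done from the account settings page."]),
    ("Our office is open Monday to Friday.",
     TOY_VECTORS["Our office is open Monday to Friday."]),
]

def build_prompt(user_input, top_k=1):
    q = embed(user_input)                                   # embed on the fly
    ranked = sorted(db, key=lambda row: cosine(q, row[1]), reverse=True)
    context = "\n".join(text for text, _ in ranked[:top_k]) # retrieve original text
    # The retrieved text is prepended so a completions call can ground its answer.
    return f"Context:\n{context}\n\nQuestion: {user_input}\nAnswer:"

print(build_prompt("How do I reset my password?"))
```

The string returned by `build_prompt` is what you would then send to the completions endpoint; the embeddings themselves never go to it, only the text they point back to.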

2 Likes

Hello! This looks super cool and similar to what I’m trying to do:

I’m trying to generate documentation based on our code and have it make updates as our code changes (without having to update the docs manually).

The thing that I’m struggling with is the required input/output. What do you enter as the output for your PDFs?

Thanks!

2 Likes

Hi Peter,

Auto-generating documentation from code is very cool; I’d love to use it myself. A way to simply ask what the code does would be very cool as well. Which OpenAI model are you using to generate the documents?

Regarding the input and output for the app, it’s all text based.

1 Like