Perfect, I have made a similar example. And do you know any way to make something similar with no embeddings and without sending the full text as context? What I mean is: if a person has a lot of questions about the same PDF, why do you need to send embeddings each time instead of sending everything the first time and, after that, only asking the questions? I have tried fine-tuning, but it doesn't seem to work well.
Hi @Oscarr,
Initially, I created a Google Sheet + GPT integration to help people ask questions using data on the spreadsheet with embeddings. Google Spreadsheet + GPT3 - #24 by songsofthespheres1
This works for simple use cases. But then some of the users needed a more robust solution: they wanted something similar but with a lot more custom data. So I created this app to read through multiple documents and create many embeddings ahead of time. When someone asks a question, the app finds the best embeddings to use and includes only the necessary context in the prompt. This seems to be working very well for large custom datasets.
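For anyone curious, the "embed ahead of time" step looks roughly like this. This is a minimal sketch of the general pattern, not my actual app code; it assumes the pre-1.0 openai Python package, text-embedding-ada-002, and a naive whitespace chunker (chunk_text and build_index are hypothetical helper names):

```python
# Minimal indexing sketch (assumptions: pre-1.0 openai package, naive chunking).
import openai

openai.api_key = "sk-..."  # your API key

def chunk_text(text, max_words=200):
    # Naive chunker: split on whitespace into ~200-word pieces.
    # A real chunker would respect sentence and paragraph boundaries.
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def build_index(documents):
    # Embed every chunk once, up front, and store the original text
    # next to its vector so the text can be retrieved at question time.
    index = []
    for doc in documents:
        for chunk in chunk_text(doc):
            resp = openai.Embedding.create(model="text-embedding-ada-002", input=chunk)
            index.append({"text": chunk, "vector": resp["data"][0]["embedding"]})
    return index
```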
I have this same type of app as a Ruby-On-Rails project on my desktop. However, I did not add a function to read PDF files (easy enough to do) and only input text indirectly from completions, direct input forms, etc.
As a Rails app, it uses a database to store all the prompts, completions, vectors (embeddings), the model used to create the vector, a “topic tag”, etc.
I also use this desktop app to compare vector searches with full-text DB searches, etc. If you do this comparison on text, you will find that vector (embedding) methods are not always optimal compared to full-text searches, especially when the length of the text is "just a few words". For longer text, embedding vector searches perform OK, but they fall short for short phrases and single keywords.
Agree, short phrases are not ideal. I find that somewhere between 100 and 300 tokens is a good cutoff.
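If you want to check chunks against a cutoff like that, the tiktoken library counts tokens for you; a quick sketch (the helper name is mine):

```python
# Token-count check for the ~100-300 token band (tiktoken is OpenAI's tokenizer lib).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by text-embedding-ada-002

def in_sweet_spot(text, lo=100, hi=300):
    # Returns whether the chunk's token count falls inside the suggested band.
    n = len(enc.encode(text))
    return lo <= n <= hi, n
```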
Yeah, and that’s saying it very politely.
To illustrate this, I have a DB with hundreds of completions. When there was an API error, I initially put "No Reply" in the DB.
Then later, when testing with, for example, "Hello World" as a search term, you could see that "No Reply" ranked "way up there" in similarity. So later, I changed it to:
There was no reply from OpenAI. There could be many reasons for this problem. The most common problem is that the OpenAI API cannot handle the load.
…and of course, that ranks much lower:
However, if I just do a simple "Hello World" text search in the DB, I get even better results, as we mentioned.
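If you want to reproduce this kind of comparison, here is a small sketch (my illustration, with made-up rows, assuming the pre-1.0 openai package) that ranks stored completions by cosine similarity against a query and then runs a plain substring match over the same rows:

```python
# Compare vector ranking vs. a literal substring match for a short query.
import numpy as np
import openai

def embed(text):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rows = [
    "No Reply",
    "There was no reply from OpenAI. There could be many reasons for this problem.",
    "Hello World is the classic first program.",
]
query = "Hello World"

# Vector search: score every stored row against the query embedding.
q = embed(query)
scored = sorted(((cosine(q, embed(t)), t) for t in rows), reverse=True)
for score, text in scored:
    print(f"{score:.3f}  {text!r}")

# Full-text style search: the literal match finds the exact row directly.
print([t for t in rows if query.lower() in t.lower()])
```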
In other topics we have seen people discuss using embeddings with short phrases or even a single keyword, and that is not a good idea (as @nelson agrees); but it's easier to understand if developers test various search methods and types. Currently I test (compare and contrast) 3 search methods:
Sometimes, I test correlation (ranking) methods, just for fun:
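As one example of a ranking-correlation test (my illustration, not necessarily the method used here), Spearman's rho from scipy compares how two search methods order the same result set; the rank lists below are made up:

```python
# Spearman rank correlation between two orderings of the same 5 documents.
from scipy.stats import spearmanr

vector_ranks   = [1, 2, 3, 4, 5]  # positions assigned by vector search (made up)
fulltext_ranks = [2, 1, 5, 3, 4]  # positions assigned by full-text search (made up)

rho, p = spearmanr(vector_ranks, fulltext_ranks)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```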
I'm not sure about others, but it seems prudent to actually test, understand, and compare search methods before "jumping all in" with embeddings for many classes of searches and use cases.
How did you do this? I'm paying money for something like this and no one can figure it out.
Hi Drew,
It's a combination of reading unstructured data from documents using different downstream tasks, creating a database of embeddings, and using the OpenAI Completions API. Feel free to message me if you are interested in testing it out.
Cheers
Hi Nelson, I tried it out and gave you some feedback via linkedin message. I am wondering if I missed a step though, since it didn’t seem to me that the app was relying on my library. Is one’s library ready almost instantaneously? Mine appeared to be but that seems unrealistic if embeddings are being created from the text.
@vasanth.finance
Sure, are you looking to write code and build it yourself from scratch, or to use an integration like Google Spreadsheets with the OpenAI API?
Found your message! That makes sense. One of the hardest parts of using large language models is fact-checking the answers. I'll take a deeper look at your use case and see how we can provide better results. Stay tuned.
Hi Nelson,
great approach. Really interested; maybe you could reach out to philipp.hoellermann@dwg.de. We are thinking about "fine-tuning" the system with a lot of study materials (much of it in PDF format) to create an online learning tutor. Do you think your solution might work in this direction?
Best, Philipp
Hi Philipp
We could test it out and see if it will work for your use cases. Let's chat.
Cheers
Nelson
Any github code? This sounds amazing!
Is there a link so we can test ride?
Hi Raul,
Since it is fairly new and not completely polished, I'm inviting one person at a time and guiding them through the onboarding process. You do need to have an active OpenAI account set up to use it. When you're ready, please message me and I will walk you through the process. Thanks.
How are you using the Completions and Embeddings APIs together? I can't see a way in the API to pass embeddings to the completions endpoint, and none of OpenAI's examples show anything similar. At the moment I'm only able to pass one page's worth of content from my site via the prompt to answer questions about that specific page (before I run out of tokens due to input length). I would love to be able to cover my entire site in an easier way, so I'd love to hear how you're doing it!
Hi Dale,
That makes sense. The Completions API only accepts text and returns text, and the Embeddings API accepts text and returns an embedding (a vector) for that text. You can look at this thread: Google Spreadsheet + GPT3 - #28 by notifications; the gsheet in there demonstrates how to do this, and the source code is also embedded in the App Script.
What you need to do is compare the user input embedding (done on the fly) to your database of embeddings using cosine similarity. Based on similarity scores, select the most relevant embeddings. Then go back and retrieve the original text corresponding to that embedding, and add that text to your prompt.
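Putting that together, a minimal retrieval sketch might look like the following. It is an illustration of the pattern just described, not the gsheet's actual code; it assumes the pre-1.0 openai Python package, era-appropriate models (text-embedding-ada-002 and text-davinci-003), and an index of {"text", "vector"} records built ahead of time:

```python
# Retrieval sketch: embed the question, rank stored chunks, prompt with the best few.
import numpy as np
import openai

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer(question, index, top_k=3):
    # 1. Embed the user's question on the fly.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=question)
    q = resp["data"][0]["embedding"]

    # 2. Rank the pre-computed chunks by cosine similarity and keep the best few.
    ranked = sorted(index, key=lambda rec: cosine(q, rec["vector"]), reverse=True)
    context = "\n\n".join(rec["text"] for rec in ranked[:top_k])

    # 3. Send only the relevant chunks, not the whole corpus, as prompt context.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    completion = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=300
    )
    return completion["choices"][0]["text"].strip()
```

The key point is that only the top-ranked chunks go into the prompt, so the token budget stays small no matter how large the document set gets.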