Agree, short phrases are not ideal. I find somewhere between 100-300 tokens is a good cutoff.
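For reference, here is a minimal sketch of chunking along those lines, assuming the tiktoken tokenizer; chunk_text and the 100/300 bounds are hypothetical stand-ins, not code from this thread:

```python
import tiktoken

def chunk_text(text, min_tokens=100, max_tokens=300):
    """Split text into chunks of roughly 100-300 tokens before embedding.
    Hypothetical helper; boundaries are naive (token count only)."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    chunks = []
    for start in range(0, len(tokens), max_tokens):
        piece = tokens[start:start + max_tokens]
        if len(piece) >= min_tokens or not chunks:
            chunks.append(enc.decode(piece))
        else:
            # Fold a trailing fragment below the cutoff into the last chunk.
            chunks[-1] += enc.decode(piece)
    return chunks
```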
Yeah, and that’s saying it very politely.
To illustrate this, I have a DB with hundreds of completions. Initially, when there was an API error, I put “No Reply” in the DB.
Then later, when testing with, for example, “Hello World” as a search term:
You can see that “No Reply” is “way up there” in similarity. So later, I changed it to:
There was no reply from OpenAI. There could be many reasons for this problem. The most common problem is that the OpenAI API cannot handle the load.
…and of course, that ranks much lower:
However, if I just do a simple “Hello World” text search in the DB, I get even better results, as we mentioned.
In other topics we have seen people discuss using embeddings with short phrases or even a single keyword, and that is not a good idea (as @nelson agrees); but it’s easier to understand if developers test various search methods / types. Currently I test (compare and contrast) three search methods:
Sometimes, I test correlation (ranking) methods, just for fun:
I’m not sure about others, but it seems prudent to actually test, understand, and compare search methods before “jumping all in” with embeddings for many classes of searches and use cases.
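As a rough illustration of that kind of side-by-side test, here is a sketch (mine, not the poster’s actual code) comparing a plain substring search against embedding cosine-similarity ranking over the same rows; db_rows and embed are assumed stand-ins for the real schema and embedding call:

```python
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def compare_searches(query, db_rows, embed):
    """db_rows: list of (text, embedding) pairs; embed: fn text -> vector.
    Both are hypothetical stand-ins, not the poster's schema."""
    # Method 1: plain keyword / substring match.
    keyword_hits = [text for text, _ in db_rows if query.lower() in text.lower()]

    # Method 2: embedding similarity ranking (cosine).
    q_vec = embed(query)
    ranked = sorted(db_rows, key=lambda row: cosine_sim(q_vec, row[1]), reverse=True)

    # Contrast the two: short filler rows like "No Reply" can rank
    # surprisingly high on similarity while never matching the keyword.
    return keyword_hits, [text for text, _ in ranked[:5]]
```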
How did you do this? I’m paying money for something like this and no one can figure it out
Hi Drew,
It’s a combination of reading unstructured data from documents using different downstream tasks, creating a database of embeddings, and using the OpenAI Completions API. Feel free to message me if you are interested in testing it out.
Cheers
Hi Nelson, I tried it out and gave you some feedback via LinkedIn message. I am wondering if I missed a step, though, since it didn’t seem to me that the app was relying on my library. Is one’s library ready almost instantaneously? Mine appeared to be, but that seems unrealistic if embeddings are being created from the text.
@vasanth.finance
Sure, are you looking to write code and build it yourself from scratch, or to use an integration like Google Spreadsheets with the OpenAI API?
Found your message! That makes sense. One of the hardest parts of using a large language model is fact-checking its answers. I’ll take a deeper look at your use case and see how we can provide better results. Stay tuned.
Hi Nelson,
Great approach. Really interested; maybe you could reach out to philipp.hoellermann@dwg.de. We are thinking about “fine-tuning” the system with a lot of study materials (much of it in PDF format) to create an online learning tutor. Do you think your solution might work in this direction?
Best, Philipp
Hi Philipp
We could test it out and see if it will work for your use cases. Let’s chat.
Cheers
Nelson
Any github code? This sounds amazing!
Is there a link so we can test ride?
Hi Raul,
Since it is fairly new and not completely polished, I’m inviting one person at a time and guiding them through the onboarding process. You do need to have an active OpenAI account set up to use it. When you’re ready, please message me and I will walk you through the process. Thanks.
How are you using the Completions and Embeddings endpoints together? I can’t see a way in the API to pass embeddings to the Completions endpoint, and none of OpenAI’s examples show anything similar. At the moment I’m only able to pass one page’s worth of content from my site via the prompt to answer questions about that specific page (before I run out of tokens due to input length). I would love to be able to do my entire site in an easier way, so I’d love to hear how you’re doing it!
Hi Dale,
That makes sense. The Completions API only accepts text and returns text; the Embeddings API accepts text and returns an embedding vector. You can look at this thread of mine, Google Spreadsheet + GPT3 - #28 by notifications; the gsheet there demonstrates how to do this, and the source code is embedded in the Apps Script.
What you need to do is compare the user input embedding (computed on the fly) to your database of embeddings using cosine similarity. Based on the similarity scores, select the most relevant embeddings. Then go back, retrieve the original text corresponding to those embeddings, and add that text to your prompt.
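A minimal sketch of that flow in Python, assuming the openai library (the thread’s own example is in Apps Script) and a pre-built list of (text, embedding) rows; the model names here are assumptions, not confirmed by the thread:

```python
import numpy as np
import openai

def answer_with_context(question, rows, top_k=3):
    """rows: list of (text, embedding) pairs built ahead of time.
    Sketch of the retrieve-then-complete pattern described above."""
    # 1. Embed the user input on the fly.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=question)
    q_vec = np.array(resp["data"][0]["embedding"])

    # 2. Rank stored rows by cosine similarity to the question.
    def sim(vec):
        v = np.array(vec)
        return float(np.dot(q_vec, v) / (np.linalg.norm(q_vec) * np.linalg.norm(v)))
    best = sorted(rows, key=lambda r: sim(r[1]), reverse=True)[:top_k]

    # 3. Put the retrieved original text into the prompt, then complete.
    context = "\n\n".join(text for text, _ in best)
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
    completion = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=256
    )
    return completion["choices"][0]["text"].strip()
```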
Hello! This looks super cool and similar to what I’m trying to do:
I’m trying to generate documentation based off our code and have it make updates as our code changes (without having to update the docs manually)
The thing that I’m struggling with is the required input/output. What do you enter as the output for your PDFs?
Thanks!
Hi Peter,
Auto-generating documentation from code is very cool; I’d love to use it myself. A way to simply ask what the code does would be very cool as well. Which OpenAI model are you using to generate the documents?
Regarding the input and output for the app, it’s all text-based.
Thanks! I’m following the OpenAI docs and sticking to davinci 3 for now until I get it right.
What I’m unsure about is how I “ingest” the data without having to also submit the results like in the examples in the docs.
I think what I need to do is use the Files API to upload all the functions, upload the current state of the docs, and ask “which functions need updating”, but that’s been the hard part.
Thanks!