A lot of people I talked to in the community are looking to customize ChatGPT, so I created an app that lets you customize your own ChatGPT using documents, videos, audio, and web pages. You can use it yourself or deploy it as a web chatbot.
Perfect, I have made a similar example. Do you know of any way to do something similar with no embeddings and without sending the full text as context? What I mean is: if a person has a lot of questions about the same PDF, why do you need to send some embeddings each time instead of sending everything the first time and, after that, only asking the questions? I have tried fine-tuning, but it doesn’t seem to work well.
Initially, I created a Google Sheet + GPT integration to help people ask questions about data on the spreadsheet using embeddings: Google Spreadsheet + GPT3 - #24 by songsofthespheres1
This works for simple use cases. But then some of the users needed a more robust solution: they wanted something similar but with a lot more custom data. So I created this app to read through multiple documents and create many embeddings ahead of time. When someone asks a question, the app finds the best-matching embeddings and includes only the necessary context in the prompt. This seems to be working very well for large custom datasets.
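Roughly, the flow looks like this (a simplified sketch, not the production code; it assumes the pre-1.0 `openai` Python package and `numpy`, and the chunk texts are placeholders):

```python
# Simplified sketch: embed document chunks once at ingest, then at question
# time embed only the question and rank chunks by cosine similarity.
import numpy as np
import openai

EMBED_MODEL = "text-embedding-ada-002"

def embed(texts):
    resp = openai.Embedding.create(model=EMBED_MODEL, input=texts)
    return [np.array(d["embedding"]) for d in resp["data"]]

# Done once, ahead of time, when the documents are ingested.
chunks = ["first chunk of a document...", "second chunk...", "third chunk..."]
chunk_vectors = embed(chunks)

def best_chunks(question, k=2):
    # Embed only the question, then keep the top-k most similar chunks;
    # only those chunks go into the prompt as context.
    q = embed([question])[0]
    scores = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
              for v in chunk_vectors]
    top = sorted(range(len(chunks)), key=lambda i: scores[i], reverse=True)[:k]
    return [chunks[i] for i in top]
```

The heavy work (embedding the documents) happens once at ingest; each question then costs only one small embedding call plus a similarity scan.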
I have this same type of app as a Ruby-on-Rails project on my desktop. However, I did not add a function to read PDF files (easy enough to do), and I only input text indirectly from completions, direct input forms, etc.
As a Rails app, it uses a database to store all the prompts, completions, vectors (embeddings), the model used to create the vector, a “topic tag”, etc.
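If you want the rough shape of that schema outside of Rails, something like this in plain SQLite captures it (a hypothetical sketch; the table and column names are illustrative, not the project’s actual ActiveRecord schema):

```python
# Hypothetical, illustrative schema; the real app uses Rails/ActiveRecord.
import sqlite3

conn = sqlite3.connect("embeddings.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS completions (
    id          INTEGER PRIMARY KEY,
    prompt      TEXT NOT NULL,
    completion  TEXT,
    vector      BLOB,   -- the embedding, serialized (e.g. JSON or raw bytes)
    model       TEXT,   -- model used to create the vector
    topic_tag   TEXT
)
""")
conn.commit()
```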
I also use this desktop app to compare vector searches with full-text DB searches. If you run this comparison on real text, you will find that vector (embedding) methods are not always optimal compared to full-text searches, especially when the text is only “a few words” long. For longer text, embedding-vector searches perform fine, but they fall short for short phrases and single keywords.
In other topics we have seen people discuss using embeddings with short phrases or even a single keyword, and that is not a good idea (as @nelson agrees); but it’s easier to understand once developers test various search methods and types themselves. Currently I test (compare and contrast) three search methods against each other.
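Here is a minimal way to run that kind of side-by-side test yourself (my own illustration, not my actual test harness; it assumes the pre-1.0 `openai` package and `numpy`, and the corpus and crude term-count scorer are stand-ins for a real full-text DB search):

```python
# Score each stored text against a query with (a) a naive full-text term
# count and (b) embedding cosine similarity, then compare the rankings.
import numpy as np
import openai

def embed(text):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=[text])
    return np.array(resp["data"][0]["embedding"])

corpus = ["dog",
          "canine nutrition basics",
          "a long paragraph about training puppies at home..."]
corpus_vectors = [embed(t) for t in corpus]

def fulltext_score(query, text):
    # Crude stand-in for a DB full-text search: count query-term hits.
    return sum(text.lower().count(t) for t in query.lower().split())

query = "dog"  # a short, keyword-style query, where embeddings struggle
q = embed(query)
for text, vec in zip(corpus, corpus_vectors):
    cos = float(np.dot(q, vec) / (np.linalg.norm(q) * np.linalg.norm(vec)))
    print(f"{text[:40]!r:45} full-text={fulltext_score(query, text)}  vector={cos:.3f}")
```

For a one-word query like this, a full-text hit is unambiguous, while cosine scores between short strings tend to cluster close together, which is exactly the weakness worth measuring before committing to embeddings.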
I’m not sure about others, but it seems prudent to actually test, understand, and compare search methods before “jumping all in” with embeddings for many classes of searches and use cases.
Hi Drew,
It’s a combination of reading unstructured data from documents using different downstream tasks, creating a database of embeddings, and using the OpenAI Completion API. Feel free to message me if you are interested in testing it out.
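To give a feel for how those pieces connect, here is a simplified, illustrative sketch of the last step (the model name and prompt wording below are just examples, and `best_chunks` stands in for the retrieval step sketched earlier in the thread):

```python
# Put only the retrieved context into the prompt, then call the
# Completion API to answer from that context.
import openai

def answer(question, best_chunks):
    context = "\n\n".join(best_chunks(question))
    prompt = (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",   # GPT-3 era completion model, as an example
        prompt=prompt,
        max_tokens=300,
        temperature=0,
    )
    return resp["choices"][0]["text"].strip()
```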
Cheers
Hi Nelson, I tried it out and gave you some feedback via LinkedIn message. I am wondering if I missed a step, though, since it didn’t seem to me that the app was relying on my library. Is one’s library ready almost instantaneously? Mine appeared to be, but that seems unrealistic if embeddings are being created from the text.
Found your message! That makes sense. One of the hardest parts of using a large language model is fact-checking its answers. I’ll take a deeper look at your use case and see how we can provide better results. Stay tuned.
Great approach, really interested; maybe you could reach out to philipp.hoellermann@dwg.de. We are thinking about “fine-tuning” the system with a lot of study materials (much of it in PDF format) to create an online learning tutor. Do you think your solution might work in this direction?