Custom ChatGPT - How to deploy it yourself

A lot of people I've talked to in the community are looking to customize ChatGPT, so I created an app that lets you customize your own ChatGPT using documents, videos, audio, and web pages. You can use it yourself or deploy it as a web chatbot.

Free to use at https://www.superinsight.ai, or message me at linkedin.com/in/nelsonchu247 if you want to connect.
Thanks.

8 Likes

Here is an example of taking custom data on a website and running GPT on top…

5 Likes

Nice Nelson. Is this using embeddings?

1 Like

Hi @Oscarr
Yes, this is using embeddings.

1 Like

Perfect, I have made a similar example. Do you know any way to do something similar without embeddings and without sending the full text as context? What I mean is: if a person has a lot of questions about the same PDF, why do you need to send some embeddings each time instead of sending everything the first time and afterwards only asking the question? I have tried fine-tuning, but it doesn't seem to work well.

1 Like

Hi @Oscarr,

Initially, I created a Google Sheets + GPT integration to help people ask questions about data in a spreadsheet using embeddings. Google Spreadsheet + GPT3 - #24 by songsofthespheres1
This worked for simple use cases, but then some users needed a more robust solution: they wanted something similar but with a lot more custom data. So I created this app to read through multiple documents and create many embeddings ahead of time. When someone asks a question, the app finds the best embeddings to use and includes only the necessary context in the prompt. This seems to be working very well for large custom datasets.
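
To make the flow concrete, here is a minimal sketch of the query-time side. It is not the actual app code: it assumes the pre-1.0 `openai` Python client (current when this was written), a library of chunk embeddings that already exists in memory, and illustrative function names.

```python
import numpy as np
import openai  # pre-1.0 client, e.g. openai==0.27.x

def cosine_similarity(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed(text):
    # One embedding per chunk of text; for the document library these are created ahead of time.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return resp["data"][0]["embedding"]

def answer(question, library):
    # `library` is a list of {"text": ..., "embedding": ...} dicts built at ingestion time.
    q_emb = embed(question)
    ranked = sorted(
        library,
        key=lambda chunk: cosine_similarity(q_emb, chunk["embedding"]),
        reverse=True,
    )
    context = "\n\n".join(chunk["text"] for chunk in ranked[:3])  # only the most relevant chunks

    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    completion = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=300, temperature=0
    )
    return completion["choices"][0]["text"].strip()
```

The important part is that only the top-ranked chunks end up in the prompt, so the context stays within the model's token limit no matter how large the source documents are.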

2 Likes

I have this same type of app as a Ruby on Rails project on my desktop. However, I did not add a function to read PDF files (easy enough to do), and I only input text indirectly from completions, direct input forms, etc.

As a Rails app, it uses a database to store all the prompts, completions, vectors (embeddings), the model used to create the vector, a “topic tag”, etc.
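
Roughly speaking, and sketched here in Python/SQLite with illustrative column names rather than the actual ActiveRecord migration, a table along those lines might look like:

```python
import sqlite3, json

conn = sqlite3.connect("gpt_log.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS interactions (
    id         INTEGER PRIMARY KEY,
    prompt     TEXT NOT NULL,   -- what was sent to the API
    completion TEXT NOT NULL,   -- what came back
    embedding  TEXT NOT NULL,   -- vector stored as a JSON array
    model      TEXT NOT NULL,   -- model used to create the vector
    topic_tag  TEXT,            -- free-form label for grouping
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
""")

def save(prompt, completion, embedding, model, topic_tag=None):
    conn.execute(
        "INSERT INTO interactions (prompt, completion, embedding, model, topic_tag) "
        "VALUES (?, ?, ?, ?, ?)",
        (prompt, completion, json.dumps(embedding), model, topic_tag),
    )
    conn.commit()
```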

I also use this desktop app to compare vector searches with full-text DB searches, etc. If you do this comparison on text, you will find that vector (embedding) methods are not always optimal compared to full-text searches, especially when the length of the text is “just a few words”. For longer text, embedding-vector searches perform OK, but they fall short for short phrases of just a few words.

2 Likes

Agree, short phrases are not ideal. I find somewhere between 100 and 300 tokens is a good cutoff.
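
As a rough sketch of what that cutoff can look like in practice (this assumes the tiktoken tokenizer; the numbers are just the range mentioned above, not hard limits):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def chunk_by_tokens(text, max_tokens=300, min_tokens=100):
    """Split text into chunks of roughly 100-300 tokens before embedding.

    A trailing fragment shorter than `min_tokens` is merged into the previous
    chunk rather than embedded on its own, since very short phrases tend to
    rank poorly in similarity searches.
    """
    tokens = enc.encode(text)
    chunks = [tokens[i:i + max_tokens] for i in range(0, len(tokens), max_tokens)]
    if len(chunks) > 1 and len(chunks[-1]) < min_tokens:
        chunks[-2] += chunks[-1]
        chunks.pop()
    return [enc.decode(c) for c in chunks]
```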

2 Likes

Yeah, and that’s saying it very politely.

:slight_smile:

2 Likes

To illustrate this, I have a DB with hundreds of completions. When there was an API error, I initially put “No Reply” in the DB.

Then later, when testing with, for example, “Hello World” as a search term:

You can see that “No Reply” is “way up there” in similarity. So later, I changed it to:

There was no reply from OpenAI. There could be many reasons for this problem. The most common problem is that the OpenAI API cannot handle the load.

…and of course, that ranks much lower:

However, if I just do a simple “Hello World” text search in the DB, I get even better results, as we mentioned.

In other topics we have seen people discuss using embeddings with short phrases or even a single keyword, and that is not a good idea (as @nelson agrees); but it’s easier to understand if developers test various search methods and types. Currently I test (compare and contrast) 3 search methods:

Sometimes, I test correlation (ranking) methods, just for fun:

I’m not sure about others, but it seems prudent to actually test, understand, and compare search methods before “jumping all in” with embeddings for many classes of searches and use cases.
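
To make the comparison concrete, here is a small, self-contained sketch of the kind of test I mean, ranking the two “no reply” texts from the example above against a short query, both by embedding similarity and by a naive substring match (pre-1.0 openai client assumed; this is not my actual test harness):

```python
import numpy as np
import openai  # pre-1.0 client

def embed(text):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

docs = [
    "No Reply",
    "There was no reply from OpenAI. There could be many reasons for this problem. "
    "The most common problem is that the OpenAI API cannot handle the load.",
]
query = "Hello World"

q = embed(query)
for doc in docs:
    vector_score = cosine(q, embed(doc))        # semantic similarity
    keyword_hit = query.lower() in doc.lower()  # naive full-text check
    print(f"{vector_score:.3f}  keyword_match={keyword_hit}  {doc[:40]!r}")
```

Running a handful of queries through something like this side by side makes it obvious where each method breaks down, which is the whole point of testing before committing.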

4 Likes

How did you do this? I’m paying money for something like this and no one can figure it out

Hi Drew,
It’s a combination of reading unstructured data from documents using different downstream tasks, creating a database of embeddings and using the OpenAI Completion API. Feel free to message me if you are interested in testing it out.
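
If it helps, here is a minimal sketch of the PDF ingestion side only (this assumes the pypdf library; the actual app handles several document types, so treat the names as illustrative):

```python
from pypdf import PdfReader

def extract_pdf_text(path):
    """Pull raw text out of a PDF, page by page, for later chunking and embedding."""
    reader = PdfReader(path)
    pages = [page.extract_text() or "" for page in reader.pages]
    return "\n\n".join(pages)

# text = extract_pdf_text("manual.pdf")  # then chunk and embed as described earlier
```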
Cheers

Hi Nelson, I tried it out and gave you some feedback via LinkedIn message. I am wondering if I missed a step, though, since it didn’t seem to me that the app was relying on my library. Is one’s library ready almost instantaneously? Mine appeared to be, but that seems unrealistic if embeddings are being created from the text.

@nelson - Can you provide me pointers to your code? I am looking for something similar.

2 Likes

@vasanth.finance
Sure, are you looking to write code and build it yourself from scratch, or to use an integration like Google Spreadsheets with the OpenAI API?

1 Like

Found your message! That makes sense. One of the hardest parts of using a large language model is fact-checking the answers. I’ll take a deeper look at your use case and see how we can provide better results. Stay tuned.

1 Like

Hi Nelson,

Great approach, really interested. Maybe you could reach out to philipp.hoellermann@dwg.de. We are thinking about “fine-tuning” the system with a lot of study materials (much of it in PDF format) to create an online learning tutor. Do you think your solution might work in this direction?

Best, Philipp

1 Like

Hi Philipp

We could test it out and see if it will work for your use cases. Let’s chat.
Cheers
Nelson

1 Like

Any GitHub code? This sounds amazing!

2 Likes

Is there a link so we can take it for a test ride?