Embedding a GPT in a website

Does anyone know how I can embed / integrate a custom GPT on a WordPress website in the form of a chatbox, without users needing to log in to an OpenAI account?

Use the API?
Use a WordPress plugin and specify the GPT?

Thanks!

4 Likes

Not sure about WordPress, but I'm using Flask for my web implementation. It's currently privately hosted, so I'm not sure how well it will scale, since we're mostly using it to demo to others and to test mobile scaling.
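For anyone curious, a minimal sketch of that approach could look something like the following: a single Flask endpoint that forwards the user's message to the Chat Completions API. The route name, model, and system prompt here are placeholder assumptions, not a description of my actual setup:

```python
# Minimal sketch: a Flask endpoint that proxies chat messages to OpenAI.
# Route name, model, and system prompt are placeholder assumptions.
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.route("/chat", methods=["POST"])
def chat():
    user_message = request.json.get("message", "")
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": "You are a helpful website assistant."},
            {"role": "user", "content": user_message},
        ],
    )
    return jsonify({"reply": response.choices[0].message.content})

if __name__ == "__main__":
    app.run(debug=True)
```

The frontend (a chatbox widget on the site) just POSTs the user's message to that endpoint and renders the reply.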

1 Like

Thanks Shamair! I'll look into that. We wouldn't necessarily need big scale, but we want to find a clean way of doing it. I can't really find any documentation from OpenAI on this yet… ?

Not sure about a WordPress plugin, but using the Assistants API you could integrate the same functionality in any frontend form factor.
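As a rough sketch (the Assistants API is still in beta, so details may change), the backend behind any frontend follows the same flow: create a thread, add the user's message, run your assistant on it, and poll for the result. The assistant ID below is a placeholder:

```python
# Rough sketch of the Assistants API flow (beta at the time of writing).
# ASSISTANT_ID is a placeholder for an assistant you have already created.
import time
from openai import OpenAI

client = OpenAI()
ASSISTANT_ID = "asst_..."  # placeholder

def ask_assistant(user_message: str) -> str:
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=user_message
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=ASSISTANT_ID
    )
    # Poll until the run finishes (no streaming in this simple sketch).
    while run.status not in ("completed", "failed", "cancelled", "expired"):
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    # The newest message is first in the list; extract its text content.
    return messages.data[0].content[0].text.value
```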

1 Like

I highly doubt you can do this with the new feature. In fact, just using a GPT (even if you share it publicly) requires the user to have ChatGPT Plus, so only users on the $20 plan can access GPTs right now.

3 Likes

Thanks @engagepy and @alden ! I was hoping to have the GPT both on the (coming) store and on my own website, but this could be a good workaround indeed.

Would GPTs and the Assistants API basically work the same in terms of model use?

Testing in the Assistants playground, it felt like it was relying heavily on the uploaded retrieval docs, and if it couldn't find anything there it would stop rather than fall back on general GPT-4 knowledge…

@sfvp Yes, GPTs and the Assistants API work the same way functionally and call on the same models.

You can prompt your Assistant to generate or infer an answer when the information isn't found in the files; that is an easy tweak. Going further, you can even ask it to search online or use the uploaded documents, which is certainly possible today. It may require some tweaking of system prompts and settings to achieve.

However, please note that GPT usage does not show up in your API usage dashboard (double-check to confirm). API calls do show up in usage and are billed. So while both can serve the same purpose functionally, if you want to build a custom interface and consume this technology on other platforms, the API is the way to go for now.
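To illustrate the instructions tweak mentioned above, here is a hedged sketch of creating an Assistant with retrieval whose instructions explicitly allow falling back to general model knowledge when the files don't cover a question. The name, file IDs, and wording are placeholders, not a recommended production prompt:

```python
# Sketch only: an Assistant with retrieval plus instructions that permit
# falling back to general model knowledge. Name, file IDs, and prompt
# wording are placeholders.
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Website Helper",  # placeholder name
    model="gpt-4-1106-preview",  # model available at the time of writing
    tools=[{"type": "retrieval"}],
    file_ids=["file_abc123"],  # placeholder: IDs of previously uploaded files
    instructions=(
        "Answer using the uploaded documents when they are relevant. "
        "If the documents do not cover the question, answer from your own "
        "general knowledge instead of refusing, and say which source you used."
    ),
)
print(assistant.id)
```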

Excited to see where you take it, do share it once ready :slight_smile:

Thanks very much @engagepy ! That's good to know. I think I'll create both an assistant via the Assistants API and a separate public GPT, and use the same instructions and retrieval docs for both.

I’ll certainly share the final product here with explanations and such!

And then we’ll see if it can ever be combined in the future. Exciting times!

Thanks again!

1 Like

I would recommend doing the public GPT first and getting that right before digging into the Assistants API. That will give you a clear deployment path.

If you want, you can share your public GPT here so that we can check it out. (Also: There might be simpler ways to achieve what you want to achieve :slight_smile: )

2 Likes

I’m also looking to embed a custom GPT into my website. What easier way are you talking about?

Thanks @alden ! Will do that, just waiting on access now :slight_smile:

With the new GPTs and Assistants features being in beta, I doubt he's added them yet, but the “AI Engine” plugin by Meow Apps sounds like it could do what you need. I'm sure once the Assistants are out of beta they could be incorporated, but this is a great plugin to test GPT on your site with.

Thanks @cl0ud6uru !

That looks like a good plugin indeed; the Pro version also has a “learn extra knowledge” mode, similar to retrieval, which they call embeddings.

I'm not sure how good this tech is, though, compared to native OpenAI retrieval within the new Assistants API… any thoughts?

I only used that plugin for a few weeks when I first started getting into this stuff. I learned very quickly that I'd need to code my own stuff to get it to do what I want. This was mostly GUI-related, but it quickly turned into “I need to be able to code all of it” lol. So I've been using LangChain to code my own agents and such. Just started messing around with the new API last night. I'd suggest coding up a few assistants and testing their functionality, then comparing them against other options. There are a lot of tutorials for coding simple bots on YouTube. Thankfully I've been a SysAdmin/Engineer for quite a few years and was good with PowerShell, so learning Python wasn't too much of a hassle.

When comparing my LangChain agents that use vector stores with semantic search for RAG against the new Assistants API, I liked the new Assistants' responses better. Unfortunately, with the new API we're limited to 20 files per assistant. My RAG agent uses way more than 20 docs, since I used our SoP documentation to build an SoP bot. If they raise the 20-file limit, and the cost isn't too much, this could be the more elegant solution.
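For comparison, the LangChain side of that kind of setup is roughly this pattern: split the docs, embed them into a vector store, and wrap it in a retrieval QA chain. This is a sketch only; exact import paths depend on your LangChain version, and the docs folder, chunk sizes, and model name are placeholders:

```python
# Sketch of a LangChain RAG pipeline (import paths vary by LangChain version;
# folder path, chunk sizes, and model name are placeholders).
# Requires the faiss-cpu package for the local vector index.
from langchain.document_loaders import DirectoryLoader, TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# Load and chunk the SoP documents.
docs = DirectoryLoader("./sop_docs", glob="**/*.txt", loader_cls=TextLoader).load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# Embed the chunks into a local FAISS index for semantic search.
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Wrap retrieval + generation into a single QA chain.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4", temperature=0),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
)

print(qa.run("What is the escalation procedure for a P1 incident?"))
```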

2 Likes

@cl0ud6uru that's probably a good strategy indeed :slight_smile: I feel most plugins and extra layers just create noise and make you lose control. Maybe I can hire you at some point haha.

In my case, the bot will use 75% native GPT-4 knowledge and add 25% specific data to the outputs. So my feeling is that staying with the Assistants API / GPTs will in the end offer more natural results, also considering the speed at which OpenAI is innovating…

1 Like

They are innovating quite fast, but it's a double-edged sword. I spent a few months learning and coding up this “SoP Bot” just for OpenAI to release the same features natively. Once they raise the 20-file limit I'll be able to remove a good chunk of my code. Either that, or I'll be learning to chain 10 assistants together with 20 docs apiece: have one assistant that can call a function to pick the assistant to use based on the user query, then pass the query to it using the same thread :rofl:
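If anyone ever does try that chaining idea, a purely hypothetical sketch (assistant IDs and topic names are placeholders) could be a cheap classification call that picks which specialized assistant to run on the shared thread:

```python
# Hypothetical sketch of the "router" idea: pick a specialized assistant with
# a cheap classification call, then run it on the shared thread.
# Assistant IDs and topic names are placeholders.
from openai import OpenAI

client = OpenAI()

ASSISTANTS = {  # placeholder mapping of topic -> assistant ID
    "networking": "asst_net_123",
    "security": "asst_sec_456",
    "onboarding": "asst_onb_789",
}

def route_and_run(thread_id: str, user_message: str) -> None:
    # Ask a small model which topic the query belongs to.
    choice = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Classify the question into exactly one of: "
                        + ", ".join(ASSISTANTS)},
            {"role": "user", "content": user_message},
        ],
    ).choices[0].message.content.strip().lower()
    assistant_id = ASSISTANTS.get(choice, ASSISTANTS["onboarding"])  # fallback

    # Add the message to the shared thread and run the chosen assistant on it.
    client.beta.threads.messages.create(
        thread_id=thread_id, role="user", content=user_message
    )
    client.beta.threads.runs.create(thread_id=thread_id, assistant_id=assistant_id)
```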

1 Like

Haha yes, these are the times we're living in, trying to stay ahead of the AI :joy:

Thanks @alden !

I was also looking at your plugin, and it looks very good!
But am I right in saying it doesn't “blend” my own content with native GPT-4 general knowledge in its outputs?

99% of our customers DO NOT want to blend the content. In fact, anti-hallucination is our most requested feature. For the remaining 1%, we have an option to blend the content. It's called “My Content + ChatGPT”.

I think popular plugins, frameworks, and solutions will continue to be absorbed by OpenAI. Any successful technology stack iterates and eventually absorbs the most popular third-party solutions.

1 Like