Hello, I have a website with a library of prompts. I want users to have the option to open, via a link, a new ChatGPT chat with the first prompt prepopulated. How can I do that? I’ve been digging through the docs, searching the internet, and asking ChatGPT… but I have found no solution for this. Thanks in advance for any help pointing me in the right direction.
I’m not sure you can pass the initial prompt via the URL. However, you can copy the prompt to the user’s clipboard and open chat.openai.com when they click.
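A minimal sketch of that clipboard approach, assuming a button per prompt; `normalizePrompt` and `openChatGptWithPrompt` are hypothetical names, not anything from OpenAI:

```javascript
// Trim surrounding whitespace so the pasted prompt is clean.
function normalizePrompt(promptText) {
  return promptText.trim();
}

// Hypothetical click handler: copy the prompt, then open ChatGPT in a new tab.
// navigator.clipboard requires a secure (HTTPS) context and a user gesture.
function openChatGptWithPrompt(promptText) {
  navigator.clipboard.writeText(normalizePrompt(promptText)).then(() => {
    window.open("https://chat.openai.com/", "_blank");
  });
}
```

You would wire it to each prompt in the library, e.g. `button.addEventListener("click", () => openChatGptWithPrompt(prompt))`.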
I’m doing it in PHP. For each prompt in my library I have a variable called “$prompt” that captures the prompt text. The farthest I’ve come is passing the content of “$prompt” to the API and rendering the result on a new page of my website. But I need it to behave differently: I need to open ChatGPT with the first question/prompt prepopulated…
From what I’ve read, you could pass content via the URL in the past, but not anymore. Thanks for your suggestion; I will try to make it work, though I think it won’t behave as I need.
One approach - create a custom UI instead of using OpenAI’s demo. A custom UI offers many advantages. I made one in Firebase hosting in about half a day, and it has all sorts of features like the one you want.
I also used CustomGPT to build the embeddings - it allowed me to create a very specific ChatGPT experience in record time.
Feel free to play with it.
Superb! But it is not what I’m looking for… what I want to achieve is (I guess) far easier… By the way, I don’t understand what you mean by “using OpenAI’s demo”…
It’ll work. The user will still have to paste the content in the text box though.
Yes, maybe I didn’t explain myself clearly: as you say, this will work, but I do not want the user to have to paste the content… I thought maybe this was possible with the API, but perhaps it is not. Thanks again for your answers!
ChatGPT is not a perfect solution for every use case. It is a demo inasmuch as it is generalized for hundreds of millions of users. We have assumed it is a comprehensive solution to all use cases; it’s not. If that were OpenAI’s intention, they wouldn’t have an API ready for building custom experiences.
Okay - think it through:
- You already have a web app.
- The users are already in that app.
- The app already has the library of prompts.
- The app needs to make it seamless for the users to execute those prompts.
- Ergo, YOUR “app” needs to have an integrated ability to perform exactly what ChatGPT does.
If you build your current app with an integrated GPT feature, no one will have to copy or paste anything.
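For reference, a minimal sketch of what that integrated GPT feature might look like on the server side, assuming a Node-style environment with `fetch` available, your own `apiKey`, and no error handling:

```javascript
// Build the request body for OpenAI's chat completions endpoint.
function buildChatRequest(prompt) {
  return {
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: prompt }],
  };
}

// Send the prompt and return the assistant's first reply.
async function askGpt(prompt, apiKey) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildChatRequest(prompt)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

With something like this behind your prompt library, users never leave your app, and no copying or pasting is needed.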
Hello Bill, I see your point. Consider mine: if I keep ChatGPT out of my web app, I can save API tokens for testing the prompts in my library. I already have an integrated solution (not aesthetically nice…), but now I want to see if I can do it the other way. Thanks for your answers! I will try it… maybe I can just print the first answer to the prompt, not the entire conversation…
Another option: create a Chromium extension (Chrome, Edge, etc.) that users install. It’s an extra step for them, but I think it could solve your problem cleanly.
In your app, you’d take the prompt you want to prepopulate and append it to the OpenAI URL before you redirect users there.
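A rough sketch of both halves of that idea, assuming the extension reads a hypothetical `prompt` query parameter (ChatGPT itself does not honor this parameter; the extension would have to do the work):

```javascript
// App side: build the redirect URL with the prompt in the query string.
function buildRedirectUrl(promptText) {
  return "https://chat.openai.com/?prompt=" + encodeURIComponent(promptText);
}

// Extension side (content script): recover the prompt from the current URL.
function extractPrompt(url) {
  return new URL(url).searchParams.get("prompt");
}

// The content script would then fill ChatGPT's input box, e.g.:
// document.querySelector("textarea").value = extractPrompt(location.href);
```

The round trip is lossless: whatever the app encodes, the content script decodes.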
Nice one… but the user will still have to install an extension, and not everybody likes to do that… thanks!
I’m not sure I understand this token-saving strategy.
Well, if the chat runs on my site, I spend tokens on every interaction with the chat, right? But if the user has a ChatGPT account (they’d need one), the interactions would continue on the ChatGPT website, wouldn’t they? Maybe I’m wrong in my approach…
That’s correct. There are three basic pathways to provide GPT services to your customers:
- Provide all GPT processing in your app and absorb the inference costs while charging fees that earn a profit. Your customers would not need their own OpenAI accounts.
- Force your customers to each get their own OpenAI account and either use their API keys in your app or push them out of your app and into ChatGPT.
- Create a GPT plug-in that seamlessly performs what your app does. In this approach your customers would need their own OpenAI accounts.
I don’t think there’s any wrong way to implement AI services. They just need to be done in a manner that supports your business case.
My business case is a human-curated library of ChatGPT resources in Spanish, published on my personal site. The idea is to generate leads from potential clients. However, it is not a paid service, which is why I do not want to use the API directly. Anyway, although I haven’t found a solution through this forum, I’m learning a lot! Thanks!
Understood. These business requirements should be stated up front before asking about technical approaches. If you’ve followed my commentary in this community and my other writings here and here, I often try to extract business requirements before addressing technical methods that might help.
In your business, the goal is to optimize lead capture using AI. It may make sense to absorb such costs to give your prospects a seamless experience because the alternatives to business development are also more expensive.
If you force every prospect to have an OpenAI account to benefit from your service, you significantly constrain an ocean of potential candidates, right? You are creating accessibility friction to your valuable service.
One way to determine whether spending your tokens to attract new clients will be financially practical is to:
- Develop a prediction of new clients you gain with and without the AI component.
- Develop the true lifetime value of a single new client.
- Determine the estimated number of queries each visitor will consume (this is where your rate limiter will be handy).
- Determine the estimated number of queries required to acquire a new client.
- Compare the cost of acquiring new clients without AI and with AI.
Unless you are willing to perform these five steps, you will be operating with incomplete data to make decisions concerning the use of AI in your business.
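To make the comparison concrete, here is a toy calculation; every figure below is a made-up assumption for illustration, not data from this thread:

```javascript
// Token cost of acquiring one client via the AI feature:
// queries each visitor runs × cost per query × visitors needed per new client.
function aiCostPerClient(queriesPerVisitor, costPerQuery, visitorsPerClient) {
  return queriesPerVisitor * costPerQuery * visitorsPerClient;
}

// Assumed example: 10 queries/visitor at $0.002 each, 50 visitors per client.
const costWithAi = aiCostPerClient(10, 0.002, 50); // roughly $1 per client
const lifetimeValue = 200; // assumed lifetime value of one client
const costWithoutAi = 25;  // assumed non-AI acquisition cost
const aiIsWorthIt = costWithAi < costWithoutAi && costWithAi < lifetimeValue;
```

Under these invented numbers the AI path wins easily; with real figures from the five steps above, the answer could go either way.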
Thanks Bill… I know I’m making a decision based on my beliefs more than on practical data… I know it, and I appreciate your thoughts… but my first question has no answer, so I will assume it is not possible at the moment with the openai api.
This is unrelated to the OpenAI API, so perhaps you are referring to a different question. Maybe restating it, knowing everything we now realize, might be worthwhile. I have not found any upper bounds to the possibilities of the API.
Ok. I will take this as an answer to my question and I will continue looking for a solution.