Economics of Public-Facing AI

I am curious regarding thoughts on the economics of public-facing AI projects

No question the capabilities of GPT-3 are stunning - far beyond its (cheaper-to-use) predecessors.

No question I can imagine all sorts of uses for GPT-3 on a personal basis or on an in-house basis for a company.

But I am puzzled whether at-scale, public-facing AI projects can succeed given the economics: every single search or interaction with an AI API costs money. I doubt advertising income will exceed costs for most projects. There is also the risk of a bot executing large numbers of (costly) queries - especially on any website that does not require login.

Sure, there will be sites run by big companies who can bear the risks/costs. But looking at the huge number of AI upstarts recently, it's hard for me to imagine many of them surviving once economic reality becomes apparent.

I sure hope I am wrong. Tell me why.


I think the real opportunity is staying away from public-facing chatbots to some degree.

However, if the bot is not public-facing but offered to a controlled group of users by way of a login of some sort, you can limit the exposure and risk. This might be an association, a closed forum, etc.

If you put a system in place to count tokens, you could set a threshold where you turn the bot off (possibly make it disappear completely) once the monthly limit has been reached.
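A minimal sketch of what that token budget could look like. The limit, the class name, and the in-memory counter are all illustrative assumptions; a real deployment would persist usage per month and read token counts from the API response.

```python
# Hypothetical monthly token budget for a bot. The numbers and the
# in-memory counter are illustrative only - real usage would be
# persisted and reset at the start of each calendar month.

MONTHLY_LIMIT = 1_000_000  # assumed tokens-per-month budget


class TokenBudget:
    def __init__(self, monthly_limit=MONTHLY_LIMIT):
        self.monthly_limit = monthly_limit
        self.used = 0

    def record(self, prompt_tokens, completion_tokens):
        """Add one request's token usage to the running total."""
        self.used += prompt_tokens + completion_tokens

    @property
    def bot_enabled(self):
        """False once the monthly limit is reached - hide the bot then."""
        return self.used < self.monthly_limit


budget = TokenBudget(monthly_limit=1000)
budget.record(prompt_tokens=400, completion_tokens=300)
print(budget.bot_enabled)  # True: 700 tokens used, under the limit
budget.record(prompt_tokens=200, completion_tokens=150)
print(budget.bot_enabled)  # False: 1050 tokens used, limit reached
```

The same check could gate the front-end widget, so the bot simply disappears for the rest of the month instead of returning errors.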

Many early adopters have used ChatGPT as a tool - especially while it continues to be free. However, it can’t really be trained. But when the API for ChatGPT is released, we may be able to combine it with embedding (which is cheap) to give the final answer.

I don’t know how sites offering unlimited text for a one-off lifetime fee operate. They may not survive. Or they may need to change their business model.

Another idea is that you don't offer it as a bot. Instead, you offer it as a Q&A tool (one-shot, not back-and-forth conversation). It may take the load off front-desk staff (especially in highly trained medical and legal areas). Even though it will cost money, you can move the human staff onto more productive tasks. The trick here will be ring-fencing the system so it can't be abused. You need to ensure it only answers the questions it is intended for.

Finally, I think the real opportunity with OpenAI is in knowledge retrieval and consumption in a B2B (or business) situation. It would also be suitable for researchers. In this case, the focus is on low-volume but highly valued responses. Having a way to trawl through thousands of legal documents or research papers to come up with targeted responses is where it is at.

Automation is another area where AI will help.

Once again, it is reducing cost or resource requirements. Reducing costs in one area or increasing efficiencies allows you to spend more on the technology. It all comes down to cost vs. benefit.

If a bot can convert people from leads into high-paying customers, it doesn't really matter how much it costs. The high-paying customers will make it worthwhile. But the only way to find this out is to try in a controlled manner and measure your conversion rates. With appropriate training, you should be able to get a bot to filter out users who will be lost leads early in the conversation (in a nice way - as they may come back as high-paying customers in the future) and concentrate on the potential actual sales.

Maybe you don’t want the bot to do everything. Maybe it filters down the conversation and sends an info pack out at the end of the chat (based on the outcome). Dumb bots are cheaper to run - but then you probably don’t need AI, and there are many other options out there.

Anyway, that was my 5 cents worth.


Yes I agree re: B2B applications.

For sure it will be useful for professional purposes to assist with information-gathering - I have done that myself and seen notable uses in medicine, law, and IT/coding so far. No doubt there are other options.

Your other comments are helpful - thanks. But can you clarify - does embedding help in a chatbot situation? I thought it was mostly intended for use-cases where the goal is classifying input into categories. Can embedding help if the goal is a freeform Q&A session?

You can use embedding with chat to give the bot a persistent memory. You can also use it to supply knowledge at various stages of the conversation.

Once you get to know your data, you can set a bottom threshold for the dot product. On each step of the conversation, you can do a semantic search of a knowledge source or the visitor's chat history.

If the hit scores above your threshold value, you can assume it is relevant to the conversation and include it as context before you generate an answer.

You would have to experiment a bit to find the optimum cutoff value
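To illustrate the idea, here is a small sketch of that threshold-based lookup. The vectors below are toy two-dimensional stand-ins for real embedding vectors (which an embeddings API would return), and the 0.8 cutoff is just an assumed starting point to be tuned against your own data.

```python
import numpy as np

# Illustrative semantic search over a tiny knowledge base using dot-product
# similarity with a bottom threshold. Vectors are toy stand-ins for real
# embeddings; THRESHOLD is an assumed value you would tune empirically.

THRESHOLD = 0.8


def top_hit(query_vec, knowledge):
    """Return the best-matching snippet if it clears the threshold, else None."""
    best_text, best_score = None, -1.0
    for text, vec in knowledge:
        score = float(np.dot(query_vec, vec))  # vectors assumed unit-length
        if score > best_score:
            best_text, best_score = text, score
    return best_text if best_score >= THRESHOLD else None


# Toy unit vectors standing in for embedded knowledge snippets
kb = [
    ("Refund policy: 30 days with receipt.", np.array([1.0, 0.0])),
    ("Shipping takes 3-5 business days.",    np.array([0.0, 1.0])),
]

query = np.array([0.95, 0.31])          # a question close to the refund entry
query = query / np.linalg.norm(query)   # normalise so dot product = cosine
context = top_hit(query, kb)
print(context)  # Refund policy: 30 days with receipt.
```

If the best hit falls below the cutoff, you return `None` and generate the answer without extra context - that is where the experimentation with the cutoff value comes in.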