The example I gave is not from an advertisement. It is from a real “chat” session I ran tonight on Bing AI Search.
Ah. OK. What was your “one sentence prompt”?
I have not tested it yet; I have only read the marketing releases while writing commercial code for clients. Sorry if I missed that it was live for testing now, my bad, multi-tasking too much. Maybe it’s only available in the US?
I want to test with your “one sentence prompt”. Please post back the text (no images) so I can copy-and-paste it into Bing.
Well, it seems I cannot test the Bing AI Search directly, as it’s not available here in my country.
However, from searching the net, here is what it says:
Microsoft has gotten around some of ChatGPT’s limitations by marrying OpenAI’s language capabilities to Bing’s search function, using a proprietary tool it’s calling Prometheus. The technology works, roughly, by extracting search terms from users’ requests, running those queries through Bing’s search index and then using those search results in combination with its own language model to formulate a response.
This is what I think happens: Bing searches and feeds the search results into Prometheus, which formats the results and uses the chat bot to make the language output (the text) “pretty”.
The bulk of the work is done by the search engine. The chatbot is used to make the results look and sound pretty to humans, so it seems. This is a very different process from using the AI to do the work. The search engine does the work, creates the index, etc., and sends it to some chatbot process that handles the natural-language-to-the-end-user part.
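If that reading is right, the pattern is easy to sketch with the API we already have. Something like the following, where `bing_search()` is a purely hypothetical stand-in for whatever Prometheus actually runs against the Bing index:

```python
import openai  # 2023-era openai Python library (0.x)

def bing_search(query: str) -> list[str]:
    """Hypothetical stand-in for querying the Bing search index.
    Returns a list of result snippets."""
    raise NotImplementedError

def answer_with_search(user_request: str) -> str:
    # Search step: run the user's request against the index.
    snippets = bing_search(user_request)
    # Language step: hand the snippets to the model as context and let it
    # write the "pretty" natural-language answer.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer the request using only the search results provided."},
            {"role": "user",
             "content": "Search results:\n" + "\n".join(snippets)
                        + "\n\nRequest: " + user_request},
        ],
    )
    return response["choices"][0]["message"]["content"]
```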
Microsoft announced a couple of days ago in the US that you could sign up for a Bing AI wait list and that priority would be given to those who set Bing as the default in their desktop browser and also downloaded the Bing mobile app.
The text I used is:
create a pro con table from a pubmed search about epidural steroid injections
Thanks for that. Now I see why I cannot access it to test.
Cheers and thanks again. I’m off to the gym.
That is similar to the WebChatGPT Chrome extension for ChatGPT.
Of course, since Bing does it behind the scenes, the result seems more global/consistent with Bing.
If this is true, man, what an AI DeepFake M$ has pulled off. It sounds similar to something we can do now with the API: using search results as prompt context.
- You embed your content.
- You enter a search prompt and vectorize it.
- You run a vector similarity calculation between the prompt and the content to return the 3 highest-scoring results.
- You query the OpenAI model with the original search prompt and include the text of the highest results as “context” for the prompt.
I mean, I was planning on doing just this as a solution to returning source citations from AI query results. This, everybody can do now.
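For the curious, here is a minimal sketch of those four steps with the openai 0.x Python library and numpy; the model names, placeholder chunks, and the top-3 cutoff are just the assumptions from the list above:

```python
import numpy as np
import openai  # 0.x-era library

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

# 1. Embed your content (one vector per chunk).
chunks = ["...content chunk 1...", "...content chunk 2...", "...content chunk 3..."]
vectors = [embed(c) for c in chunks]

# 2. Enter a search prompt and vectorize it.
prompt = "your search prompt"
q = embed(prompt)

# 3. Cosine similarity between the prompt and each chunk; keep the 3 highest.
scores = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
          for v in vectors]
top3 = [chunks[i] for i in np.argsort(scores)[::-1][:3]]

# 4. Query the model with the original prompt plus the top hits as context.
answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Context:\n" + "\n---\n".join(top3)
                          + "\n\nQuestion: " + prompt}],
)
print(answer["choices"][0]["message"]["content"])
```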
Hi @ruby_coder. I wonder if you have any insight about the number of tokens from the search results that are being included in the prompt? That’s been one of the trickier parts of building an expert system in my experience, and I am wondering if we can expect an announcement soon that GPT-3’s token limit has been increased. I agree with your point below that the combination of search and completion in a web browser is incredibly powerful. Thanks.
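No insight into Bing’s internals, but one way to at least stay under whatever the limit is: measure the search results with tiktoken and trim them to a budget before building the prompt. A rough sketch (the 3,000-token budget is arbitrary):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer for ada-002 / gpt-3.5 models

def trim_to_budget(snippets: list[str], budget: int = 3000) -> list[str]:
    """Keep adding search-result snippets until the token budget is spent."""
    kept, used = [], 0
    for s in snippets:
        n = len(enc.encode(s))
        if used + n > budget:
            break
        kept.append(s)
        used += n
    return kept
```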
Here’s a hacky heuristic for fact checking:
1. Submit your prompt and get a reply.
2. Copy/paste individual facts into Google. Your prompt in (1) could have specified a format to make parsing easier to automate.
3. Search Google on each fact. Programmatically if possible.
4. Submit the facts checked against Google results in a GPT prompt that basically asks the same original question, but has a better context, because presumably the Google search results are “accurate”.
Clunky and probably in violation of somebody’s site terms.
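In code, the heuristic might look roughly like this; `search_google()` is a hypothetical stub, since real programmatic access would mean something like the Custom Search JSON API (hence the terms-of-service caveat):

```python
import openai

def search_google(fact: str) -> str:
    """Hypothetical stub: return top search-result text for a claimed fact."""
    raise NotImplementedError

def fact_checked_answer(question: str) -> str:
    # (1) Submit your prompt, asking for one fact per line for easy parsing.
    first = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": question + "\nAnswer as a bulleted list, one fact per line."}],
    )["choices"][0]["message"]["content"]

    # (2)-(3) Parse the facts and search each one programmatically.
    facts = [line.lstrip("- ").strip() for line in first.splitlines() if line.strip()]
    evidence = [search_google(f) for f in facts]

    # (4) Re-ask the original question with the search results as context.
    second = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Context from web search:\n" + "\n".join(evidence)
                              + "\n\n" + question}],
    )
    return second["choices"][0]["message"]["content"]
```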
So yeah, I agree. Microsoft and Google: Please automate this fact checking process!
The world will be so much better, and this technology will be adopted so much faster.
I watched this webinar discussion: Beyond Semantic Search with OpenAI and Pinecone - YouTube
The model they demo’d here is perfect. You do vector searches (on embedded data), and you get back the top results with sources from the documents you indexed. And users can select which groupings of those documents they wish to search. Sweet. This was developed a year ago!
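The “select which groupings” part maps naturally onto a metadata filter in the vector store. With the 2023-era pinecone-client it looked roughly like this; the index name, metadata field, and group values are made up for illustration:

```python
import pinecone

pinecone.init(api_key="YOUR_KEY", environment="us-east1-gcp")
index = pinecone.Index("my-docs")  # hypothetical index of embedded documents

# Placeholder query vector; in practice this comes from an embeddings call.
question_embedding = [0.0] * 1536

# Similarity search, restricted to the document groups the user selected.
results = index.query(
    vector=question_embedding,
    top_k=5,
    include_metadata=True,  # so each match carries its source info
    filter={"group": {"$in": ["contracts", "faqs"]}},  # hypothetical field/groups
)
for match in results.matches:
    print(match.score, match.metadata.get("source"))
```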
When I first posted this 3 months ago, I was fairly clueless about the chat completion process. Since then, I’ve learned that process and coded a few completion chains. Now that I understand the process better, I also understand how citations can be included in responses – at least with respect to semantic searches using your own data.
First, you embed your data:
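In rough code (using the 2023-era LangChain API from the video linked at the end; the file name, chunk sizes, and choice of FAISS are my own), that step looks something like:

```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Load the data, split it into chunks, embed each chunk with OpenAI,
# and store the vectors in a local FAISS index.
docs = TextLoader("my_data.txt").load()
splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)
db = FAISS.from_documents(chunks, OpenAIEmbeddings())
```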
Then you build your chat completion chain:
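Again in rough code, continuing from the `db` store above (the model name and sample question are placeholders):

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# The chain embeds the user's question, runs the similarity search against
# the vector store, and stuffs the top matches into a chat completion call.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo"),
    chain_type="stuff",
    retriever=db.as_retriever(),
    return_source_documents=True,  # keep the matched docs for citations
)
result = qa({"query": "What does the document say about X?"})
print(result["result"])
```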
So, in this process, a user asks a question. That question is embedded and submitted to your vector store for a similarity search. The search returns relevant info (docs relevant to the question asked). You then send the question + relevant info to the LLM model in a chat completion API call for an answer.
Your citations are essentially the relevant info you send. So, you need only ask the model to list the titles of the relevant info you sent it. Or, better yet, in your chain code, you list them yourself along with the answer you send to your user.
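Listing them in your chain code is just a loop over the documents the chain returned, e.g.:

```python
# The matched documents double as citations for the answer.
for doc in result["source_documents"]:
    print("Source:", doc.metadata.get("source"))
```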
These graphics are from this excellent LangChain quickstart tutorial video: LangChain Explained in 13 Minutes | QuickStart Tutorial for Beginners - YouTube