We need source links and we need them now

Looks like M$ is not waiting to take advantage of its $10 billion+ investment.

The new Bing search will incorporate ChatGPT and will include source citations in search results.

Does anyone know if we can do the same using the OpenAI API?

There are two possible approaches:

  1. In your question, explicitly request that sources be provided. I have posted an example of using that approach, and a minimal sketch is also included below.
  2. Use GPT-3 as part of an NLP pipeline. There is documentation on how to do this, as well as examples and tutorials.
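For approach 1, a minimal sketch assuming the openai 0.x Python package and the text-davinci-003 completion model might look like this; the topic and prompt wording are only illustrative, and keep in mind the model can invent citations because it has no live reference lookup:

```python
# Approach 1 (sketch): explicitly ask for sources in the prompt.
# Assumes the openai 0.x Python package and an OPENAI_API_KEY environment variable.
# Caveat: the model may fabricate citations; it has no live reference lookup.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=(
        "Explain how mRNA vaccines work, and list the sources you relied on "
        "as a numbered bibliography."
    ),
    max_tokens=400,
    temperature=0,
)

print(response["choices"][0]["text"].strip())
```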

Thank you for the response. Where is that approach posted?

You are not kidding

Look what Bing AI can do instantly with a 1-sentence prompt

This will blow ChatGPT out of the water unless OpenAI responds quickly with similar capability

It is my understanding that Microsoft is using GPT-4 as the engine for its Bing chat, and the chat search results in Bing contain the source materials. So, apparently, the ability to do this is already there.

If OpenAI is truly open, I am hoping they will share with us how to accomplish the same with the API.

I mean, it should be easy. Bing/Chat has to go out and search the Internet and then bring back the specific pages it cites. If we are fine-tuning a model, it will already have the source citations for its responses because we will have already provided them.

Not really. That’s not how language models work.

The OpenAI models are language models, not reference models.

Yes, the models “have the references”, but they have been tokenized and embedded, without context.

Remember, GPT-based AI models are generative language models, not expert systems. They predict text like an auto-completion app, but bigger.

The Bing GPT-4 announcement is about a “search engine chat frontend” which guides users to refine their searches.

So, basically, GPT-4 is an interactive chatbot which guides Bing searches.

Hope this makes sense.

Accepting all that as true, why can’t that be done with ChatGPT?

Or is the answer that it is planned very soon for ChatGPT with GPT-4 so why bother with GPT-3 now?

GPT-4 is specially trained to perform as a search engine chatbot assistant.

I’m not following, sorry.

GPT-3 is “done and dusted”; the pre-training was done in 2021.

Not sure what you are complaining about, TBH.

:)

Not complaining - just supremely impressed by what Bing can do in this regard and hoping when ChatGPT gets GPT-4 it does the same or better.

Hi, for one, I’m not complaining at all. I have found GPT-3 to be extremely useful and have increased my productivity using it.

I also accept the fact that GPT-3 is not designed or optimized for search functions such as returning source citations.

But, considering that GPT-4 appears to do just that, I am wondering if there are possibilities with GPT-3, or will we be able to do it once GPT-4 is released for “open” usage?

With ChatGPT and GPT-3, the WebChatGPT Chrome extension currently offers a working solution. It uses DDG (DuckDuckGo) to present the results of a web search to ChatGPT as part of the prompt; the result is a pretty good approximation of how ChatGPT would perform with native web search capability.

I think GPT-4 does not provide the “references” directly from the model. These come from the user’s search results, as I understand the news releases.

GPT-4 is advertised as a chatbot for guided searches on the internet, so you might be reading more into the capabilities for GPT-4 than what is available.

It’s premature to draw conclusions from marketing blurbs.

Yeah, so true.

I really wish it were possible to moderate the negative energy and complaining here at times, but I guess that’s part of life.

It’s no wonder that some scientists fear future AIs will find humans an annoyance, LOL*5

I have used ChatGPT and the OpenAI API for around 6 weeks non-stop and written a lot of code around the API. I have found nothing to complain about and look forward to future releases and new products. Yeah, some error messages are vague, but that is typical for a beta product. Yeah, the system is overloaded. That’s a good thing. Developers can learn to trap network and API return calls and deal with errors.

Lots of folks are trying to get a text-generating, auto-completion AI to perform as an expert system. GPT does not work as an expert system. GPT is not a rule-based AI. You must build components to complement GPT if you want expert system capabilities.

:)

The example I gave is not from an advertisement. It is from a real “chat” session I ran tonight on Bing AI Search.

Ah. OK. What was your “one sentence prompt”?

I have not tested it yet; I have only read the marketing releases while writing commercial code for clients. Sorry if I missed that it was live for testing now; my bad, multi-tasking too much. Maybe it’s only available in the US?

I want to test with your “one sentence prompt”. Please post back with the text (no images) to copy-and-paste into Bing.

Thanks!

:)

Well, it seems I cannot test the Bing AI Search directly, as it’s not available here in my country, I guess.

However, from searching the net, here is what it says:

Microsoft has gotten around some of ChatGPT’s limitations by marrying OpenAI’s language capabilities to Bing’s search function, using a proprietary tool it’s calling Prometheus. The technology works, roughly, by extracting search terms from users’ requests, running those queries through Bing’s search index and then using those search results in combination with its own language model to formulate a response

This is what I thought happens: Bing searches and feeds the search results into Prometheus, which formats the results and uses the chatbot to make the language output (the text) “pretty”.

The bulk of the work is done by the search engine. The chatbot is used to make the results look and sound pretty to humans, so it seems. This is a very different process than using the AI to do the work. The search engine does the work, creates the index, etc., and sends it to some chatbot process which handles the natural-language presentation to the end user.
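In API terms, that flow would look roughly like the sketch below. Here `web_search()` is only a placeholder for whatever search backend you have (it is not a real Bing, Prometheus, or OpenAI call), and the prompt wording is illustrative:

```python
# Sketch of the "search engine does the work, the model writes the prose" flow.
# web_search() is a placeholder for your own search backend; nothing here is an
# official Bing/Prometheus API. Assumes the openai 0.x Python package.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def web_search(query):
    """Placeholder: return a list of (title, url, snippet) tuples from your own search backend."""
    raise NotImplementedError

def answer_with_citations(question):
    # Run the search first; the search engine does the heavy lifting.
    results = web_search(question)[:5]
    numbered = "\n".join(
        f"[{i + 1}] {title} ({url})\n{snippet}"
        for i, (title, url, snippet) in enumerate(results)
    )
    # Then let the model write the answer from those results and cite them.
    prompt = (
        "Using only the numbered web results below, answer the question "
        "and cite the result numbers you used.\n\n"
        f"Web results:\n{numbered}\n\n"
        f"Question: {question}\nAnswer:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=400,
        temperature=0.2,
    )
    return response["choices"][0]["text"].strip()
```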

Microsoft announced a couple of days ago in the US that you could sign up for a Bing AI wait list and that priority would be given to those who set Bing on their desktop browser and also downloaded the mobile Bing app.

The text I used is:

create a pro con table from a pubmed search about epidural steroid injections

Thanks for that. Now I see why I cannot access it to test.

Cheers and thanks again. I’m off to the gym.

That is similar to the WebChatGPT Chrome extension for ChatGPT.

Of course, since Bing does it behind the scenes, the result seems more global/consistent with Bing.

If this is true, man, what an AI DeepFake M$ has pulled off. It sounds similar to something we can do now with the API: context search results.

  1. You embed your content.
  2. You enter a search prompt and vectorize it.
  3. You run a vector similarity calculation between the prompt and the content to return the 3 highest-scoring results.
  4. You query the OpenAI model with the original search prompt and include the text of the highest results as “context” for the prompt.

I mean, I was planning on doing just this as a solution for returning source citations from AI query results. This, everybody can do now; a minimal sketch is below.
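A minimal sketch of those four steps, assuming the openai 0.x Python package, text-embedding-ada-002 for the vectors, and numpy for the similarity math; the documents and the query below are made-up placeholders:

```python
# Sketch of the 4-step "context search" flow described above.
# Documents and query are illustrative placeholders.
import os
import numpy as np
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

documents = [
    "Doc A: ...",  # your own content chunks go here
    "Doc B: ...",
    "Doc C: ...",
    "Doc D: ...",
]

def embed(texts):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [item["embedding"] for item in resp["data"]]

# 1. Embed your content once and keep the vectors.
doc_vectors = np.array(embed(documents))

# 2. Enter a search prompt and vectorize it.
query = "What does the content say about X?"  # hypothetical query
query_vector = np.array(embed([query])[0])

# 3. Cosine similarity between prompt and content; keep the 3 highest-scoring results.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
top3 = [documents[i] for i in np.argsort(scores)[-3:][::-1]]

# 4. Query the model with the original prompt plus the top results as context,
#    asking it to cite which context passage it used.
context = "\n\n".join(f"[{i + 1}] {text}" for i, text in enumerate(top3))
prompt = (
    "Answer the question using only the numbered context passages, "
    "and cite the passage numbers you used.\n\n"
    f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
)
completion = openai.Completion.create(
    model="text-davinci-003", prompt=prompt, max_tokens=300, temperature=0
)
print(completion["choices"][0]["text"].strip())
```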
