Feedback on the New Search Feature in GPT

I’ve been using GPT’s new search feature (Search GPT), and it has proven to be an incredibly useful and efficient tool. Previously, we could ask GPT to “search the web” directly, but now this integration makes it much more practical and straightforward. The way results are presented, with a clear and concise summary along with references, enables faster and more effective searching. Integrating the search directly into the chat platform significantly enhances the user experience.

Here are some suggestions that might further improve this tool:

  1. Customized News: It would be great to receive news tailored to my personal interests, allowing me to view a summary before reading the full article.
  2. Voice Command for Search: For the advanced voice mode, a keyword trigger would be highly beneficial. This way, GPT could recognize that I’m requesting a search without needing to press the search button. It would make the experience even more seamless.
  3. Favorites Section: Adding a section for favorite topics or searches could be valuable. This wouldn’t be exactly a chat memory but a specific memory for searches, making it easier to revisit frequently accessed topics.
  4. Organization in the Sidebar: The sidebar is functional, but the navigation experience could be enhanced with a bit more structure. For example, creating separate sections for chats, search results, and different GPT models would make organizing and accessing each type of content easier.

Overall, these are my initial impressions after the first day of use. This is a great integration in GPT, and I look forward to seeing how the search feature continues to evolve and improve.


My experience with the new search feature for academic purposes has not been good. For instance, it cannot provide accurate information about the latest publications in a journal. The search is full of hallucinations, returning wrong authors or titles for the papers. When asked to list the latest papers in astro-ph, it gets only the first five right and then fully hallucinates the rest. Interestingly enough, generating a PDF from the website and parsing that always gives accurate results.

A search utility that is full of hallucinations is probably the worst thing you can offer as a search tool. No information is better than wrong information.
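
For anyone who wants to try the PDF workaround I mentioned, here is a minimal sketch. It assumes Python with the weasyprint package installed (my own choice of library, not something specific to ChatGPT), and the arXiv URL is only an illustrative example:

```python
# Minimal sketch of the PDF workaround: render a journal listing page to a
# local PDF, then upload that PDF to the chat instead of asking it to search
# the live site.
# Assumes: pip install weasyprint; the URL below is just an example.
from weasyprint import HTML

listing_url = "https://arxiv.org/list/astro-ph/new"  # example listing page
HTML(listing_url).write_pdf("astro-ph-latest.pdf")
print("Saved astro-ph-latest.pdf; upload this file to the chat for parsing.")
```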


Apparently the problem arises because ChatGPT doesn't actually look at the website itself. This is the reply I got from ChatGPT when I asked about these hallucinations:

No, I can’t literally “read” or parse the exact content of a live website directly. Instead, I generate responses based on summaries or snippets from search results, not from direct access to the full content of web pages. This indirect approach is why I can sometimes introduce inaccuracies or “hallucinate” details if the search snippets are incomplete or misleading.

With PDFs and documents you upload, I can extract and interpret exact information, which makes responses from these sources much more reliable.