SearchGPT: More Than Just a Perplexity Clone?

At first glance, SearchGPT appears to be a carbon copy of Perplexity. The main differences are subtle: SearchGPT tends to display more images, and there are some minor design variations. However, both platforms could significantly improve their citation methods.

Curiously, both SearchGPT and Perplexity follow a similar pattern: they present a paragraph of information, followed by a list of references with links to sources at the end:


While this approach has its merits, a more granular citation system would be far more beneficial. Ideally, each specific statement would be linked directly to the relevant excerpt from the source material. Take Google’s Featured Snippets as an example. They use the #:~:text= parameter in URLs, making it easier for users to locate the exact information they’re looking for:
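To make the mechanism concrete, here is a minimal sketch (Python; the helper name, URL, and excerpt are made up for illustration) of how a quoted excerpt could be turned into one of those text-fragment deep links:

```python
from urllib.parse import quote

def text_fragment_url(page_url: str, quoted_text: str) -> str:
    """Build a link that scrolls to and highlights an exact quote on a page."""
    # Percent-encode everything: commas and dashes have special meaning
    # inside #:~:text= fragments, so they must not pass through unescaped.
    return f"{page_url}#:~:text={quote(quoted_text, safe='')}"

print(text_fragment_url(
    "https://example.com/searchgpt-review",     # illustrative source page
    "SearchGPT tends to display more images",   # exact supporting excerpt
))
# -> https://example.com/searchgpt-review#:~:text=SearchGPT%20tends%20to%20display%20more%20images
```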

While this level of detail may not always be necessary, it would be incredibly useful in many cases. It would help users verify information and reduce concerns about potential hallucinations or inaccuracies introduced by the LLM.

By implementing more precise citation methods, SearchGPT could set itself apart and provide a more trustworthy user experience.

In essence, I envision a citation system where references are seamlessly integrated throughout the text, rather than clustered at the end of paragraphs. Moreover, when appropriate, links should direct users to specific excerpts that support each claim. While this approach is undoubtedly more complex and goes beyond simple summarization, it’s the standard I would expect from a search engine developed by a leader in AI technology like OpenAI.
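As a rough illustration of that idea, here is a hedged sketch (Python; the data model, names, and URLs are all hypothetical, not anything SearchGPT actually exposes) of claim-level citations rendered inline, with each sentence carrying a numbered deep link to the excerpt that supports it:

```python
from dataclasses import dataclass
from urllib.parse import quote

@dataclass
class Claim:
    text: str        # the generated sentence
    source_url: str  # the page the claim was drawn from
    excerpt: str     # the exact supporting quote on that page

def render_inline(claims: list[Claim]) -> str:
    """Render each claim followed by a numbered deep link to its excerpt."""
    parts = []
    for i, claim in enumerate(claims, start=1):
        link = f"{claim.source_url}#:~:text={quote(claim.excerpt, safe='')}"
        parts.append(f"{claim.text} [[{i}]]({link})")
    return " ".join(parts)

print(render_inline([
    Claim(
        text="Both engines list their sources after the paragraph.",
        source_url="https://example.com/comparison",
        excerpt="followed by a list of references with links to sources",
    ),
]))
```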

4 Likes

Thanks for sharing your feedback. Will definitely be interesting to see where things go…

1 Like

One thing I’d like to point out is that Perplexity has been caught numerous times ignoring robots.txt and even spoofing their user agent to bypass anti-bot security. This is a massive no-no and will only encourage tighter security and potential regulations surrounding web scraping. For example, Reddit has officially blocked ALL bots except for Google, and I think very soon almost all providers will follow the same strategy.
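For what it's worth, respecting robots.txt is trivial to do properly. Here is a minimal sketch of the compliance check a well-behaved crawler should run before fetching anything (Python standard library; the URLs and user-agent strings are just illustrative):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.reddit.com/robots.txt")  # illustrative target site
rp.read()

page = "https://www.reddit.com/r/some_subreddit/"  # hypothetical path
for agent in ("Googlebot", "PerplexityBot", "GPTBot"):
    verdict = "allowed" if rp.can_fetch(agent, page) else "disallowed"
    print(f"{agent}: {verdict}")
```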

In fact, some websites have gone as far as introducing data poisoning. I can't find the link, but some people are planting fake articles such as "How to introduce a data connector for your orange" :rofl:

Warning: there is some colorful language in these articles:

7 Likes

So, nothing to do with the $60m a year licensing deal from Google? :wink:

Google might not be crawling Reddit but using its API as part of the licence deal.

https://www.reuters.com/technology/reddit-ai-content-licensing-deal-with-google-sources-say-2024-02-22/

Let’s face it, Google wants competitive advantage.

At the same time I’ve noticed traffic to one of my competing forums (Reddit is essentially a behemoth forum competitor - with an atrocious user interface) collapse.

Google is behaving in an anti-competitive way imho and is clearly biasing search results to Reddit!

It is also telling people how to make pizza with glue based on the serious information held on Reddit. Lol

5 Likes

:joy:

Google and Reddit really do seem to be in each other's pockets. There are some really good articles by ex-Google engineers regarding Google's apparent decline in search quality. It basically revolves around prioritizing statistics over everything and not caring about that "human touch".

One instance I recall was them not bothering with auto-spell check, because a misspelled query makes the user search twice. Like, really? Just petty number games.

The fact that people would always append “reddit” to their search was a symptom of prioritizing garbage results. Apparently once you hit any executive level you completely stop caring, or forget about diminishing returns and longevity of product. It seems to all be about meta-gaming the company with flashy numbers and promises.

Regarding Reddit: I remember when they completely pulled the rug out from under API users. The developer of one of the most-used iPhone apps for Reddit (Apollo) was suddenly facing potentially millions of dollars in API fees, and was even slandered!

They made the actual Reddit app worse, and instead of thriving in a fair game they just killed off the competitors and then destroyed a lot of mod tools with no remorse.

I quit Reddit after that, on the day of the blackout. I was so hopeful that people would care, but they didn't.

:rofl: :rofl: It lights a very strong fire in me, seeing how AI completely regurgitates what people write for profit without passing ANY of the money downwards. SOMEONE needs to make a Reddit-like community that pays its users for their content and also packages it for resale to AI. The internet NEEDS to be locked down at this point, as it's VERY clear that everybody and their grandma is lapping up any content they can for free.

Kids are going to be so surprised to hear that we once upon a time would join a community and put ourselves out there just to ask a question, and then wait a day or so for a response :joy:

4 Likes

The challenge here is that you need to have content out there for discovery, i.e. SEO … it's a tough balance.

2 Likes

Personally, I have found myself using Google for simple lookups, like company names (so many domain extensions these days). Then I find myself using ChatGPT for knowledge-based questions.

Fun fact: Did you know .ai is the Internet country code top-level domain (ccTLD) for Anguilla, a British Overseas Territory in the Caribbean?

:rofl: When you’re in the right place at the right time.

I do find SEO to be a dying industry. I don’t even bother putting it on my websites. It’s just been gamified to a point of uselessness. Unfortunately the best SEO these days is astroturfing communities.

For whatever reason if “bob_is_my_uncle” says in a Reddit post that he uses “xTreme Product” and then gets 3 upvotes and a “same” from “super_ninja_turtle” it MUST be some amazing stuff.

And now, with this sweetheart deal, it will somehow be the first result :rofl: "bob_is_my_uncle" should get a trophy for best sales representative.

1 Like

:rofl:

Guess the Reddit bosses are happy to have the bots, because they fake engagement for them too?!

Yes, and with generative AI it really is the nail in the coffin for semantic web search being an effective differentiator (alone).

1 Like

Clear sourcing would be nice. But I’m seriously doubtful they’d like to indicate how much they’re borrowing from original authors in their summaries.

Linking just the domain name will already help some of the partners that they have content publishing partnerships with:

I noticed that Perplexity is also following suit with their own Perplexity Publishers’ Program just announced.

My opinion is this: there will be no such company as Perplexity. OpenAI copies successful projects built on its base and implements them into its own systems. It will soon take all the startups for itself!

2 Likes

I would like to expand on the topic you mentioned about Google. Thailand is currently worried about the problem of e-commerce from China, as in this article:

"Currently, human perception is controlled by algorithms. You will be shown selective information that matches your current needs or activities, often unintentionally. Searches, however, will display intentional results based on your actual needs. Everything is influenced by the amount of money spent on ads. Even if you’re not planning to buy anything, commercial content is always shown first.

The arrival of commercial platforms that advertise heavily will block our awareness of other related products. You will see fewer results from existing stores unless you search with exact names, and even then, finding them may require scrolling.

How do we live in a world where money controls our perception more than our actual desires?”

It's not just commerce. Health too: I noticed a few days ago that my search results for dyslexia had changed, because I've been searching for it often since I found out and got checked. I usually enter it as a single word, but also in combinations related to treatment, remediation, and even screening. The results didn't have what I wanted, because they only covered children. Then, a few days later, "dyslexic thinking" started appearing in my searches. It made me wonder how much my information was being controlled. In addition, there is the problem of fake content created to spread information, such as the blackchin fish stories padded with extra names and headlines so that the original search results carry less weight. It takes advantage of the Google system.

It also extends to politics. In a discussion about administrative mistakes involving financial knowledge, people on one side believed their own opinions and insisted the information they had was correct. When I searched for it, I found that both sides of the information were displayed together, but they responded as if they had only ever seen one side.

Later, I told the story to friends who work in marketing and IT. The short summary was that it came from the bias of personalized algorithms, but I couldn't figure out the details. I'm thinking that an AI free of this effect would have to be constrained in some way in how it accesses user information.

If the Chinese government controls the news through power, we call it a dictatorship. Yet we have news control through Google, which anyone can exercise "equally" according to the money they pay, in a capitalist way.

I'm actually interested in the inherent distortion of data, its impact on academic knowledge and culture, and in observing which direction human use of AI will take. I mainly use AI from OpenAI. Recently, I found that DALL-E 3 created copyrighted images without my asking, so I know what caused it. Luckily it was a picture, so it was noticeable quickly. It falls within the scope of this problem, and I've already reported it to OpenAI. I can tell you it's not as bad as Google's case, because DALL-E 3 doesn't know whether its creation is right or wrong, and GPT even likes to offer some feedback to guide its thinking; it is normal for this to happen.

Google is different. I remember you saying that you would not use Google's AI (I can't remember the details), but I know that Google is forcing an abnormal perception on humans. They have developed AI that looks at pictures, presents matching products from a store, and asks if you would like to purchase them. It is about setting up tools to guide people's thinking.

4 Likes

Agreed.

These tools, even ChatGPT, are simply conveniences. Conveniences that tear us further away from being social, from having a single friend, a single role model. I mean, dang, mom and pop are both working now and just don't have the time. Which is strange, considering they have all of these higher-level functions that are supposed to give us more time :thinking:.

It’s a serious hive-mind that I think will be the subject of some very interesting studies.

It’s ironic, really. People love to throw these words at countries like China but when given the baton do the exact same things they hate. This irony runs rampant.

The issue, in my opinion, is having all content fed through a singular point of control. Google, ChatGPT, Facebook. They all "act" like they are simply providers of information, but they are curators. Pay money, get visible. So, of course, my content needs to make me money for me to even consider paying.

As much as AI enthusiasts cry about it, algorithms will never have a touch of humanity. Clickbait, morbid curiosity, negative emotional invocations: all top contenders because they invoke the worst emotions in us. I hate seeing negative news. I need to see a good ending. We are driven this way by poetic justice. If a bad guy is winning, we desperately need to see him fail. I will click on all the news just to see the results and hope that my fixation can come to a satisfying end.

If we are completely dependent on our tape measures, this content does wonderfully.

So Google is now being destroyed by its own failed algorithm, resulting in "reddit" being appended to most searches. Desperately trying to keep up in the race, they threw out a glue-cheese-pizza-recommending AI. They are desperate to retain their numbers and have become so engorged that their greed and unsustainable hunger have resulted in eating their own hand to fill their own belly.

This is the inevitable future that all AI companies will reach, and we'll see the claws come out when the roots of all that makes humanity suck finally dig their way into these singular points of control to heavily manipulate the masses.

What’s so different from someone deciding to trust and follow an influencer? They trust, they follow, they mimic, they believe.

These distortions in data and bias will become evident & rampant in humanity.

4 Likes

I noticed it and compared them clearly enough. But it's more than that: I've seen people scouring junk content in search of a magical prompt that will make their wish come true.

Humans should learn, for example, that AI learns better from audio than from text, because audio carries more than one kind of information; we are just so used to it that we ignore it. It learns the same way we do.

I've been away from here for several months, and I'm still a person who has language problems and doesn't know how to code. But I understand it better now: the clearer the expected use, the more useful the output. AI has never created anything other than what's in our minds. The clearer our questions, the more powerful the answers.

Some of the things I do might seem childish and pointless. Today, I haven’t even pushed a single AI to its limits. Today, tools like image editing, such as upscaling and background removal, have become almost the only framework for creating images with DALL-E. Today, I’ve bypassed the directional and size limitations of vision and developed text-to-blueprint. But now, I see people around me switching between various AIs, spending more just to enhance their capabilities.

Looking back, they’re becoming dependent on AI, while on some days, I hardly use it. They aren’t just prey to one predator but to many—from developers, marketers, platform providers, and even online courses. For instance, on the day GPT-4 was launched, influencers competed to create content, damaging their own societies to serve the technology. When AI systems crash or when blue screens appear, it becomes a major issue. SEO has turned into RLHF (Reinforcement Learning from Human Feedback).

On the day when both sides of the coin were revealed, people still chose to see only one side as usual. Meow!!!

SearchGPT seems promising, but I agree that more detailed citations would really make a difference. Linking each statement to its specific source would boost trust and make it easier to verify facts, just like Google’s snippets. It’s a small change, but it could set SearchGPT apart from Perplexity and other AI tools.

Also, I hope they focus on minimizing hallucinations in the results—accuracy is key for user confidence. Looking forward to seeing how SearchGPT evolves!