Reimagining AI Search: Fair Compensation and Verified Information

Hi everyone,

I’m in the early stages of designing a new kind of AI search alternative — one that goes beyond traditional web search and addresses its fundamental flaws.


:magnifying_glass_tilted_left: Problem with Today’s Web Search

Current AI web search tools (e.g., Bing Search, browsing agents) face two major issues:

  1. Biased or incomplete information:
    Most search engines only surface a narrow slice of the web, filtered by SEO, popularity, or technical constraints like robots.txt.
  2. No compensation for original content providers:
    Creators and writers whose content is quoted or used in AI outputs receive nothing — even if their work drives value downstream.

:light_bulb: My Vision

I’m designing a new information ecosystem, where:

  • Content providers are compensated when their information is used by AI tools
  • Data is verified, curated, and fact-checked — not blindly scraped
  • Developers can plug into this system via a clean API, enabling fairer and higher-quality AI search experiences
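
To make the "clean API" idea concrete, here is a minimal sketch of what a response object from such a service might look like. The class, field names, and the idea of filtering on a `verified` flag are my own illustrative assumptions, not a real API:

```python
# Hypothetical sketch of a developer-facing result type for this
# ecosystem. Field names and semantics are assumptions for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class VerifiedResult:
    snippet: str     # fact-checked excerpt returned by the service
    source_id: str   # identifies the original publisher for attribution
    verified: bool   # whether the excerpt passed the review process

def attributable(results: list[VerifiedResult]) -> list[str]:
    """Return the source IDs an AI answer would need to credit (and pay)."""
    return [r.source_id for r in results if r.verified]

results = [
    VerifiedResult("Excerpt A", "publisher-1", True),
    VerifiedResult("Excerpt B", "publisher-2", False),
]
print(attributable(results))  # ['publisher-1']
```

The point of the sketch is that attribution metadata travels with every excerpt, so compensation can be computed downstream.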

:hammer_and_wrench: Current Status

This is still in the concept phase, and I’m exploring use cases, infrastructure models, and community needs.
No demo yet — but I’d love feedback, questions, or collaboration ideas.


:handshake: Let’s Rebuild AI Search — Together

Would love to hear:

  • Does this idea resonate with your experience building LLM apps?
  • Have you faced challenges around data sourcing, reliability, or fairness?

Thanks for reading!

Hi, and welcome to the developer forum.

A couple of questions on your proposal:

  1. Payments to providers:
    Who is funding the payments, and where does the initial funding come from?
    Who decides the value of the content provided? Is that negotiated, or priced per byte?
    Who mediates when data provided by one party is claimed by another as their original work?

  2. Data is verified:
    Who is doing the verification?
    How is their impartiality ensured?
    If one group claims some content or data is invalid, do they get to remove the verified status?
    Do governments get to redact state secrets from the data? Do security agencies get to approve the data prior to it being verified?

  3. Developer API:
    Who maintains the API?
    Who pays for the servers the API runs on?
    Do developers pay for searches? Are there bulk discounts? Do you think you can be competitive with Google or Bing, given your curated and checked data?


Thanks again for the great questions! Let me clarify where things stand and how I envision this project moving forward.

This project is currently in the concept and PoC (proof-of-concept) stage. If things progress well, I plan to develop it as a startup and seek VC funding to support both infrastructure and content compensation.


:money_with_wings: Compensation for Content Providers

The core idea is to create a system where, when data is cited or used by an AI, the provider (or original publisher) receives a small payment.
The exact pricing model is still open for discussion, but this redistribution of value is a key goal. I also believe it’s important that the company behind the platform handles legal responsibilities, so AI developers don’t need to navigate complex copyright or compliance issues individually.
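
As one way to picture the citation-based redistribution described above, here is a minimal payout-tally sketch. The flat per-citation rate and the provider names are illustrative assumptions; a real pricing model is, as noted, still open:

```python
# Minimal sketch of citation-based payouts: count how often each
# provider's content is cited, multiply by a (hypothetical) flat rate.
from collections import Counter

PER_CITATION_FEE_CENTS = 2  # assumed flat rate, for illustration only

def tally_payouts(cited_sources: list[str]) -> dict[str, int]:
    """Return cents owed to each provider for one batch of AI answers."""
    counts = Counter(cited_sources)
    return {provider: n * PER_CITATION_FEE_CENTS for provider, n in counts.items()}

citations = ["news-site-a", "blog-b", "news-site-a"]
print(tally_payouts(citations))  # {'news-site-a': 4, 'blog-b': 2}
```

Using integer cents keeps the accounting exact; a negotiated or per-byte model would only change how the fee is computed, not the tally structure.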


:white_check_mark: Data Verification

I’m currently exploring two potential paths:

  • Using AI-assisted tools to validate and summarize content
  • Or working with trusted third-party reviewers for critical domains

That said, data verification is still an open question, and I’d love to hear thoughts or suggestions on this part. It’s one of the hardest problems, but also one of the most important.


:toolbox: Developer API and Use Case

If funded, the API infrastructure would be maintained and operated using VC-backed resources.
The value proposition is not to compete with general-purpose search engines like Google, but rather to provide access to data that Google/Bing often can’t or won’t index — especially content behind paywalls, blocked by robots.txt, or deliberately curated for quality and trust.

:bar_chart: For example:

  • Roughly 26% of the top 1,000 websites block GPTBot entirely
  • Some smaller e-commerce sites have had to block GPTBot after it caused high load spikes — in effect, acting like a denial-of-service
  • In the broader search engine space, over 15% of websites explicitly block crawlers, calling into question the reliability of the public web as an information infrastructure
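
The kind of crawler blocking described above is easy to check programmatically with Python's standard library. The robots.txt content below is a made-up example of a site that blocks GPTBot while allowing other crawlers, not taken from any real site:

```python
# Check whether a given user agent is blocked by a robots.txt policy,
# using the standard-library parser. The policy text is an example.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/article"))       # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

In practice you would fetch each site's live robots.txt (e.g. with `parser.set_url(...)` and `parser.read()`) rather than parsing a string.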

Let me know what you think — especially around verification and fair compensation models.
I’m building this with developers and information providers in mind, so all feedback is welcome.

This appears to be entirely AI-generated content.

Is there a person behind it, or is this created by a bot?


Thanks for asking — yes, there’s a person behind this.
I’m a university student in Japan, and English isn’t my first language, so I use AI tools (like ChatGPT) to help express my thoughts more clearly.

I’m genuinely exploring this idea and really appreciate the thoughtful questions and feedback. I’m here to learn, improve, and hopefully build something meaningful.
