SEO becomes AAO - autonomous agent optimisation?

At the heart of pretty much every ‘auto gpt’ solution is the ability to query the web to gather more information. But the web is untrusted data. Search engines have since the beginning of the internet been dealing with adversarial actors stuffing keywords and performing ‘search engine optimisation’.

Everything is new right now and so folks aren’t too sophisticated at attacking this yet, but I can imagine a particular scenario.

Imagine for example an instantiation of ‘autonomous agent’ becomes wildly popular, say one that researches things to put into an amazon dropship store.

Next, imagine someone figures out how to fingerprint this agent web crawler and respond with specifically crafted responses that the AA and its helper GPT api can read (eg, white text on white background, or maybe just in the javascript comments, you get the picture).

Finally, using the AA, they test run it a bunch of times to confirm that their attack indeed works and causes the AA to behave according to their own agenda, rather than that of the AA owner/runner.

Even more nefarious - imagine the AA has more capabilities than just aggregating / summarizing / reporting information. I won’t say the quiet part out loud here, but let’s just say they better be sandboxing the thing.


Of course, these types of services already exist (and are even being gamed), they just don’t use GPT AI.

The main difference I believe that comes with AAO is twofold - people are more and more outsourcing their thinking. The whole idea behind AAs is to get it to do all the reasoning for you. This is probably why you see a lot of ‘As an AI language model’ in search results everywhere these days. The people using GPT4 to do these things aren’t thinking very hard or looking too closely at their work.

For this reason, I encourage folks to think less about the GPT APIs usage for replacing reasoning, but rather for amplifying it.

Another difference is that central AIs that everyone uses, also becomes a sort of single point of failure. Admittedly, I suspect all LLMs will likely fall prey to the same problems that OpenAI’s models do. Maybe the problem isn’t so much central AIs, but rather the common autoregressive transformer architecture itself.

1 Like

This begs a product:

Submitters will pay a nominal fee to have their data included in these libraries. Agent users will be able to download the libraries for free, based on their interests.

The kyc is the value add - keeps out spam / prompt injection.

We’re in an interesting transitioning period.

There are popular social media platforms that already utilize “honeypot” traps to trick bots into thinking that they are reading and acting on actual information.

But, knowledge libraries have existed, well, since libraries.

1 Like

Yeah, for sure, these knowledge libraries exist but they are of little value because Google is the auto agent that everyone uses and everyone does SEO around that. There’s little value in getting your product / whatever included in these things.

But AAO could be the future of search, replacing SEO. Giving GPT4 and an objective and access to a knowledge library, it could use higher level reasoning to find and filter out the product you want.

Some knowledge libraries could be reverse - you have to pay to access them. For example, a product review knowledge library.

The reason I’ve been thinking about this is because I have been thinking about creating something like this, but I keep wondering about the spam problem. It’s far off / theoretical, but it’s inevitable.

Could SEO not happen with your library as well?
How could you possibly avoid it?

1 Like

Well know your customer / kyc stuff would keep out bad actors, fraud and spam. SEO would exist, but now we’d have GPT4 which would be optimized through AA logic to filter out the SEO, something Google doesn’t yet do.

It’s possible that Google may produce this - AA searching. It’s an interesting idea. They have all the tools, but do they have trust? You’d want to upload a lot of personal info so that Google would be able to give you better results. I mean, you’d want to upload absolutely everything. Something like that might be better done on your local computer.

But it really begs the question - where will AA’s be executed? In the cloud? One argument for the cloud is that sometimes storing info on your local computer can actuallly be less secure than storing it in the cloud.

Another problem with Google providing this, is that it totally upends their biz model. Right now they show results based on whoever pays the highest price. That won’t be the future.

It’s kind of an interesting and rather fundamental fact that probably has gone on waaaaay too long. Products that rank high should rank high on merit tailored to your use case rather than how much $$ it is in it for the data provider.

Totally. It’s one of their main points when they ask if they can “learn” to deliver better results (ads).

Locally storing information is a headache for the user, and the service. Compared to cloud storage anyways. The file is in a predictable environment.

Is this true? Google has their sponsored results, for sure.
The actual results AFAIK do not have any manipulation outside of SEO.

I would think that any sort of search engine would be susceptible to SEO.

1 Like

Well, it’s why I prefer the term AAO because it’s not a dumb search engine. It’s something like GPT4 with higher level reasoning capabilities. But yes, AAO will definitely come into play and it will be an ever escalating battle. Maybe the true test of GPT AI in the future will just be figuring out who are sincere service providers.

I use the product use case as I believe that’s where 90% of Google’s revenue comes from. People selling a product / service and than bidding on keywords to market them.’

Btw, google isn’t all bad. It does do some optimization based on click thrus * keyword bid to ensure relevance.

What’s the difference between a dumb and smart search engine?

Higher level reasoning capabilities? Could you be a little more specific?

1 Like

I’m trying to understand the comparison of SEO and search engines to AI or in particular Large Language Models, who’s “wisdom” is from the crowds and massive amount of text data they are trained on predicting.

For example, the LLM is basically an extrapolation engine. Feed it some input, and it will extrapolate to provide the completion (aka the output). But SEO and search is based on correlation. Which is a totally different operation, that certainly can be gamed (with SEO). But extrapolation and correlation are very different operations.

Also, “AA” or AI Agents are essentially multi-level extrapolation agents in their information gathering stage. But in the coelesce/summary stage, you correlate and then apply some logic. So this “knowledge” gen uses both correlation and extrapolation.

I’ve been messing with these ideas in my CurtGPT agent. Here is a sample output of its research when I ask it to compare and contrast two poplular mindfulness gurus. It goes through this whole chain of reasoning. This reasoning can’t be easily gamed unless the AI foundation model (core LLM) is fundamentally flawed. If this were the case, nobody would trust it, and it would lose mind share to another model.

Asking a new question to CurtGPT.

Answer the question: What is the difference between the teachings of Jon Kabat-Zinn and Eckhart Tolle?

In today’s fast-paced society, the teachings of Jon Kabat-Zinn and Eckhart Tolle have become increasingly relevant and valuable. Both emphasize the importance of mindfulness and living in the present moment, offering practical techniques and spiritual guidance to help individuals cope with stress, improve mental health, and find inner peace. Despite their shared goal, their approaches and specific teachings differ, reflecting their unique backgrounds and influences.

Jon Kabat-Zinn’s Mindfulness-Based Stress Reduction program combines Eastern mindfulness practices with Western scientific and medical knowledge, offering a structured and evidence-based approach to cultivating mindfulness. Through various techniques such as meditation, body scans, and yoga, MBSR has been shown to effectively reduce stress, enhance emotional regulation, and improve overall well-being.

In contrast, Eckhart Tolle’s teachings focus on spiritual awakening and transcending the egoic mind. Drawing from various spiritual traditions, Tolle highlights the importance of living fully in the present moment and cultivating an ongoing state of presence and awareness. By doing so, individuals can access a deeper sense of peace, happiness, and spiritual growth.

Critics of Kabat-Zinn’s MBSR program may argue that it focuses too heavily on technique and structure, potentially limiting the spontaneity and natural flow of mindfulness practice. They may also question the program’s emphasis on scientific validation, suggesting that it may detract from the more spiritual aspects of mindfulness.

Conversely, critics of Tolle’s teachings may argue that his emphasis on spiritual awakening and transcending the thinking mind can appear abstract and difficult to apply in everyday life. They may also criticize the lack of structured guidance and specific techniques, making it challenging for individuals to develop a consistent mindfulness practice.

Despite these criticisms, both Kabat-Zinn’s and Tolle’s approaches offer valuable insights and tools for cultivating mindfulness, improving mental health, and fostering inner peace. Ultimately, the most effective approach may vary depending on the individual’s needs, preferences, and personal beliefs. By exploring and integrating the teachings of both Kabat-Zinn and Tolle, individuals can develop a well-rounded and personalized mindfulness practice that supports their overall well-being and personal growth.


Here is an example. All I did was blindly copy/paste in the amazon details for both products and asked for a table of the pros/cons. Obviously a ‘dumb search’ engine can’t do this.

I’m not going into the implementation details of an autonomous agent here, but rest assured, there are much much more sophisticated things you can do, especially if you have appropriate knowledge bases of both the products, reviews, usage scenarios, and customer information. Critcally, you put GPT4 in a loop and use it to plan tasks that can be executed (ie, an external web crawler), use techniques of self-refinement, and only stopping once you’ve hit a particular objective.

1 Like

If you are basically expanding the AI’s information via “web crawling”, you are correlating on untrustworthy data and therefore making the response untrustworthy. Is that your concern?

1 Like

Yeah, that’s exactly what I discussed above. Knowledge libraries as a product with KYC can help deal with these issues.

OK sure. But that’s why we use embeddings. We control the truth through them (we create the database of trusted facts). Web crawling is suspect, and yeah, totally untrustworthy.

Go embeddings!


Not sure who ‘we’ is in this scenario, but for everyone to re-invent the wheel of creating knowledge libraries doesn’t seem super optimal. Do you want to maintain a list of impact screwdrivers on the off chance you might buy one some day?

In your example, to avoid recreating the wheel, the “we” would be some sort of vendor that has an API to their “screwdriver database”.

But some companies would do this themselves to be in control of the information. They only sell 5 brands of screwdrivers, and can upload and embed this data in their own databases (if they were motivated).

This sounds like your “mindmap” link, where it is more individualized. But you are thinking it could be broadened and sold through an API, which is true as well. But from experience, I would rather be in control of the data.

But your outsourcing idea is good too, since people have limited time to think about this stuff. So both versions (individualized truth) and (curated and sold truth) have a place for sure.

I can see a market for this!


Well, that happened fast.