Economic Incentive for AI to Get Worse

I want to talk today about the incentive AI companies have to make their product worse, not better. We all talk about the challenges that could make AI worse, but the incentive for it is a theory that I think merits more discussion.

FREE TRAINING:
Firstly, they can dumb down the software as you query it and force you to react with corrections. This equates to free training for them. If they build a new model, they may have you training it without your knowledge. Perhaps they breadcrumb you with a few good responses, and then you are bewildered as you need to hold the AI’s hand for everything you ask, even simple questions. You’ve now been employed by them for free.

DUMBER = PURCHASING MORE SPECIALIZED PLATFORMS
Dumbing down their product can pigeonhole you into buying other products that their investors are also going to profit from. Maybe OpenAI used to be really great at code, literature, and image generation. Now it may be perceptibly worse, and you are likely paying for three models that do these tasks well in isolation, instead of one that does them all. It stretches your finances and forces you into multiple subscriptions, just as if Netflix were to split your favorite shows across different streaming channels.

Finally, the founders and investors will be sure to join regulatory communities and advocate for regulation of AI, because doing so hedges out any competition. Anyone who tries to build models without their blessing won’t be able to, and those who build models that weaken their prospects of pigeonholing consumers into various subscription models will be stopped. It will become collusion-ridden and inefficient, and the regulation they claim will fix it will be the nail in the coffin that makes certain it never changes for the better.

4 Likes

To stay on topic, let’s think simpler.
Reduce quality and costs, increase net profit.
These are the economic incentives, in general.

2 Likes

Good for devs, as you can build better and more specialized tools to solve this issue with the default tools available to most people.

2 Likes

Yeah, more specialized can often be preferred. I suppose those who have specific needs would prefer paying for specific platforms. Also, a generalized platform could simply be more confused by hyper-specific requests, and thus harder to train.

Also yes, there are resources to build better tools. It’s a good point.

This is true. But not always. Quality can drive consumer behavior, although once something becomes popular and commonplace enough, maybe there is a point at which quality is curbed. I’ve read many people claiming AI performs worse than before. But there is also an incentive for consumers to complain like this, because it can spark further innovation from companies, and people take the things they consume regularly for granted.

1 Like

It’s unclear whether AI services fit into a free market model.

However, if we apply Adam Smith’s classic theory of supply and demand determining prices, we might expect the following:

Due to short supply and excess demand, prices would increase, or quality would decrease to meet the excess demand.

I also believe that innovation occurs rarely, and that it does not happen simply because incentives exist.

1 Like

Yes, but most devs are not data scientists and are working with existing models. If all available models fall into the same pattern of reducing cost and quality, there isn’t much a dev can do about it, is there?

1 Like

I would say it depends… But then I fully realize that having a linguistics background makes my opinion biased.

On the other hand, nothing prevents devs from building teams where they collaborate with data scientists (sometimes overkill in business apps), linguists, domain experts, etc., to get a highly specialized tool with great results.

Agreed. Are you working on any projects at the moment? I have been building exofi.app, which provides fully voiced AI companions and assistants. I would love to team up with some data scientists and linguistics experts to help improve the language and, specifically, the multilingual capabilities.

1 Like

Sure, it would be cool to catch up and discuss. Here is what I’m working on these days:

1 Like

Like how Apple won’t make a touchscreen Mac until they need an economic boost.

1 Like

If I remember correctly, this is called ensh1ttification.

Just like how Guilded died way back when: Roblox forced them to ensh1ttify themselves to get more people onto Roblox, and instead of working, the platform died.

And a lot of companies are doing this.

1 Like

Well said. If innovation is more spontaneous than incentive-driven, does decay behave similarly?

:sweat_smile: A great term. How best to prevent it? I guess you can make your own tools and models, but even then, how do you stop it from happening to anyone’s traction-generating ideas?

Impressive. With your skill set and experience, do you find that AI products are actually getting worse in some ways? For instance, does the property management software you work with make more mistakes, or require more fine-tuning to maintain similar levels of initial functionality?

So, withhold now, because too much of a good thing all at once isn’t quite as appealing. A practice which perhaps we, and even monkeys, engage in to build social equity and establish our place in a hierarchy? Especially developers who wish to display their true worth. Why not AI companies and their integrations as well?

Fair. Devs may not be able to do much about it, but perhaps what they will do is profit off of it. With patterns of reducing cost and quality might come an increase in demand for devs. If there are true incentives for decay at all, outside of the purely theoretical, then maybe some incentives are more concerned with preservation than greed.

Not really, I already replied to a similar question in this forum. It might just be me (take it as a strongly biased opinion):

When I look at how models perform (good or bad), I mostly see cases where humans naively expect the “magic box” to work, without taking a breath and trying to digest the idea that LLMs are basically “language transformers”: a sort of assembler that puts language into its correct form using not the rules of the language (as it seems) but the probabilities of words following each other. It’s close to what the speech region of the brain does, converting a flow of concepts into language and back, but a light version of it where little logic is yet present, rather “probabilities on steroids”.
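To make the “probabilities of words following each other” idea concrete, here is a minimal toy sketch (a bigram model in Python; nothing like a real transformer’s architecture, and the words and probabilities are made up purely for illustration):

```python
import random

# Toy next-word probabilities. A real LLM conditions on the whole
# context with a transformer, but the core operation is the same:
# sample the next token from a probability distribution.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    """Generate text by repeatedly sampling the next word."""
    words = [start]
    for _ in range(max_words):
        options = bigram_probs.get(words[-1])
        if not options:  # no known continuation: stop
            break
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```

Note how the output looks grammatical even though the model contains no grammar rules at all, only word-to-word probabilities.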

Very useful for simple tasks where operations with words and meanings don’t go too deep into logic, but rather stay limited to changes of form or to a “probabilistic match” of ideas.

But how can you expect it to perform high-level tasks with 100% precision?

I think we are at the point where further improvement of models in their current form has hit a “plateau”, and further training or parameter increases will only make them better at word manipulation, not at problem solving. So yes, the new versions may seem to perform worse if what you ask them to do is solve your entire workflow in one step, because they are getting more specialized in what their design dictates: language operations.

On the other hand, when you take the model for what it is, you see why it behaves like this and where the output grows from. Then you start to see how to bypass those limitations with other tools, and you even begin to foresee them beforehand and design the whole thing to never get into that trouble.

So what I do is use code to build workflows and to handle the logic that solves the problem, and use LLMs to help the code access the meaning of the content it operates on and/or to do simple (and constrained) tasks that classic code cannot solve without AI.

Works pretty well so far.
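For the curious, a minimal sketch of that pattern might look like the following (assuming the OpenAI Python SDK v1+; the `classify_sentiment` helper, the model name, and the prompt are illustrative, not a prescription):

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_sentiment(text: str) -> str:
    """Constrained LLM task: map free text onto one known label."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: "
                        "positive, negative, or neutral."},
            {"role": "user", "content": text},
        ],
    )
    label = resp.choices[0].message.content.strip().lower()
    # Classic code guards the output: never trust free-form text blindly.
    return label if label in {"positive", "negative", "neutral"} else "neutral"

def route_ticket(ticket: str) -> str:
    """Classic-code workflow: the branching logic lives here, not in the prompt."""
    if classify_sentiment(ticket) == "negative":
        return "escalate-to-human"
    return "auto-reply-queue"

print(route_ticket("The app deleted my data and support ignored me."))
```

The LLM only answers the one narrow question it is good at (extracting meaning), while plain code makes the decisions and validates everything that crosses the boundary.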

The management software does not use AI. The websites using that software do use some of my AI tools for some of their features.

Other tools sometimes (not that often, actually) need additional fine-tuning, either to learn to handle edge cases I had not included in the original design or, more probably, because training newer models (often on the same dataset) cuts my operating costs, as they are cheaper.

Many good thoughts are mentioned, but the logic can go both ways. The incentive to dumb down might make sense if there were nowhere else to go. However, competition is key. As long as switching to a better provider is relatively easy, the major players will need to keep improving to survive. Whether it involves general or specialized models, competition remains crucial. This doesn’t mean we should chase every new trend, but leveraging the available resources across commercial models will keep commercial research and development active. If models don’t improve, they will become obsolete. I anticipate continued evolution toward better models. The biggest risk is being confined to a single provider or model, which would diminish the incentive to improve.

1 Like