I honestly think the opposite. I think robots.txt will soon be enforceable by law, and sites like the Wayback Machine will be beaten into submission. Who would want other, usually paid-for, services to leech off their content? Instead, I think sites will open paid API endpoints through which others can gather the data and have a legal right to it. Unless you're the product, you'll need to pay.
I mean, look at Facebook's code. ReactJS already does a pretty good job of preventing simple scraping (the DOM is assembled client-side, so a plain HTTP fetch sees mostly an empty shell), but they also layer on honeypots, markup obfuscation, other hidden tricks, and even machine learning to catch bots.
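To make the honeypot point concrete, here's a minimal sketch of the idea — my own illustration, not Facebook's actual code, and the `/trap/` path and markup are made up:

```python
# Honeypot sketch: the page contains a link that's invisible to humans,
# so anything that requests it is almost certainly a bot.
from bs4 import BeautifulSoup

# Hypothetical page: one real link, one honeypot hidden via inline CSS.
HTML = """
<nav>
  <a href="/profile/alice">Alice</a>
  <a href="/trap/9f3c" style="display:none">.</a>
</nav>
"""

soup = BeautifulSoup(HTML, "html.parser")

# A naive scraper grabs every href -- including the trap.
naive = [a["href"] for a in soup.find_all("a")]
print(naive)  # ['/profile/alice', '/trap/9f3c']

# Server-side, any hit on /trap/* flags the client. Avoiding the trap
# forces the scraper to evaluate CSS (and, with React, rendered JS),
# which is exactly the cost these defenses impose.
visible = [a["href"] for a in soup.find_all("a")
           if "display:none" not in (a.get("style") or "").replace(" ", "")]
print(visible)  # ['/profile/alice']
```

Real deployments hide the trap in external stylesheets or generated class names rather than inline styles, which makes the filtering step above far harder than one string check.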