The AI company Perplexity is complaining their bots can't bypass Cloudflare's firewall
(www.searchenginejournal.com)
I don't see how categorically blocking non-human traffic is irrational given the current environment of AI scanning. And what's rational about demanding Cloudflare distinguish between the 'good guy' AI and 'bad guy' AI without proposing any methodology for doing so?
It is blocking human traffic, that’s the entire premise of the article.
Attempting to say that this is non-human traffic makes no sense if you understand how a browser works. When you load a website, your browser, acting as an agent, performs a lot of tasks for you and generates a bunch of web requests across multiple hosts.
Your browser downloads the HTML from the website, parses the contents of that file for image, script and CSS links, retrieves them from the various hosts that serve them, interprets the JavaScript, and makes further web requests based on it. Often the scripting has the browser constantly sending requests to a website in order to update the content (as with web-based email).
All of this is automated and done on your behalf. But you wouldn’t classify this traffic as non-human, because a person told the browser to perform the task, and the task resulted in a flurry of web requests and processing on the user’s behalf.
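The resource-gathering step described above can be sketched in a few lines of Python standard library code. This is purely illustrative (the HTML and URLs are made-up examples): it extracts the sub-resource links a browser would go on to fetch after downloading a page.

```python
from html.parser import HTMLParser

# Collect the sub-resource URLs a browser would fetch after downloading
# a page's HTML: images, scripts, and stylesheets.
class ResourceCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and "src" in attrs:
            self.resources.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and "href" in attrs:
            self.resources.append(attrs["href"])

# Hypothetical example page: one stylesheet, one script, one image.
html = """
<html><head>
<link rel="stylesheet" href="https://cdn.example.com/site.css">
<script src="https://cdn.example.com/app.js"></script>
</head><body>
<img src="https://images.example.com/logo.png">
</body></html>
"""

collector = ResourceCollector()
collector.feed(html)
print(collector.resources)
# ['https://cdn.example.com/site.css', 'https://cdn.example.com/app.js',
#  'https://images.example.com/logo.png']
```

One page load fans out into several requests, often to hosts other than the one you typed in, which is the point: the "extra" traffic is still driven by a single human action.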
Summarization is just another task, which is requested by a human.
The primary difference, and the reason the traffic is misclassified, is that summarization tools use a stripped-down browser. It doesn’t need JavaScript to be executed or CSS to change the background color, so it doesn’t waste resources on rendering that stuff.
Cloudflare detects this kind of environment, one that doesn’t fully render a page, and assumes that it is a web scraper. This used to be a good way to detect scraping because the average user didn’t use web automation tools and scrapers did.
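A toy version of that heuristic can be written down directly. This is a deliberately simplified sketch, not Cloudflare's actual detection (which uses many more signals): it flags a session that downloads HTML but never fetches the CSS/JS/image sub-resources a full browser would render.

```python
# Toy heuristic: a session that fetched documents but none of the assets
# a rendering browser would request looks like a stripped-down client.
# Illustrative only; real bot detection combines many signals.
def looks_like_scraper(requests):
    html_hits = sum(1 for r in requests if r["type"] == "document")
    asset_hits = sum(1 for r in requests
                     if r["type"] in ("script", "stylesheet", "image"))
    return html_hits > 0 and asset_hits == 0

# Hypothetical request logs for two sessions.
full_browser = [
    {"path": "/", "type": "document"},
    {"path": "/site.css", "type": "stylesheet"},
    {"path": "/app.js", "type": "script"},
]
stripped_client = [{"path": "/", "type": "document"}]

print(looks_like_scraper(full_browser))     # False
print(looks_like_scraper(stripped_client))  # True
```

The problem the thread is pointing at is exactly that the second pattern no longer implies "scraper": a summarizer acting for one user produces it too.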
Regular users do use automation tools now, so detecting automation doesn’t guarantee that the agent is a scraper bot.
The point of the article is that this heuristic doesn’t work anymore: users now use automation tools in ways that don’t generate tens of millions of requests per second or overwhelm servers, so those tools shouldn’t be classified the same way.
The point of Cloudflare’s bot blocking is to prevent a single user from overwhelming a site’s resources. These tools don’t do that. Go use any search summarization tool and see for yourself: it usually grabs one page from each source. That kind of traffic uses fewer resources than a human user does (because it only grabs static content).
So how would Cloudflare tell the difference between the good 'stripped down' queries and the bad? I'm still not hearing how that is supposed to work. If there's no way to tell the difference, the baby will be thrown out with the bathwater, and I can't blame them.
A large portion of this kind of traffic comes from identifiable sources, like Perplexity’s data centers, so Cloudflare could whitelist known safe sources. This seems to be what they’re doing now: a user replied to one of my comments saying their Cloudflare control panel now has the option of allowing AI queries from Perplexity.
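Mechanically, whitelisting by source is just membership testing against published IP ranges, which Python's `ipaddress` module handles directly. The ranges below are placeholders from the reserved documentation blocks, not any real operator's ranges:

```python
import ipaddress

# Hypothetical allow-list of known crawler egress ranges. In practice an
# operator would load the ranges a crawler's owner publishes; these are
# reserved documentation/test ranges used purely as placeholders.
ALLOWED_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),    # TEST-NET-1 (placeholder)
    ipaddress.ip_network("2001:db8::/32"),   # documentation prefix (placeholder)
]

def is_whitelisted(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in ALLOWED_RANGES)

print(is_whitelisted("192.0.2.55"))   # True
print(is_whitelisted("203.0.113.9"))  # False
```

IP checks alone can be spoofed via proxies, which is why verified-bot schemes usually pair them with reverse-DNS or cryptographic verification.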
Another way is to let users apply for session keys provided they obey rate limits, and whitelist users with valid session keys. Non-compliant accounts could be banned, perhaps with identity verification required to prevent ban evasion.
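The rate-limit half of that proposal is a standard token-bucket check keyed by session key. A minimal sketch, with made-up names and limits, might look like:

```python
import time

# Token bucket: each session key gets `burst` tokens that refill at
# `rate_per_sec`. A request spends one token; an empty bucket means the
# request is refused. Names and limits here are illustrative.
class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}  # session key -> bucket

def check_request(session_key: str) -> bool:
    bucket = buckets.setdefault(session_key,
                                TokenBucket(rate_per_sec=1, burst=5))
    return bucket.allow()

# A burst of 10 instant requests: the first 5 fit the burst allowance,
# the rest are refused until tokens refill.
results = [check_request("example-agent-key") for _ in range(10)]
print(results)
```

Keys that repeatedly hit the limit could be suspended, which is where the ban/identity-verification step would come in.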