this post was submitted on 26 Nov 2025
403 points (96.8% liked)

Selfhosted

Got a warning for my blog going over 100GB in bandwidth this month... which sounded incredibly unusual. My blog is text and a couple images and I haven't posted anything to it in ages... like how would that even be possible?

Turns out it's possible when you have crawlers going apeshit on your server. Am I even reading this right? 12,181 with 181 zeros at the end for 'Unknown robot'? This is actually bonkers.

Edit: As Thunraz points out below, there's a footnote that reads "Numbers after + are successful hits on 'robots.txt' files" and not scientific notation.

Edit 2: After doing more digging, the culprit is a post where I shared a few wallpapers for download. The bots have been downloading these wallpapers over and over, using 100GB of bandwidth in the first 12 days of November. That's when my account was suspended for exceeding bandwidth (an artificial limit I put on there a while back and forgot about...), which is also why the 'last visit' for all the bots is November 12th.

[–] hoshikarakitaridia@lemmy.world 152 points 3 days ago (2 children)

Fucking hell.

Yeah and that's why people are using cloudflare so much.

[–] artyom@piefed.social 123 points 3 days ago (2 children)

One corporation DDoSes your server to death so that you need the other corporation's protection.

[–] MaggiWuerze@feddit.org 82 points 3 days ago

basically protection racket

[–] muffedtrims@lemmy.world 29 points 2 days ago (1 children)

That's a nice website you gots there, would be ashame if something weres to happen to it.

[–] Agent641@lemmy.world 6 points 2 days ago (1 children)

We accidentally the whole config file

[–] some_kind_of_guy@lemmy.world 3 points 2 days ago

Somebody set up us the bomb

[–] Lee@retrolemmy.com 60 points 3 days ago (1 children)

A friend (works in IT, but asks me about server-related things) of a friend (not in tech at all) has an incredibly low-traffic niche forum. It was running really slow (on shared hosting) due to bots. The forum software counts unique visitors per 15 minutes, and it was about 15k/15 mins for over a week. I told him to add Cloudflare. It dropped to about 6k/15 mins. We experimented with toggling Cloudflare off/on and it was pretty consistent. So then I put Anubis on a server I have and they pointed the domain to my server. Traffic dropped to less than 10/15 mins. I've been experimenting with toggling Anubis/Cloudflare on/off for a couple months now with this forum. I have no idea how the bots haven't scraped all of the content by now.

TLDR: in my single isolated test, Cloudflare blocks 60% of crawlers. Anubis blocks presumably all of them.

Also, if anyone active on Lemmy runs a low-traffic personal site and doesn't know how to run Anubis or can't (e.g. shared hosting), I have plenty of excess resources and can run Anubis for you off one of my servers (in a data center) at no charge (there should probably be some language about it not being perpetual: I'd have the right to terminate at any time without cause or notice, no SLA, etc.). Be aware that HTTPS would be terminated at my Anubis instance, so I could log/monitor your traffic if I wanted to; that's a risk you should be aware of.

[–] MinFapper@startrek.website 15 points 2 days ago (2 children)

It's interesting that anubis has worked so well for you in practice.

What do you think of this guy's take?

https://lock.cmpxchg8b.com/anubis.html

[–] Lee@retrolemmy.com 1 points 51 minutes ago

Is there a particular piece? I'll comment on what I think are the key points from his article:

  1. Wasted energy.

  2. It interferes with legitimate human visitors in certain situations. Simple example would be wanting to download a bash script via curl/wget from a repo that's using Anubis.

3A) It doesn't strictly meet the requirement of a CAPTCHA (which should be something a human can do easily, but a computer cannot) and the theoretical solution to blocking bots is a CAPTCHA.

and very related

3B) It is actually not that computationally intensive and there's no reason a bot couldn't do it.

Maybe there were more, but those are my main takeaways from the article and they're all legit. The design of Anubis is in many respects awful. It burns energy, breaks (some) functionality for legitimate users, unnecessarily challenges everyone, and probably the worst of it, it is trivial for the implementer of a crawling system to defeat.
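On point 2, the breakage is easy to observe: a client that doesn't run JavaScript (curl, wget, a plain HTTP library) receives the challenge HTML instead of the file it asked for. A defensive client-side check might look like this (a sketch; the detection heuristic is mine, not part of any tool):

```python
import urllib.request

def looks_like_challenge(content_type: str, body: bytes) -> bool:
    """Heuristic: did we get an HTML interstitial (e.g. an Anubis
    challenge page) instead of the raw file we asked for?"""
    return ("text/html" in content_type.lower()
            or body.lstrip()[:15].lower().startswith(b"<!doctype html"))

def fetch_raw(url: str) -> bytes:
    """Fetch a raw script/file, failing loudly when a JS challenge
    page comes back instead of the expected content."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
        ctype = resp.headers.get("Content-Type") or ""
    if looks_like_challenge(ctype, body):
        raise RuntimeError(f"{url} returned an HTML page, not the file")
    return body
```

Without a check like this, `curl ... | bash` against a protected repo silently pipes challenge HTML into the shell.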

I'll cover wasted energy quickly -- I suspect Anubis wastes less electricity than the site would waste servicing bot requests, granted this is site specific as it depends on the resources required to service a request and the rate of bot requests vs legitimate user requests. Still it's a legitimate criticism.
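For a rough sense of scale on the energy point (assuming an Anubis-style target of N leading zero hex digits, where each digit is zero with probability 1/16):

```python
# Expected number of SHA-256 attempts per visitor for a target of
# `difficulty` leading zero hex digits: 16**difficulty on average.
for difficulty in (2, 4, 5):
    print(difficulty, 16 ** difficulty)
```

At difficulty 4 that's ~65k hashes, a fraction of a second on commodity hardware, so the per-visitor cost is small; it's the aggregate across all visitors (and retries) that the criticism is about.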

So why does it work, and why am I a fan? It works simply because crawlers haven't implemented support to break it, even though it would be quite easy to do so. I'm actually shocked that Anubis isn't completely ineffective already. I held off on testing it because I had assumed it would be adopted rather quickly by sites and, given the simplicity with which it can be defeated, that it would soon be defeated and therefore useless.
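To make "easy to break" concrete: the challenge is a SHA-256 proof of work (find a nonce so the hash meets a leading-zeros target), and a crawler-side solver is only a few lines. This is a sketch; the exact challenge string format and how the nonce gets submitted back are assumptions:

```python
import hashlib

def solve_pow(challenge: str, difficulty: int) -> int:
    """Brute-force an Anubis-style SHA-256 proof of work: find a
    nonce such that sha256(challenge + nonce) starts with
    `difficulty` leading zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = solve_pow("example-challenge", 4)  # hypothetical challenge string
```

A browser does this in JavaScript; a crawler doing it natively in C or on a GPU would be faster than the legitimate visitors the check is meant to let through.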

I'm quite surprised for a few reasons that it hasn't been rendered ineffective, but perhaps the crawler operators have decided that it doesn't make economic sense. I mean if you're losing say 0.01% (I have no idea) of web content, does that matter for your LLMs? Probably if it was concentrated in niche topic domains where a large amount of that niche content was inaccessible, then they would care, but I suspect that's not the case. Anyway while defeating Anubis is trivial, it's not without a (small) cost and even if it is small, it simply might not be worth it.

I think there may also be a legal element. At a certain point, I don't see how these crawlers aren't in violation of various laws related to computer access. What I mean is, these crawlers are in fact accessing computer systems without authorization. Granted, you can take the view that connecting a computer to the internet implies consent, but that's not how the laws are written, at least in the countries I'm familiar with. Things like robots.txt can sort of be used to inform what is/isn't allowed to be accessed, but it's a separate request, mostly used to help with search engine indexing, not all sites use it, etc. Something like Anubis is very clear and in your face, and I think it would be hard for a crawler operator who specifically bypassed it to argue that the access was authorized.

I've dealt with crawlers as part of devops tasks for years, and years ago it was almost trivial to block bots with a few heuristics that needed updating from time to time. That has become quite difficult, and it's not really practical for people running small sites, or probably even for a lot of open source projects that are short on people. Cloudflare is great, but I assure you, it doesn't stop everything. Even in commercial environments years ago we used Cloudflare enterprise, and it absolutely blocked some bots, but we'd still get tons of bot traffic that wasn't being blocked. So what do you do if you run a non-profit, a FOSS project, or some personal niche site that doesn't have the money or volunteer time to deal with bots as they come up, when those bots use legitimate user agents and come from thousands of random IPs, including residential ones? (It used to be you could block some data center ASNs in a particular country, until that stopped working.)

I guess the summary is: bot blocking could be done substantially better than what Anubis does, and with less downside for legitimate users, but it works (for now), so maybe we should only concern ourselves with the user-hostile aspect of it at this time -- preventing legitimate users from doing legitimate things. With existing tools, I don't know how else someone running a small site can deal with this easily, cheaply, without introducing things like account sign-ups, and without violating people's privacy. I have some ideas that could offer big improvements here, but I have a lot of other projects I'm bouncing between.

[–] pipe01@programming.dev 7 points 2 days ago (1 children)

I wouldn't be surprised if most bots just don't run any JavaScript so the check always fails

[–] Lee@retrolemmy.com 2 points 47 minutes ago

It could be, but they seem to get through Cloudflare's JS. I don't know if that's because Cloudflare is failing to flag them for JS verification or if they specifically implement support for Cloudflare's JS verification since it's so prevalent. I think it's probably due to an effective CPU time budget. For example, Google Bot (for search indexing) runs JS for a few seconds and then snapshots the page and indexes it in that snapshot state, so if your JS doesn't load and run fast enough, you can get broken pages / missing data indexed. At least that's how it used to work. Anyway, it could be that rather than a time cap, the crawlers have a CPU time cap and Anubis exceeds it whereas Cloudflare's JS doesn't -- if they did use a cap, they probably set it high enough to bypass Cloudflare given Cloudflare's popularity.