Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
The author demonstrated that the challenge can be solved in 17 ms, however, and that it only has to be solved once every 7 days per site. That works out to less than a second of compute time per site per year to be able to send unlimited requests.
The deterrent might work temporarily until the challenge pattern is recognised, but there's no actual protection here, just obscurity. The downside is real, however, for the user on an old phone who must wait 30 seconds, or, like the blogger, a user of a text browser not running JavaScript. The very need to support an old phone is what defeats a compute-based approach: the work is always trivial for a data center.
That's counting on one machine using the same cookie session continuously, or on them coding up a way to share the tokens across machines. That's not how the bot farms work.
It will obviously depend heavily on the type of bot crawling, but that's not hard coordination for an operation harvesting data for LLMs: they already have strategies to prevent nodes from all crawling the same thing, and a simple Valkey cache can store a solved JWT.
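To illustrate the point, the token-sharing could be sketched like this. A plain `Map` stands in here for the shared Valkey cache, and every name (`getToken`, the token format, the 7-day TTL) is made up for illustration, not taken from any real crawler:

```javascript
// Hypothetical sketch: a crawler fleet stores one solved challenge
// token (e.g. a JWT) per site and reuses it until it expires.
// A Map stands in for a shared Valkey/Redis cache.
const tokenCache = new Map(); // site -> { token, expiresAt }

const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function getToken(site, solveChallenge) {
  const cached = tokenCache.get(site);
  if (cached && cached.expiresAt > Date.now()) {
    return cached.token; // every node reuses the one solved token
  }
  const token = solveChallenge(site); // pay the PoW cost once per week
  tokenCache.set(site, { token, expiresAt: Date.now() + WEEK_MS });
  return token;
}
```

With this in place, only the first node to hit a site pays the solving cost; the rest of the fleet reads the cached token.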
But the vast majority of crawlers don't care to do that; it's a very specific implementation for this one problem. I actually did work at a big scraping farm, and if they encounter something like this, they just give up. It's not worth it to them. That's where the "worthiness" check is: you didn't bother to do anything to gain access.
Please tell me how you're gonna un-obscure a proof-of-work challenge requiring calculation of hashes.
And since the challenge is adjustable, you can make it take as long as you want.
You just solve it as per the blog post, because it's trivial to solve, as your browser is literally doing so in a slow language on a potentially slow CPU. It's only solving 5 digits of the hash by default.
If a phone running JavaScript in the browser has to be able to solve it you can't just crank up the complexity. Real humans will only wait tens of seconds, if that, before giving up.
This here is the implementation of sha256 in the slow language JavaScript:
You imagined that JS had to have that done from scratch, with sticks and mud? Every OS has cryptographic facilities, and every major browser exposes them to JS through an API.
As for using it to filter out bots, Anubis does in fact get it a bit wrong. You have to incur this cost at every webpage hit, not once a week. So you can't just put Anubis in front of the site: you need the JS on every page, and if the challenge isn't solved by the next hit, you pop up the full page saying 'nuh-uh', probably make the browser do a harder challenge, and also check a bunch of heuristics like go-away does.
It's still debatable whether that will stop bots, which would just have to crank SHA-256 24/7 in between page downloads, but it does add a cost that bot owners have to eat.