Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub post here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
The post is a bit low on details, but I strongly suspect this is a victim of AI scraping.
Well, they've also been maintaining the software since 2005. They said why they're closing shop, so why not take their words at face value? They have no obvious reason to lie.
Many of us have started and maintained projects and then moved on when our lives changed. That is just normal.
Yes, and the reason they state sounds a lot like AI scraping made hosting public services such a PITA that they lost the motivation to keep doing it. Lots of long-running projects that used to require very little maintenance are now effectively DDoSed by these scrapers.
It really doesn't seem like that's the case. It doesn't even make much sense. What do you think was being AI-scraped? The source code?
It makes a lot of sense. Both the git repos they hosted and things like an RSS feed reader are prime targets for AI scrapers, and at the same time quite database-query-heavy on the backend, so the scraping has a big impact on the cost of running these services.
And yes, source code is among the most targeted data for AI scrapers to ingest, mainly to train coding assistants, but apparently it also helps LLMs understand logic better.
First, the source code is on GitHub.
Second, RSS aggregators are self-hostable, not a service provided by the dev. The dev would have no issues if a public instance of tt-rss hosted by someone else got scraped.
Third, RSS aggregators don't really tend to be public-facing. Due to their personal nature they don't tend to be open; they are more account-based.
Sorry, I really don't see the case here.
What? They explicitly talk about shutting down their self-hosted infrastructure which includes two git services and other targets of AI scraping. Did you even read the post?
They are closing the whole project.
Specifically, they say that they are tired of pushing fixes and that they don't find excitement in maintaining the project, with zero mention at all of being scraped or having any kind of AI-related issue.
I don't know if you knew the project before seeing this post. I did; I was deciding between this and FreshRSS and chose FreshRSS specifically because I knew that the end of tt-rss was close (this was like 2 years ago). There were a lot of signs that development was ending and the project was on track to being abandoned.
No, they are shutting down their publicly hosted infrastructure and say that their project is "finished" anyway, so it doesn't matter that much as a justification. But the main point of the post is the public-facing infrastructure and how they lost the motivation to run it.