Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
We plan on having as many "web of trust"-like features as possible at some point. For example, you could get recommended content/communities that users you upvote participate in; this can be implemented easily, and it's very open and P2P.
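A minimal sketch of what that recommendation pass could look like, assuming hypothetical local data structures for upvote history and community participation (none of these names come from an actual API):

```typescript
// Hypothetical local records; in a real client these would come from
// the P2P sync layer, not from any centralized index.
interface UpvoteRecord {
  author: string; // user whose post/comment we upvoted
  count: number;  // how many times we upvoted them
}

interface Participation {
  user: string;
  community: string;
  posts: number; // how active the user is in that community
}

// Score each community by summing (our upvotes of a user) x (that
// user's activity there), then return the top-k communities we
// haven't already joined.
function recommendCommunities(
  upvotes: UpvoteRecord[],
  participation: Participation[],
  joined: Set<string>,
  k = 5,
): string[] {
  const upvotesByAuthor = new Map(
    upvotes.map(u => [u.author, u.count] as [string, number]),
  );
  const scores = new Map<string, number>();
  for (const p of participation) {
    const weight = upvotesByAuthor.get(p.user) ?? 0;
    if (weight === 0 || joined.has(p.community)) continue;
    scores.set(p.community, (scores.get(p.community) ?? 0) + weight * p.posts);
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, k)
    .map(([community]) => community);
}
```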
But in our opinion it's not technically possible to have moderation/discovery that is fully web of trust, for a few reasons:
- You need to bootstrap from somewhere; you can't just start syncing/downloading content randomly and then manually like/dislike stuff to build your personal web of trust from scratch. People don't want to download GBs of data and like/dislike things for hours just to get started.
- A pure web of trust is easily gameable: you can make millions of bots that upvote each other so they rank higher in other people's webs of trust.
- A pure web of trust has no DDoS resistance: someone can completely DDoS the gossip network and prevent you from ever bootstrapping a real web of trust.
Also, even assuming someone developed a scalable, UX-friendly, and DDoS-resistant pure web of trust algorithm, it would probably have a UX very different from Reddit (and message boards in general), and our goal is to recreate the UX of Reddit/message boards exactly, because we like them. The thing we don't like about them is the centralization/commercialization/etc. For example, we don't like that Reddit killed Apollo/RIF, and we don't like that it bans very popular subs that a lot of people enjoy.
Sure, so bake in a set of default "mods" whose influence fades as people interact with the moderation system. Start with a CSAM bot, for example (fairly common on Reddit, so there's plenty of prior art here), and let users manually opt in to make those moderators permanent.
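As a sketch of how that fading influence could work (the decay curve and constants here are invented for illustration, not taken from any existing system):

```typescript
// Hypothetical: weight of a bundled default moderator as the user
// builds up their own voting history. Pinned (opted-in) moderators
// keep full weight forever.
function defaultModWeight(
  userVoteCount: number,
  pinned: boolean,
  halfLife = 500, // votes until the default mod's weight halves (arbitrary)
): number {
  if (pinned) return 1;
  return Math.pow(0.5, userVoteCount / halfLife);
}

// After ~500 of your own votes, the bundled mods count half as much
// as they did at install time; your personal trust graph fills the gap.
```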
I don't think anyone wants a pure web of trust, since that relies on absolute trust of peers, and in a system like a message board, you won't have that trust.
Instead, build it with transitive trust: weight peers based on how much you align with them, trust those they trust a bit less, and so on.
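A minimal sketch of that transitive weighting, assuming each peer publishes a list of peers they trust (all names and constants are illustrative):

```typescript
// directTrust holds alignment scores in [0, 1] for peers we've
// scored ourselves; edges maps each peer to the peers *they* trust.
function transitiveTrust(
  directTrust: Map<string, number>,
  edges: Map<string, string[]>,
  decay = 0.5,  // each hop is worth half the previous one
  maxHops = 3,
): Map<string, number> {
  const trust = new Map(directTrust);
  let frontier = [...directTrust.keys()];
  for (let hop = 1; hop <= maxHops; hop++) {
    const next: string[] = [];
    for (const peer of frontier) {
      const base = trust.get(peer) ?? 0;
      for (const friend of edges.get(peer) ?? []) {
        const inherited = base * decay;
        // Keep the strongest path we've seen to each peer.
        if (inherited > (trust.get(friend) ?? 0)) {
          trust.set(friend, inherited);
          next.push(friend);
        }
      }
    }
    frontier = next;
  }
  return trust;
}
```

Capping the hop count keeps the computation bounded and local, which matters in a P2P setting where you can't recompute a global graph.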
Maybe? That really depends on how you design it. If you require a lot of samples before trusting someone (e.g. samples where you align on votes), the bots would need to be pretty long-lived to build clout. And at some point someone is bound to notice bot-like behaviour and report it, which would limit how much influence the bot has over visible content.
That can happen with any P2P system, yet it's not that common a problem.
I don't see why it would. All you need is a voting system that distinguishes agree/disagree from relevant/irrelevant. Reddit/Lemmy has everything but that distinction, and people tend to use votes as agree/disagree regardless, so making it explicit could lead to better moderation.
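For instance, a vote could carry the two signals separately (a sketch; the type and field names are hypothetical):

```typescript
type Agreement = "agree" | "disagree" | "none";
type Relevance = "relevant" | "irrelevant" | "none";

interface Vote {
  targetId: string;     // post or comment being voted on
  agreement: Agreement; // feeds ranking within a discussion
  relevance: Relevance; // feeds moderation/filtering decisions
}

// Example: I disagree with this comment but it's on-topic, so it
// should stay visible and not count against the author's standing.
const vote: Vote = {
  targetId: "comment-123",
  agreement: "disagree",
  relevance: "relevant",
};
```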
You'd need to tweak the weights, but the core algorithm doesn't need to be super complex: keep track of the N most aligned users plus some number of "runners-up," so you have a pool to promote from when you start aligning more with someone else. Keep all of that local and drop posts/comments that don't meet some threshold.
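Here's a sketch of that local tracker (constants and names invented for illustration):

```typescript
// Alignment scores go up when a peer's votes match ours and down
// when they don't; only a bounded pool of peers is remembered.
class AlignmentTracker {
  private scores = new Map<string, number>();

  constructor(
    private topN = 100,      // peers whose moderation signals we apply
    private runnersUp = 400, // pool eligible to be promoted
  ) {}

  // Called whenever we and a peer voted on the same item.
  recordSample(peer: string, agreed: boolean): void {
    const delta = agreed ? 1 : -1;
    this.scores.set(peer, (this.scores.get(peer) ?? 0) + delta);
    this.prune();
  }

  // Keep only topN + runnersUp peers; everyone else is forgotten.
  private prune(): void {
    const keep = this.topN + this.runnersUp;
    if (this.scores.size <= keep) return;
    const sorted = [...this.scores.entries()].sort((a, b) => b[1] - a[1]);
    this.scores = new Map(sorted.slice(0, keep));
  }

  // The top group: peers whose votes actually filter our feed.
  topPeers(): string[] {
    return [...this.scores.entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(0, this.topN)
      .map(([peer]) => peer);
  }
}
```

Because the pool is bounded and everything stays on the client, a peer you start aligning with simply overtakes a current top-N member in the sort; nothing global needs recomputing.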
It's way more complex than centralized moderation and will need lots of iteration to tune properly, but I think it can work reasonably well at scale since everything is local.