irotsoma

joined 11 months ago
[–] irotsoma@lemmy.blahaj.zone 3 points 1 month ago

Problem isn't the hosting, it's the content licensing. It's difficult to get a legal copy of the content that you can actually possess. Without that, it doesn't matter whether you're streaming the content through self-hosted servers or playing it locally. The content itself is the real issue. It's often not "sold" at all, only "licensed" or "rented".

[–] irotsoma@lemmy.blahaj.zone 8 points 3 months ago* (last edited 3 months ago)

They don't want change. That's never been the goal. The goal is to create enemies, but make them people who aren't able to fight back. Fascism requires an enemy to "fight" in order to get people to ignore the bad stuff and focus on the fight. Fascism can't survive in a peaceful, happy society. The mistake the Nazis made was picking a group that was too large and that they didn't already have a noose around.

[–] irotsoma@lemmy.blahaj.zone 2 points 3 months ago

Nah, they'll be able to get out, likely for free if they don't mind a bit of detainment first. What they should be doing is finding a new place to work, or seeing if their current employers will move them to an office in another country. And big companies should be expanding their offices in other countries to make up for the loss of workers, or moving their offices back to places where American tech workers are willing to live, rather than moving to conservative states and then pretending there aren't educated workers for them to hire in the US.

[–] irotsoma@lemmy.blahaj.zone 1 points 3 months ago

The system that scours search results doesn't store the images, but they are stored somewhere. Maybe by Google, maybe not, but someone is collecting them and keeping them in order to feed whatever "AI" or hashing algorithm comes next.

And it's actually not the "whole point" in a technical sense. It's mentioned because they want to make it sound less harmful. You'd never compare actual images directly; that would take a ton of storage space and time, comparing a large set of files byte for byte. You always use hashes. If it were easier or cheaper to use the images directly, they would, just like the "AI" agents that do this in other systems need the actual images, not hashes.
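As a rough illustration of why hashes are the cheap path (plain Python, with hypothetical file paths; a sketch, not how any particular scanner actually works):

import hashlib

def file_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so only a 32-byte digest is kept."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Byte-for-byte comparison means re-reading whole files for every check.
# With digests, each file is read once, ever; after that, every
# comparison is just a short string equality test against the set.
known_hashes = {file_digest("corpus/image1.jpg")}   # hypothetical path
print(file_digest("upload.jpg") in known_hashes)    # hypothetical path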

[–] irotsoma@lemmy.blahaj.zone -2 points 3 months ago (3 children)

Problem is that this means the images have to be kept around in order to compare them. So these caches of child porn and other non-consensual images, which are often poorly secured, become targets of hacking, and thus end up letting the images spread more rather than less. And the people sharing these things don't usually use the services that do this kind of scanning. So in general it has more negative effect than positive. Instead, education to prevent abuse and support for the abused would be a better use of the money spent on these things. But that's more difficult to profit from, and it doesn't support a surveillance state.

[–] irotsoma@lemmy.blahaj.zone 1 points 3 months ago

Depends on what you want. You can have the application serve an HTTPS certificate, which could either be one issued by a globally trusted issuer or just a self-issued certificate that Caddy is configured to trust. Caddy can then present the globally trusted certificates from Let's Encrypt or whatever to the outside. But that definitely requires extra steps. Just, how secure do you want to be?
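A rough Caddyfile sketch of that chain (hostname, backend address, and CA path are placeholders, and the transport option name is from memory of the Caddy docs, so verify it before relying on it):

app.example.com {
    # Caddy presents its own automatic (Let's Encrypt) cert to clients...
    reverse_proxy https://10.0.0.5:8443 {
        # ...and re-encrypts to the backend, trusting its self-issued CA.
        transport http {
            tls_trusted_ca_certs /etc/caddy/backend-ca.crt
        }
    }
}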

[–] irotsoma@lemmy.blahaj.zone 11 points 3 months ago

I use forgejo on a raspberry pi.

[–] irotsoma@lemmy.blahaj.zone 12 points 3 months ago

Don't include the non-encoded part of the data or it will corrupt the decoding. The decoder can't tell the difference between data that's not encoded and data that is, since it's all text.
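Assuming we're talking about base64 here (my guess from context), a quick Python demo of the failure mode:

import base64

encoded = base64.b64encode(b"hello world").decode()  # 'aGVsbG8gd29ybGQ='
print(base64.b64decode(encoded))                     # b'hello world'

# Prepend some plain text. "DATA" happens to be made of valid base64
# alphabet characters, so the decoder can't tell it isn't payload and
# decodes it too: b'\x0c\x04\xc0hello world' instead of the original.
# A prefix whose length isn't a multiple of 4 would garble everything
# after it (or raise a padding error) instead.
mixed = "DATA" + encoded
print(base64.b64decode(mixed))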

[–] irotsoma@lemmy.blahaj.zone 0 points 3 months ago

No surprise. Most software from large public companies is poorly optimized because they value current profits over future sales, and in near-monopoly markets there's no real fear of losing future sales over a poor reputation. So they build it just good enough to demo and sell, as cheaply as possible, while salespeople are taught to hide the flaws. That's all it takes.

[–] irotsoma@lemmy.blahaj.zone 4 points 3 months ago (1 children)

Do you mean this config option?

[server] 
hosts = 0.0.0.0:5232, [::]:5232

That binds the service to a network interface and port. For example, your computer probably has a loopback interface, an Ethernet interface, and a WiFi interface, and you can bind to an IPv4 and/or IPv6 address on each of them. Which ones do you want Radicale to listen for traffic on, and on what port? The example above listens on all interfaces, both IPv4 and IPv6, and uses port 5232 on all of them. Of course, that port must not already be in use on any interface. Generally this listen-on-everything notation is insecure, but fine for testing. Put in the real IP addresses when you're ready.
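For example, once you're past testing, binding only to loopback (the address here is just an illustration) keeps the service reachable only from the machine itself, e.g. behind a reverse proxy on the same host:

[server]
# loopback only, IPv4; nothing outside this machine can connect directly
hosts = 127.0.0.1:5232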

[–] irotsoma@lemmy.blahaj.zone 2 points 3 months ago

I mean, the whole calculation of how they determined the "net" was ridiculous anyway, so it was never true. It's just that now it's much more difficult to fudge the numbers, since they're using so much dirty power for LLM training.

[–] irotsoma@lemmy.blahaj.zone 6 points 3 months ago

No surprise. The US hasn't taken any real action in antitrust cases in many decades. I think the AT&T split is really the only major one I can even remember having any real impact on improving competition. Since then, lobbying and campaign contributions have skyrocketed and corporations own most of the government anyway, so there's almost no chance of a court issuing a real punishment.

 

I'm starting a project to make exposing my home-hosted services to the internet a little easier to keep secure.

I have various web services such as Immich, Jellyfin, and a few others that either have storage needs too large to be affordable in the cloud, or handle more private data. Many of these are exposed to the internet. This network has a domain assigned and each service gets a subdomain. They run in a K0s Kubernetes cluster, on a separate VLAN from my home devices, on a couple of NUCs and a Raspberry Pi, with Traefik as the reverse proxy and Keycloak for OIDC.

I also have a few VPSes running things that need faster responses or don't store as much data. These use a separate domain.

Right now I have an OPNsense router that is the target of all the home domain's traffic via dynamic DNS, and it forwards that traffic to Traefik on the Kubernetes cluster.

I'd like to instead close off the home network a bit more, so I don't have to devote so much to security and can just drop the malicious connections that come in regularly. I also have the problem that my ISP still only offers 6rd for IPv6, which is basically useless. So I've been considering several tunneling technologies that would put the exit node on a VPS. But I also need to be able to access the services while at home without the traffic leaving the network.

I've narrowed it down to Headscale/Tailscale and Pangolin. I really like that Pangolin uses Traefik, because I'm already familiar with it and it's already in use in both my domains.

So I'm going to start setting up Pangolin to see how it goes, but I haven't seen many examples, and none that use Kubernetes on the internal network side. Sure, I could set up a separate Docker host for the services, but I really like that Kubernetes load-balances so that one of my NUCs is almost always in low-power mode during off hours when no maintenance tasks are running. So I don't want to put other non-Kubernetes services on there, nor do I want to set up a totally separate server if it's not necessary. Roughly what I'm picturing is sketched below.
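The idea would be to run Pangolin's Newt tunnel client as a small Deployment inside the cluster, pointed at the Pangolin box on the VPS, with Pangolin's resources then targeting Traefik's in-cluster service name instead of an IP. Untested, and the image name and env var names are my guesses from skimming the Newt README, so don't take them as gospel:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: newt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: newt
  template:
    metadata:
      labels:
        app: newt
    spec:
      containers:
        - name: newt
          image: fosrl/newt                  # assumed image name
          env:
            - name: PANGOLIN_ENDPOINT        # assumed env var names
              value: https://pangolin.example.com
            - name: NEWT_ID
              valueFrom:
                secretKeyRef:
                  name: newt-credentials     # Secret created beforehand
                  key: id
            - name: NEWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: newt-credentials
                  key: secret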

I haven't dug in too deep yet, so I was hoping to hear whether anyone else has experience setting up Pangolin with Kubernetes on the internal network side?
