Saik0Shinigami

[–] Saik0Shinigami@lemmy.saik0.com 22 points 3 months ago* (last edited 3 months ago) (4 children)

So much fear mongering and incorrect statements... and I'm only 3 minutes in. I can't...

Nearly every encryption mechanism currently in use on the modern internet is safe from quantum attack in practice. Breaking RSA-2048 would require millions of stable, error-corrected qubits. I believe the biggest systems right now are on the order of a thousand noisy physical qubits at most, with nowhere near that many error-corrected ones.

The NIST Post-Quantum Cryptography project has finalized new quantum-resistant algorithms like CRYSTALS-Kyber and CRYSTALS-Dilithium (standardized as ML-KEM and ML-DSA). These will replace RSA and ECC long before practical quantum attacks exist. Migration has already started.
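
If you're curious what "migration" actually looks like under the hood: current deployments are mostly hybrid, mixing a classical key exchange with a PQC KEM so an attacker has to break both. Here's a minimal Python sketch of just that combining step; the ML-KEM half is a random placeholder and the `info` label is made up, while the X25519/HKDF parts use the pyca/cryptography library:

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
import os

# Classical half: an ordinary X25519 ephemeral key exchange.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
classical_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum half: stand-in for an ML-KEM (Kyber) shared secret.
# Placeholder only -- a real deployment gets this from a KEM library.
pq_secret = os.urandom(32)

# Both secrets feed one KDF, so an attacker must break BOTH X25519
# and the KEM to recover the session key.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid key schedule (illustrative label)",
).derive(classical_secret + pq_secret)

print(session_key.hex())
```

This is the same basic idea behind the hybrid groups browsers already negotiate; it's a sketch of the concept, not any library's actual key schedule.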

Symmetric cryptography is mostly safe. Algorithms like AES, SHA-2, SHA-3, and similar remain secure against quantum attacks. Grover's algorithm can at best halve their effective key strength. Example: AES-256 becomes about as strong as AES-128 against a quantum attacker. To brute-force an AES-128 key with current hardware efficiency you'd need ~88 TW of power... Even if we make it 10 or 100x more efficient over time... it's too expensive. We don't have the resources to power anything big enough to crack AES-128... The biggest nuclear reactor unit on the planet (Taishan) only puts out a mere 1,660 MWe...
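
Rough back-of-envelope for the numbers above, in Python. The ~88 TW figure is the post's own estimate, taken at face value here; the rest is straight arithmetic:

```python
# Back-of-envelope numbers for the claims above. The ~88 TW figure is the
# post's own estimate; the rest is arithmetic.

# Grover's speedup is quadratic, so searching a 2^256 keyspace takes ~2^128
# quantum iterations: AES-256 vs a quantum attacker ~= AES-128 vs a classical one.
grover_iterations = 2 ** (256 // 2)
print(f"Grover iterations to search AES-256's keyspace: {grover_iterations:.2e}")

crack_power_w = 88e12    # ~88 TW, the figure quoted above
taishan_w = 1.66e9       # one Taishan EPR unit, ~1,660 MWe net
print(f"Taishan-class reactors needed to supply that: {crack_power_w / taishan_w:,.0f}")
```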

It's not happening in our lifetimes, and probably not at all until we start harvesting stars.

Edit: Several typos.

Edit 2: For the AES-256 example that gets reduced to AES-128: it would take implementing efficiencies that reduce power usage by 1000x (there are a few methods that might get worked out in our lifetimes... let's just take them as functional right now). Then you'd need 55 of the biggest nuclear reactors we have on the planet... Then you wait a year for the computer to finish the compute. That decrypts one key.
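
Sanity-checking Edit 2 under its own assumptions (a 1000x efficiency gain, Taishan-class reactors, one year of runtime per key) lands in the same ballpark, roughly 53 reactors:

```python
# Sanity check of Edit 2's figures, using its own assumptions.
baseline_power_w = 88e12      # the ~88 TW estimate from above
efficiency_gain = 1000        # assume a 1000x improvement actually materializes
taishan_w = 1.66e9            # ~1,660 MWe net per Taishan EPR unit
seconds_per_year = 365 * 24 * 3600

power_needed_w = baseline_power_w / efficiency_gain     # 88 GW
reactors = power_needed_w / taishan_w                   # ~53 reactors
energy_per_key_j = power_needed_w * seconds_per_year    # ~2.8e18 J over one year

print(f"~{reactors:.0f} Taishan-class reactors running for a year, per key")
print(f"~{energy_per_key_j:.1e} J of electrical energy, per key")
```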

Weaker keys might be a problem. Sure. But by the time we're there... it won't matter. For things like Signal, Matrix, or anything else that's actively developed... Someone might store the conversation in some massive datacenter out there... and might decrypt it 200 years from now. That's your "risk"... long after everyone reading this message is dead.

Edit 3: Because I hadn't looked at it in a few months... I decided to check in on Let's Encrypt's (LE) "answer" to it, since that's what most people here are probably interested in and using. First... remember that Let's Encrypt rotates keys every 90 days. So for your domain, that's 4 keys a year to crack at a minimum. Except that ACME clients like to renew near the halfway point... so more realistically 8 keys a year to decrypt a year's worth of data. But it turns out that browsers have already implemented PQC key exchange... and many certificate authorities already support it as well. OpenSSL also supports it from 3.5.0+...

https://community.letsencrypt.org/t/roadmap-request-post-quantum-cryptography/231143/9

https://developers.cloudflare.com/ssl/post-quantum-cryptography/pqc-support/

Apparently LE is even moving to MUCH shorter certs... https://letsencrypt.org/2025/02/20/first-short-lived-cert-issued 6 days... So a new key every half-week (remember ACME clients want to renew about halfway through the cycle)... or ~100 keys a year to break.

Even TODAY, you're not going to need to worry about "weak" encryption for decades. It will take time for the quantum resources to become available... and it will take time to go through the backlog of keys that they are interested in decrypting, EVEN IF they're storing 100% of data somewhere. You WILL be long dead before they even have the opportunity to care about you and your data... The "200 years from now" reference above assumes that humans can literally harvest suns for power and break really really big problems in the quantum field. It's really going to be on the order of millennia, if not longer, before your message to your mom from last year gets decrypted.

LE doesn't have PQC on the roadmap quite yet... probably because they understand there's still some time before it even matters, and they want to wait a bit until the cryptography around the new mechanisms is more hashed out.
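
The key-count math, assuming ACME clients renew roughly halfway through each cert's lifetime (and only counting certificate keys, not the per-connection ephemeral keys TLS already uses):

```python
# Distinct certificate keys per year an attacker would have to break,
# assuming ACME clients renew about halfway through each cert's lifetime.
def keys_per_year(cert_lifetime_days: float) -> float:
    effective_rotation_days = cert_lifetime_days / 2
    return 365 / effective_rotation_days

print(f"90-day certs: ~{keys_per_year(90):.0f} keys/year")  # ~8
print(f"6-day certs:  ~{keys_per_year(6):.0f} keys/year")   # ~122
```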

Edit 4: At this point I feel that this post needs a TL;DR...

If you're scared... rotate keys regularly. The more you rotate, the more keys will have to be broken to get the whole picture... ACME services (Let's Encrypt) already do this. You'll be fine with current-day technology until long after (probably millennia after) you're dead. No secret you're hiding will matter 1000 years from now.

Edit 5: Fuck... I need to stop thinking about this... but I just want to point out one more thing... It's actually likely that over the next 100 (let alone 1000s of) years, a few bits will rot in your data on whatever cluster they're storing it on. So even IF they manage to store it... and manage to get a cluster big enough that it either takes so little power that they can finally run it... or get a power source that can rival literal suns... a few bits flipped here and there will happen. Your messages and data will start to scramble over time just by the very nature of... well... nature... Every solar flare. Every gravitational anomaly. Every transmission from space or gamma particle... has a chance to OOPS a 0 into a 1 or vice versa. Think of every case you've heard of Amazon or Facebook accidentally breaking BGP for their whole service and being down for hours... Over the course of 100 years... your data will likely just die, get lost, be forgotten, get broken, etc. The longer it takes for them to figure this out (and science is NOT on their side on this matter), the less likely they even have a chance to recover anything, let alone decrypt it in a timely manner to resolve anything in our lifetimes.
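
Purely illustrative sketch of the bit-rot point, with a completely made-up per-bit error rate and archive size (real numbers depend entirely on the medium, ECC/scrubbing, and how often the data gets migrated):

```python
import math

# Illustrative only: chance that an archived blob silently picks up at least
# one flipped bit over a long span. The per-bit error rate is a MADE-UP
# placeholder; real rates depend on the medium, ECC/scrubbing, and migration.
bits = 8 * 1024**4      # a hypothetical 1 TiB archive of captured traffic
rate_per_year = 1e-14   # hypothetical undetected flips per bit per year
years = 100

expected_flips = rate_per_year * bits * years
p_at_least_one = 1 - math.exp(-expected_flips)   # Poisson approximation
print(f"Expected silent flips: {expected_flips:.1f}")
print(f"P(at least one flip over {years} years): {p_at_least_one:.4f}")
```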

[–] Saik0Shinigami@lemmy.saik0.com 4 points 3 months ago* (last edited 3 months ago)

These timers have no way of knowing whether the air is too humid.

They want a cooldown period so the unit isn't cycling constantly.

e.g. turning on and off 30 times in an hour because the sensor triggers the moment it sees 46% when it's set to 45%.

They want it to pull humidity down to 45%, wait an hour no matter what, then trigger again the next time it sees 46% or greater, which could be immediately... or 5 more hours later.

A pure timer wouldn't get the same effect at all.

Best answer I can think of offhand would be Home Assistant related. Get a humidity sensor and a Z-Wave switch/outlet. Use a dumb dehumidifier that turns on as long as it has power...

On humidity sensor change, check if it's above 45%. If it is, turn on power. Wait until it's below 45% again... turn power off, then wait 60 minutes. Make sure the automation is set to not run concurrently, so the currently running automation must complete its 60-minute cooldown before it can run again.
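
This isn't a Home Assistant config, just a plain-Python sketch of that control logic so the sequencing is concrete; `read_humidity()` and `set_power()` are hypothetical stand-ins for your sensor and the Z-Wave outlet:

```python
import time

HIGH = 46.0          # turn on when relative humidity reaches this (%)
LOW = 45.0           # keep running until we drop below this (%)
COOLDOWN_S = 3600    # mandatory 60-minute rest after each cycle
POLL_S = 30          # how often to check the sensor

def read_humidity() -> float:
    """Hypothetical stand-in for your humidity sensor."""
    raise NotImplementedError

def set_power(on: bool) -> None:
    """Hypothetical stand-in for the Z-Wave switch/outlet."""
    raise NotImplementedError

while True:
    if read_humidity() >= HIGH:
        set_power(True)                   # dumb dehumidifier starts when powered
        while read_humidity() >= LOW:     # pull humidity down below 45%
            time.sleep(POLL_S)
        set_power(False)
        time.sleep(COOLDOWN_S)            # wait an hour no matter what
    time.sleep(POLL_S)
```

Because it's one sequential loop, the 60-minute sleep naturally gives you the "don't run concurrently" behavior; in Home Assistant you get the same effect by setting the automation's mode to single.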

[–] Saik0Shinigami@lemmy.saik0.com 2 points 3 months ago

I've shared it on Lemmy before somewhere...

Yeah, found it... this thread: https://lemmy.saik0.com/post/1588364

For the stuff I do... it's not overkill at all. By the standard of any individual's house... yeah... it's pretty overkill.

[–] Saik0Shinigami@lemmy.saik0.com 6 points 3 months ago

Then yes, you'd probably be fine with any competent mini PC and your favorite flavor of firewall... I would recommend OPNsense personally, but there are others out there that I'm sure would meet your needs.

Just about any decent mini PC can handle 1Gbps, from what I saw a few years ago. You need much bigger horses to get up to 10Gbps. But I wouldn't know what the minimum specs would be... I've been stuck in the higher-end world for a while... so that information has kind of vanished from my memory... Someone else can chime in? I suspect the little baby N150 units could probably do 1Gbps. Especially since you're only doing minimal throughput on your WireGuard as well (I have a few nodes and can push into 1Gbps, so once again I'm resource-heavy... and thus don't have the lower requirements committed to memory anymore).

ISP -> ARRIS modem -> minipc -> Switch -> anything else you need including access points.

All of the "routers" that have wifi and a boatload of ports (unless we're talking enterprise stuff) are hybrid devices: router+switch+AP. That's convenient for typical consumers, but quite restrictive for those who want to go prosumer or higher. For example... Wifi 7 just released last year. I swapped my AP out and now I have it. I can also mount that AP in the ceiling where it will give me the best coverage. Compare that to the consumer answers of "replace the whole unit" or "add a shitton of mesh nodes that ultimately kind of suck", which manufacturers love because you spend more money on their products. Or say you want to add a PoE device... well, now that consumer unit is useless to you.

[–] Saik0Shinigami@lemmy.saik0.com 16 points 3 months ago (4 children)

We're missing crucial information.

What bandwidth do you get from your ISP? Do you want to run things like IDS/IPS? What kind of throughput do you want from WireGuard?

What it takes to connect a 100/10 DOCSIS-based service is completely different from a 1/100 service, which is completely different again from an 8/8Gbps fiber service.

You said WireGuard on the modem... your modem shouldn't be doing any routing of tunnels at all. Because of that misspeak, I almost suspect you don't know the difference between a router and a modem. If you don't, you need to go watch some networking-basics YouTube videos and get a firm understanding before you commit to buying stuff you have no idea what to do with.

In my case, I'm blessed with 8/8 fiber. I have a full fancy Supermicro server running OPNsense: 10Gbps on the WAN side, 40Gbps on the LAN side for multiple VLANs (about a dozen). It's overkill, and I only have it because my ISP offers it... but that means the "router" I need to actually use that 8Gbps also costs ~$2k. With big bandwidth comes big processing overhead if you want to do any form of protection and tunneling (VPN or SDN).

You shouldn't really care how many interfaces your router has, outside of potentially doing LACP-style redundancy. Use a switch to get more ports for your devices.

[–] Saik0Shinigami@lemmy.saik0.com 8 points 3 months ago

Sure, but my point is that it's no different from an AUR/user repo. At some point you're just trusting someone else.

I think the whole "don't pipe bash scripts into a terminal" rule is too broad. It's the same risk factor as blind trust in ANY repository. If you trust the repo, then what does it matter whether you install the program via the repo or via a bash script? It's the same. In this specific case though, I trust the repo pretty well. I've read well more than half of the lines of code I actually run. When tteck was running it... he was very, very sensitive about what was added, and I had 100% faith in it. Since the community took it over after his death it seems like we're still pretty well off... but it's been growing much faster than I can keep up with.

But none of these issues are any different than installing from AUR.

The rule should just be "don't run shit from untrusted sources" which could include AUR/repo sources.

[–] Saik0Shinigami@lemmy.saik0.com 62 points 3 months ago* (last edited 3 months ago)

No, they didn't...

https://discuss.grapheneos.org/d/25099-pixel-10-still-too-early-to-ask-us-when-it-will-be-supported

Our Pixel 10 support will likely only be possible to complete after we finish porting to Android 16 QPR1 which is being released in September.

They don't know IF they can even support it until they figure out the new releases that are jacking up their dev cycle.

It will be significantly more work than usual to support the new Pixel 10 phones since Android 16 removed the Pixel device trees from the Android Open Source Project. However, that was already only part of what we need for device support and we worked around it by expanding our automated tooling.

This is exactly the issue I'm referencing. Google can completely sabotage this route. We don't know yet.

Edit: I should clarify that they did say they're trying to continue, but should they not be able to crack the device-tree issue, they'll be stuck. Nothing they've released says they've figured this issue out yet.

[–] Saik0Shinigami@lemmy.saik0.com 7 points 3 months ago (2 children)

Eh... I have my own repo that pulls the PVE repo, updates a bunch of things to how I want them, and then runs a local version of the main page. While I don't stare at every update they make... there are likely enough of us out there looking at the scripts that we'd sound the alarm if something off was happening.

[–] Saik0Shinigami@lemmy.saik0.com 5 points 3 months ago* (last edited 3 months ago)

AUR packages don't necessarily clean themselves up properly either. So I'm not sure why you think that's part of some requirement for the scripts, if we're comparing the two.

Edit: But in the case of this specific repo... you delete the LXC or VM that you created.

[–] Saik0Shinigami@lemmy.saik0.com 145 points 3 months ago (3 children)

Well, the assumption is that the GrapheneOS team will be able to maintain non-store app installs. There's recent news that Google is no longer providing update packages the way they used to, which will make it harder for Graphene to update things too.

We can't assume that Google's next update won't functionally block GrapheneOS as well.

[–] Saik0Shinigami@lemmy.saik0.com 11 points 3 months ago (14 children)

There is no functional difference between piping a script and running an AUR or other user-repository install.
