litchralee

joined 2 years ago
[–] litchralee@sh.itjust.works 2 points 29 minutes ago* (last edited 28 minutes ago)

At the very minimum, this type of mail would incur the $0.46 non-machinable surcharge because it's smaller than one of the minimum USPS dimensions for postcards, namely that one side has to be at least 5 inches (exactly 127 mm). You may also have issues with it being too floppy for basic handling by the postal carrier, especially if it was previously left in a warm mailbox.

But perhaps a more practical issue may arise first: will stamps even adhere to the wrapping of a Kraft Cheese single? If you cannot affix postage, that's the most immediate impediment.

[–] litchralee@sh.itjust.works 5 points 23 hours ago* (last edited 23 hours ago)

no rubber for seals

Modern synthetic rubber would indeed be unavailable, but I vaguely recall reading something to the effect that early steam engines used leather seals or something like that.

But yeah, there are a lot of missing prerequisites for machinery. Even simple rotary power -- like from a windmill or waterwheel -- would suffer from being incapable of long-distance transmission. Such a limit means the interior lands of a country away from a river or coast would remain unusable for development beyond basic agriculture. No railroads, no A/C, no Phoenix Arizona.

[–] litchralee@sh.itjust.works 3 points 23 hours ago* (last edited 23 hours ago)

should

when it comes to legality

This needs clarification. Are you asking about the legal status of Character AI's chatbot, and how its output would be treated w.r.t. intellectual property rights? Or about the ethical or moral questions raised by machine-generated content, and whether society or law should adapt to answer those questions?

The former is an objective inquiry, which can be answered based on the current laws for a given jurisdiction. The latter is an open-ended, subjective question for which there is no settled consensus, let alone a firm answer one way or another.

I decline to answer the latter, but I think there's only one answer for the objective law question. IANAL, but existing fanfiction does not imbue its author with rights over characters from another author, at least in the USA. But fanfiction authors do retain copyright over their own contributions.

So if an author writes about the 1920s Mickey Mouse character (now in the public domain) but set in a gay space communist utopia, the plot of that novel would be the author's intellectual property, but not the character itself, which remains in the public domain. However, any character development that occurs in the novel would be the author's property, insofar as those traits didn't exist before.

What aspects of this situation do you envision would require different treatment just because it's the output of a chatbot? Barring specific language in a Terms of Use agreement that transfers ownership to the parent company of the Character AI chatbot, machines -- and crested macaques -- are not eligible to own intellectual property. The author would be the human being who set into motion the conditions for the machine to produce a particular output.

In conventional writing, an author does not relinquish ownership to Xerox Corporation just because the final manuscript was printed using a Xerox-made printer. But using a machine to help produce a work will not excuse plagiarism or intellectual property violations, which will accrue against the human being committing that act.

I express no opinion on whether intellectual property is still a net positive for society, or not. But I will very clearly lay out the difference between objective conclusions from the law as-written, versus any subjective opinions on how the law ought to be reformed, if at all. After all, what is not understood cannot be effectively changed.

[–] litchralee@sh.itjust.works 0 points 3 days ago* (last edited 3 days ago) (2 children)

Obligatory reference to desire paths: !desire_paths@sh.itjust.works

Traffic -- under foot or otherwise -- is one way to keep a path in decent shape.

[–] litchralee@sh.itjust.works 2 points 1 week ago* (last edited 1 week ago)

I agree with this comment, and would suggest going with the first solution (NAT loopback, aka NAT hairpin) rather than split-horizon DNS. I say this even though I have a strong dislike of NAT (and would prefer to see networks using flat IPv6 addresses, but that's a different topic). It should also be fairly quick to configure the hairpin on your router.

Specifically, problems arise with split-horizon DNS because the same hostname might resolve to two different results, depending on which DNS nameserver is queried. This is distinct from some corporate-esque DNS nameservers that refuse to answer external requests but do answer internal queries. By having no "single source of truth" (SSOT) for what a hostname should resolve to, split-horizon will inevitably make future debugging harder. And that's on top of debugging NAT issues.
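To illustrate the SSOT problem, here's a toy sketch (hypothetical hostname and zone data, not a real resolver) of how the same name gives different answers depending on which nameserver's view answers the query:

```python
# Two nameservers' views of the same hostname under split-horizon DNS.
# All names and addresses below are made-up placeholders.
INTERNAL_ZONE = {"media.example.com": "192.168.1.50"}   # LAN resolver's view
EXTERNAL_ZONE = {"media.example.com": "203.0.113.10"}   # public resolver's view

def resolve(hostname: str, zone: dict) -> str:
    """Return the A record for hostname from one zone's point of view."""
    return zone[hostname]

inside = resolve("media.example.com", INTERNAL_ZONE)
outside = resolve("media.example.com", EXTERNAL_ZONE)

# No single source of truth: "media.example.com is down" now requires an
# extra debugging question -- which resolver did the failing client use?
print(inside == outside)  # False
```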

Plus, DNS isn't a security feature unto itself: successful resolution of internal hostnames shouldn't increase security exposure, since a competent firewall would block access anyway. Some might suggest that DNS queries can reveal internal addresses to an attacker, but that's the same faulty argument that suggests ICMP pings should be blocked; they shouldn't be.

To be clear, ad-blocking DNS servers don't suffer from the ails of split-horizon described above, because they're intentionally declining to give a DNS response for ad-hosting hostnames, rather than giving a different response. But even if they did, one could argue the point of ad-blocking is to block adware, so we don't really care if SSOT is diminished for those hostnames.

[–] litchralee@sh.itjust.works 1 points 2 weeks ago (1 children)

which means DNS entries in a domain, and access from the internet

The latter is not a requirement at all. Plenty of people have publicly-issued TLS certs for domain-named services that aren't exposed to the public internet, or aren't even using HTTP(S). If using LetsEncrypt, the DNS-01 challenge method would suffice; one can even issue a wildcard certificate covering all subdomains, so additional certificate issuance is not required.
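One caveat about wildcards worth knowing: per RFC 6125, the `*` covers exactly one subdomain label, not the bare domain and not nested labels. A small sketch of that matching rule (hostnames are hypothetical):

```python
# Simplified wildcard certificate name matching, per RFC 6125: the "*"
# stands in for exactly one leading DNS label.
def wildcard_matches(cert_name: str, hostname: str) -> bool:
    if not cert_name.startswith("*."):
        return cert_name.lower() == hostname.lower()
    suffix = cert_name[2:].lower()
    labels = hostname.lower().split(".")
    # Strip exactly one leading label; the remainder must equal the suffix.
    return len(labels) >= 2 and ".".join(labels[1:]) == suffix

print(wildcard_matches("*.example.com", "jellyfin.example.com"))  # True
print(wildcard_matches("*.example.com", "example.com"))           # False
print(wildcard_matches("*.example.com", "a.b.example.com"))       # False
```

So a cert for `*.example.com` covers `jellyfin.example.com` but not `example.com` itself; the bare domain usually needs to be listed as an additional SAN.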

After acquiring a domain, it can be pointed to one of many free nameservers that provide an API, which an ACME script can update for automatic renewal of the LetsEncrypt certificate using DNS-01. dns.he.net is one such example.

OP has been given a variety of options, each of which come with their own tradeoffs. But public access to Jellyfin just to get a public cert is not a necessary tradeoff that OP needs to make.

[–] litchralee@sh.itjust.works 3 points 2 weeks ago* (last edited 2 weeks ago)

Not "insecure" in the sense that they're shoddy with their encryption, no. But being free could mean their incentives are not necessarily aligned with those of the free users.

In security speak, the CIA triad stands for Confidentiality, Integrity, and Availability. I'm not going to unduly impugn Proton VPN's credentials on data confidentiality and data integrity, but availability can be a legit security concern.

For example, if push comes to shove and Proton VPN is hit with a DDoS attack, would free tier users be the first to be disconnected to free up capacity? Alternatively, suppose the price for IP transit shoots through the roof due to weird global economics and ProtonVPN has to throttle the free tier to 10 Mbps. All VPN operators share these possibilities, but however well-meaning Proton VPN and the non-profit behind them are, economic factors can force changes that aren't great for the free users.

Now, the obvious solution at such a time would be to switch to being a paid customer. And that might be fine for lots of customers, if it ever comes to pass. But Murphy's Law being what it is, this scenario would play out exactly when users are least able to prepare for it, possibly leading to some amount of unavailability.

So yes, a holistic analysis of failure points is precisely what proper security calls for. Proton VPN free tier may very well be inappropriate. But whether it rises to a serious concern or just warrants an "FYI", that will vary based on individual circumstances.

[–] litchralee@sh.itjust.works 2 points 2 weeks ago (5 children)

Don't. OP already said in the previous post that they only need Jellyfin access within their home. The Principle of Least Privilege tilts in favor of keeping Jellyfin off the public Internet. Even if Jellyfin were flawless -- and no program is -- the only benefit that accrues to OP is that the free tier of ProtonVPN can access Jellyfin.

Opening a large attack surface for such a modest benefit is letting the tail wag the dog. It's adding a kludge to work around a different kludge, the latter being ProtonVPN's very weird paid tier.

[–] litchralee@sh.itjust.works 9 points 2 weeks ago* (last edited 2 weeks ago) (7 children)

I previously proffered some information in the first thread.

But there's something I wish to clarify about self-signed certificates, for the benefit of everyone. Irrespective of whichever certificate store that an app uses -- either its own or the one maintained by the OS -- the CA Browser Forum, which maintains the standards for public certificates, prohibits issuance of TLS certificates for reserved IPv4 or IPv6 addresses. See Section 4.2.2.

This is because those addresses will resolve to different machines on different networks. Whereas a certificate for a global-scope IP address is fine because it should resolve to the same destination. If certificate authorities won't issue certs for private IP addresses, there's a good chance that apps won't tolerate such certs either. Nor should they, for precisely the reason given above.
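Python's stdlib `ipaddress` module captures roughly this same distinction between reserved and globally-routable addresses; a quick sketch (the eligibility function is my own illustrative shorthand, not anything from the CA/Browser Forum text):

```python
# Roughly the line the CA/Browser Forum rules draw: public CAs will not
# issue TLS certs for addresses that aren't globally routable.
import ipaddress

def cert_eligible(addr: str) -> bool:
    """True if the address is global-scope, i.e. a public CA could in
    principle issue a TLS certificate naming it."""
    return ipaddress.ip_address(addr).is_global

print(cert_eligible("192.168.1.10"))   # False: RFC 1918 private space
print(cert_eligible("fd12:3456::1"))   # False: IPv6 unique local address
print(cert_eligible("8.8.8.8"))        # True: global unicast
```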

A proper self-signed cert -- either for a domain name or a global-scope IP address -- does not create any MITM issues as long as the certificate was manually confirmed the first time and added to the trust store, either in-app or in the OS. Thereafter, only a bona fide MITM attack would raise an alarm, the same as if a MITM attacker tries to impersonate any other domain name. SSH is the most similar, where trust-on-first-connection is the norm, not the outlier.
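The trust-on-first-use model described above can be sketched in a few lines (hostnames and certificate bytes are placeholders; a real client would pin the fingerprint of the actual DER-encoded certificate):

```python
# Trust-on-first-use (TOFU) pinning, the same model SSH uses: pin the
# certificate fingerprint on first (manually confirmed) contact, then
# alarm only if it later changes.
import hashlib

trust_store: dict[str, str] = {}  # hostname -> pinned SHA-256 fingerprint

def check_cert(hostname: str, cert_der: bytes) -> str:
    fp = hashlib.sha256(cert_der).hexdigest()
    if hostname not in trust_store:
        trust_store[hostname] = fp      # first use: pin after manual check
        return "pinned"
    if trust_store[hostname] == fp:
        return "ok"                     # same cert as every prior visit
    return "ALARM: possible MITM"       # cert changed unexpectedly

print(check_cert("nas.home.arpa", b"self-signed cert #1"))  # pinned
print(check_cert("nas.home.arpa", b"self-signed cert #1"))  # ok
print(check_cert("nas.home.arpa", b"attacker cert"))        # ALARM: possible MITM
```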

There are safe ways to use self-signed certificates. People should not discard that option so wantonly.

[–] litchralee@sh.itjust.works 0 points 2 weeks ago (1 children)

Physical wire tapping would be mostly mitigated by setting every port on the switch to be a physical vlan

Can you clarify on this point? I'm not sure what a "physical VLAN" would be. Is that like only handling tagged traffic?

I'm otherwise in total agreement that the threat model is certainly not typical. But I can imagine a scenario like a college dorm where the L2 network is owned by a university, and thus considered "hostile" to OP somehow. OP presented their requirements, so good advice has to at least try to come up with solutions within those parameters.

[–] litchralee@sh.itjust.works 3 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

I had a small typo where "untrusted" was written as "I trusted". That said, I think we're suggesting different strategies to address OP's quandary, and either (or both!) would be valid.

My suggestion was for encrypted L3 tunneling between end-devices which are trusted, so that even an untrustworthy L2 network would present no issue. With technologies like WireGuard, this isn't too hard to do for mobile phone clients, and it's well supported for Linux clients.
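For a sense of how little configuration that takes, here's a minimal WireGuard config sketch for one trusted end-device (every key, address, and hostname below is a placeholder, not a working value):

```ini
# /etc/wireguard/wg0.conf -- all values are illustrative placeholders
[Interface]
PrivateKey = <this-device-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <other-device-public-key>
# Route only the tunnel subnet through the tunnel; the hostile L2
# network carries nothing but encrypted WireGuard UDP packets.
AllowedIPs = 10.8.0.0/24
Endpoint = peer.example.net:51820
PersistentKeepalive = 25
```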

If I understand your suggestion, it is to improve the LAN so that it can be trusted, by way of segmentation into VLANs which separate the trusted devices from the rest. The problem I see with this is that per-port VLANs alone do not address the possibility of physical wire-tapping, which I presumed was why OP does not trust their own LAN. Perhaps they're running cable through a space shared with other tenants, or something like that. VLANs help, but MACsec encryption on the wire paired with 802.1x device certificate for authentication is the gold standard for L2 security.

But seeing as that's primarily the domain of enterprise switches, the L3 solution in software using WireGuard or other tunneling technologies seems more reasonable. That said, the principle of Defense In Depth means both should be considered.
