tal

joined 2 years ago
[–] tal@lemmy.today 3 points 2 months ago* (last edited 2 months ago)

It definitely is bad, but it may not be as bad as I thought above.

It sounds like they might actually just be relying on certificates pre-issued by a (secured) CA for specific hosts to MITM Web traffic to those hosts, and they might not be able to MITM all TLS traffic across the board (i.e. their appliance doesn't get access to the internal CA's private key). I'm not sure whether that's the case (that's just from a brief skim, and I'm not gonna come up to speed on their whole system for this comment), but if it is, then you'd probably still be able to attack a lot of traffic going to theoretically-secured internal servers if you manage to get into a customer network and are able to see traffic (which compromising the F5 software updates would also potentially permit, unfortunately), but hopefully you wouldn't be able to hit, say, their VPN traffic.

[–] tal@lemmy.today 14 points 2 months ago* (last edited 2 months ago) (6 children)

F5 said a “sophisticated” threat group working for an undisclosed nation-state government had surreptitiously and persistently dwelled in its network over a “long-term.” Security researchers who have responded to similar intrusions in the past took the language to mean the hackers were inside the F5 network for years.

This could be really bad. F5 produces WAN accelerators, and one feature that those can have is storing X.509 certificates and keys used by corporate internal CAs (things that normally, you'd keep pretty damned secure) on the appliance, to basically "legitimately" perform MITM attacks on traffic internal to corporate networks as part of their normal mode of operation.

Like, if an attacker could compromise F5 Networks and get a malicious software update pushed out to WAN accelerators in the field to exfiltrate critical private keys from companies, that could be bad. You could potentially MITM their corporate VPNs. If you get inside a customer's network, it'd probably let you get past a lot of their internal security.

kagis

Yeah, it sounds like that is exactly what they hit. The "BIG-IP" stuff apparently does this:

During that time, F5 said, the hackers took control of the network segment the company uses to create and distribute updates for BIG IP, a line of server appliances that F5 says is used by 48 of the world’s top 50 corporations

https://techdocs.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm-implementations-11-5-1/10.html

Managing Client and Server HTTPS Traffic using a Self-signed Certificate

One of the ways to configure the BIG-IP system to manage SSL traffic is to enable both client-side and server-side SSL termination:

  • Client-side SSL termination makes it possible for the system to decrypt client requests before sending them on to a server, and encrypt server responses before sending them back to the client. This ensures that client-side HTTPS traffic is encrypted. In this case, you need to install only one SSL key/certificate pair on the BIG-IP system.
  • Server-side SSL termination makes it possible for the system to decrypt and then re-encrypt client requests before sending them on to a server. Server-side SSL termination also decrypts server responses and then re-encrypts them before sending them back to the client. This ensures security for both client- and server-side HTTPS traffic. In this case, you need to install two SSL key/certificate pairs on the BIG-IP system. The system uses the first certificate/key pair to authenticate the client, and uses the second pair to request authentication from the server.

This implementation uses a self-signed certificate to authenticate HTTPS traffic.

Well. That...definitely sucks.
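To make the quoted full-proxy setup concrete: client-side plus server-side termination just means the appliance holds two TLS contexts at once, one answering the client with a locally-installed key/certificate pair and one dialing out to the real backend. A minimal sketch using Python's stdlib `ssl` (the cert/key paths are hypothetical and left commented out):

```python
import ssl

# Client-side termination: the appliance itself answers TLS, presenting a
# certificate that the client's trust store (e.g. a corporate-internal CA)
# will accept. This is the key/certificate pair the F5 docs describe.
client_facing = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# client_facing.load_cert_chain("appliance.crt", "appliance.key")  # hypothetical paths

# Server-side termination: after inspecting the decrypted request, the
# appliance opens a second TLS session toward the real backend server and
# re-encrypts, verifying the backend's certificate.
server_facing = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
server_facing.check_hostname = True
server_facing.verify_mode = ssl.CERT_REQUIRED

# Everything passing between the two contexts is plaintext to the appliance,
# which is exactly why the keys stored on it are so sensitive.
```

Anyone who can push code onto that box sits in the middle of both sessions, which is why a compromised update channel is the worst-case scenario here.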

[–] tal@lemmy.today 8 points 2 months ago

For example, it's not only illegal for someone to make and sell known illegal drugs, but it's additionally illegal to make or sell anything that is not the specifically illegal drug but is analogous to it in terms of effect (and especially facets of chemical structure)

Hmm. I'm not familiar with that as a legal doctrine.

kagis

At least in the US (and this may not be the case everywhere), it sounds like there's a law that produces this, rather than a doctrine. So I don't think that there's a general legal doctrine that would automatically apply here.

https://en.wikipedia.org/wiki/Federal_Analogue_Act

The Federal Analogue Act, 21 U.S.C. § 813, is a section of the United States Controlled Substances Act passed in 1986 which allows any chemical "substantially similar" to a controlled substance listed in Schedule I or II to be treated as if it were listed in Schedule I, but only if intended for human consumption. These similar substances are often called designer drugs. The law's broad reach has been used to successfully prosecute possession of chemicals openly sold as dietary supplements and naturally contained in foods (e.g., the possession of phenethylamine, a compound found in chocolate, has been successfully prosecuted based on its "substantial similarity" to the controlled substance methamphetamine).[1] The law's constitutionality has been questioned by now Supreme Court Justice Neil Gorsuch[2] on the basis of Vagueness doctrine.

I guess that it might be possible to pass a similar law for copyright, though.

[–] tal@lemmy.today 2 points 2 months ago

They were forced to sell TikTok's operations in the US.

[–] tal@lemmy.today 3 points 2 months ago

Ah, thanks for the clarification on Lesta.

[–] tal@lemmy.today 6 points 2 months ago* (last edited 2 months ago) (1 children)

You might try to be apolitical, but given that people seem to like echo chambers, if you don't, I bet that a competitor will.

[–] tal@lemmy.today 91 points 2 months ago* (last edited 2 months ago) (6 children)

So, the "don't use copyrighted data in a training corpus" crowd probably isn't going to win the IP argument. And I would be quite surprised if IP law changes to accommodate them.

However, the "don't generate and distribute infringing material" argument is a whole different story. IP holders are on pretty solid ground there. One thing that I am very certain IP law is not going to permit is just passing copyrighted data into a model and then generating and distributing material that would otherwise be infringing. I understand that anime rightsholders often have something of a tradition of letting fan-created material slide, but if generative AI massively lowers the bar to creating content, I suspect that that is likely to change.

Right now, you have generative AI companies saying (maybe legally plausibly) that they aren't the liable ones if a user generates infringing material with their model.

And while you can maybe go after someone who is outright generating and selling material that is infringing, something doesn't have to be commercially sold to be infringing. Like, if LucasArts wants to block for-fun fan art of Luke and Leia and Han, they can do that.

One issue is attribution. Like, generative AI companies are not lying when they say that there isn't a great way to just "reverse" which training corpus data contributed most to an output.

However, I am also very confident that it is very possible to do better than they do today. From a purely black-box standpoint, one possibility would be, for example, to use TinEye-style fuzzy hashing of images and then try to reverse-search a generated image, probably with a fuzzier hash than TinEye uses, to warn a user that they might be generating an image that would be derivative. That won't solve all cases, especially once you get into generative AI producing 3D models (though then you could also maybe do computer vision and a TinEye-equivalent for 3D models).
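As a sketch of the fuzzy-hashing idea (a toy average-hash, not TinEye's actual algorithm, which isn't public): reduce an image to a small grayscale grid, hash it into a bit string, and compare hashes by Hamming distance, flagging generated outputs that land too close to a known work.

```python
def average_hash(pixels):
    """Toy perceptual hash: `pixels` is an 8x8 grid of 0-255 grayscale values.

    Each bit records whether that pixel is brighter than the image's mean,
    so small edits to the image flip only a few bits of the hash.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(a, b):
    """Number of differing bits; a small distance means perceptually similar."""
    return bin(a ^ b).count("1")

# A gradient image, the same image with one pixel blown out, and a blank one.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
tweaked = [row[:] for row in original]
tweaked[0][0] = 255
blank = [[0] * 8 for _ in range(8)]
```

The near-duplicate stays within a couple of bits of the original while the unrelated image lands far away, so a service could warn on matches under some distance threshold against a database of hashed reference works.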

Another complicating factor is that copyright only restricts distribution of derivative works. I can make my own, personal art of Leia all I want. What I can't do is go distribute it. I think (though I don't absolutely know what case law is like for this, especially internationally) that generating images on hardware at OpenAI or whatever and then having them sent to me doesn't count as distribution. Otherwise, software-as-a-service in general, stuff like Office 365, would have major restrictions on working with IP that locally-running software would not. The point is that I expect it should be perfectly legal for me to go to an image generator and generate material as long as I do not subsequently redistribute it, even if it would be infringing had I done so. And the AI company involved has no way of knowing what I'm doing with the material that I'm generating. If they block me from making material with Leia, that's an excessively-broad restriction.

But IP holders are going to want a practical route to go after either the generative AI company producing the material that gets distributed, or the users generating infringing material and then distributing it. AI companies are probably going to say that it's the users, and that's probably correct. The problem, from a rightsholder's standpoint, is that, yeah, they could go after the users before, but if it's a lot cheaper and easier to create the material now, that presents them with practical problems. If any Tom, Dick, or Harry can go out and generate material, they've got a lot more moles to whack in their whack-a-mole game.

And in that vein, an issue that I haven't seen come up is what happens if generative AI companies start permitting deterministic generation of content -- that is, where if I plug in the same inputs, I get the same outputs. Maybe they already do; I don't know, I run my generative AI stuff locally. But supposing you have a scenario like this:

  • I make a game called "Generic RPG", which I sell.

  • I distribute (or sell) DLC for this game. The DLC uses a remote generative AI service to generate art for the game from a set of prompts sold as part of the DLC. No art is distributed as part of the game. Let's say I call it "Adventures A Long Time Ago In A Universe Far, Far Away" or something that doesn't directly run afoul of LucasArts, creates enough distance. And let's set aside trademark concerns, for the sake of discussion. And let's say that the prompts are not, themselves, infringing on copyright (though I could imagine them doing so; let's say that they're sufficiently distant to avoid being derivative works).

  • Every user buys the DLC, and then, on their computer, reconstitutes the images for the game. At least if done purely locally, this should be legal under case law: the GPL specifically depends on the fact that one can combine material locally to produce a derivative work as long as one does not then distribute it, and mods to (copyrighted) games can distribute just the deltas, producing a derivative work when the mod is applied, and that's definitely legal.

  • One winds up with someone selling and distributing what is effectively a "Star Wars" game.
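The mod-style delta mechanism behind the middle step can be sketched in a few lines: the distributor ships only offsets and replacement bytes, and the combination with the copyrighted base only ever happens on the user's machine (the base string here is just a stand-in for content the user already owns).

```python
def apply_delta(base: bytes, delta: list[tuple[int, bytes]]) -> bytes:
    """Reconstitute a derivative work locally from a base the user already
    owns plus a distributed list of (offset, replacement-bytes) patches."""
    out = bytearray(base)
    for offset, replacement in delta:
        out[offset:offset + len(replacement)] = replacement
    return bytes(out)

# The distributor never ships `base`, only `delta`; the combined result
# exists solely on the end user's machine.
base = b"a long time ago in a galaxy"   # stand-in for the user's own copy
delta = [(0, b"A"), (21, b"G")]         # the only thing actually distributed
patched = apply_delta(base, delta)
```

A generative-AI service taking deterministic prompts plays the same structural role as `delta` here: a compact recipe that reconstitutes the work client-side without the work itself being distributed.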

Now, maybe training the model on images of Star Wars content so that it knows what Star Wars looks like isn't, as a single step, creating an infringing work. Maybe distributing the model that knows about Star Wars isn't infringement. Maybe the prompts being distributed, designed to run against that model, are not infringing. Maybe reconstituting the apparently-Star-Wars images in a deterministic fashion via SaaS access to hardware that can run the model is not infringing. But if the net effect is equivalent to distributing an infringing work, my suspicion is that courts are going to be willing to create some kind of legal doctrine that restricts it, if they haven't already.

Now, this situation is kind of contrived, but I expect that people will do it, sooner or later, absent legal restrictions.
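The deterministic-generation premise is just seeded sampling. A sketch, with a trivial stand-in for a real model (real diffusion services often expose a comparable noise-seed parameter; the checksum-based seeding here is purely illustrative):

```python
import random
import zlib

def generate(prompt: str, seed: int, n: int = 4) -> list[int]:
    """Stand-in for a hypothetical image model: derive the entire sampling
    stream from (prompt, seed), so identical inputs always reproduce the
    identical output on anyone's machine."""
    # zlib.crc32 is stable across processes, unlike Python's built-in hash().
    rng = random.Random(zlib.crc32(prompt.encode()) ^ seed)
    return [rng.randrange(256) for _ in range(n)]

# Anyone holding the prompt and seed can reconstitute identical "art"
# without the output bytes themselves ever being distributed.
```

The point is only that prompt plus seed act as a compact, distributable recipe for the output, which is what makes the DLC scenario above workable.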

[–] tal@lemmy.today 35 points 2 months ago (2 children)

Well, we don't live in a bubble. Countries are going to influence people.

the national security risks raised by the Saudi government’s access to and unchecked influence over the sensitive personal information collected from EA’s millions of users

If you don't like that, then maybe tamp down on what game publishers are legally allowed to harvest, and restrict what data can be obtained on various platforms and what can be accessed. It's not as if other game publishers isolate that data from foreign countries; they can sell data or be purchased for their data. China's Tencent bought the studio behind Oxygen Not Included a while back, and I'd be more concerned about national security regarding China than Saudi Arabia. 1C is a Russian publisher that publishes some major milsim games. The Russian state seized control of Lesta Group, the World of Tanks publisher, a while back. Steam games don't normally run in isolation (outside something like Flatpak on Linux, which provides a limited amount of isolation). If their software is on your PC, they have read and write access to the data on your PC.

[–] tal@lemmy.today 2 points 2 months ago

Yeah, not disagreeing with him.

[–] tal@lemmy.today 8 points 2 months ago (6 children)

I think that if I were a Chinese multinational company and it became a problem, I'd probably just set up an R&D office abroad in a suitable country that doesn't have the same degree of political resistance.

Probably still slightly bad for China, but I don't think that it necessarily is going to be some insurmountable problem for Chinese firms.

At this scale of immigration, what matters is going to be the individual's skillset. They aren't going to measurably bolster the country's population. Doesn't really matter that much whether they settle in China and raise kids and such.

[–] tal@lemmy.today 5 points 2 months ago* (last edited 2 months ago)

My guess is that the administration backs down. Maybe they could afford to lose some of those outlets, but if they can't even get Fox News and Newsmax onboard, they're basically just shutting down their media coverage.

EDIT: Also, I'm amazed that the administration managed to dick things up to that degree. I don't have a very high opinion of Hegseth, but if there's one thing that you'd think that his experience would be relevant for, you'd think that he'd at least be able to handle media relations with Fox News. The guy spent the last decade there.

[–] tal@lemmy.today 2 points 2 months ago* (last edited 2 months ago) (2 children)

You probably don't want Iran to have jurisdiction over your dot-com.

https://en.wikipedia.org/wiki/Capital_punishment_in_Iran

Capital punishment is a legal penalty in Iran.[2] The list of crimes punishable by death includes murder; rape; child molestation; homosexuality; drug trafficking; armed robbery; kidnapping; terrorism; burglary; incest; fornication; adultery; sodomy; sexual misconduct; prostitution;[3][4] plotting to overthrow the Islamic government; political dissidence; sabotage; arson; rebellion; apostasy; blasphemy; extortion; counterfeiting; smuggling; recidivist consumption of alcohol; producing or preparing food, drink, cosmetics, or sanitary items that lead to death when consumed or used; producing and publishing pornography; using pornographic materials to solicit sex; capital perjury; recidivist theft; certain military offences (e. g., cowardice, assisting the enemy); "waging war against God"; "spreading corruption on Earth"; espionage; and treason.[5][6] Iran carried out at least 977 executions in 2015, at least 567 executions in 2016,[7] and at least 507 executions in 2017.[8] In 2018 there were at least 249 executions, at least 273 in 2019, at least 246 in 2020, at least 290 in 2021, at least 553 in 2022, at least 834 in 2023,[9] and at least 901 executions in 2024.[10] In 2023, Iran was responsible for 74% of all recorded executions in the world, with the UN confirming that at least 40 people were executed in one week in 2024.

Frankly, 4chan users or operators would probably have violated some of those, were they under jurisdiction of Iranian law.
