this post was submitted on 11 Feb 2026
273 points (98.2% liked)

Technology


In the days after the US Department of Justice (DOJ) published 3.5 million pages of documents related to the late sex offender Jeffrey Epstein, multiple users on X have asked Grok to “unblur” or remove the black boxes covering the faces of children and women in the images, redactions that were meant to protect their privacy.

top 50 comments
[–] LiveLM@lemmy.zip 3 points 50 minutes ago

I am so glad I no longer interact with that dumpster fire of a social network. It's like the Elon takeover and the monetization program brought every weirdo in the world out of the woodwork.

[–] aeration1217@lemmy.org 5 points 1 hour ago

Sounds about right for X users

[–] Dyskolos@lemmy.zip 26 points 9 hours ago* (last edited 9 hours ago) (1 children)

late sex offender Jeffrey Epstein

I'm so done with all the whitewashing. "Sex offender" sounds like someone who merely behaved badly during consensual sex. What this prick was is a pedophile. A child rapist. A kid abuser and rapist. But certainly no "late financier" or whatever else the media chose over the facts.

[–] pinball_wizard@lemmy.zip 1 points 2 hours ago

Also a slaver and child abductor.

[–] Paranoidfactoid@lemmy.world 40 points 12 hours ago (4 children)

How do these AI models generate nude imagery of children without having been trained with data containing illegal images of nude children?

[–] AnarchistArtificer@slrpnk.net 32 points 11 hours ago

The datasets they are trained on do in fact include CSAM. These datasets are so huge that it easily slips through the cracks. It's usually removed whenever it's found, but I don't know how that actually affects models that have already been trained on the data — to my knowledge, it's not possible to selectively "untrain" a model, and it would need to be retrained from scratch. Plus it occasionally crops up in the news that new CSAM has been found in training data.

It's one of the many, many problems with generative AI

[–] calcopiritus@lemmy.world 2 points 6 hours ago (2 children)

Tbf it's not needed. If it can draw children and it can draw nude adults, it can draw nude children.

Just like it doesn't need to have trained on purple geese to draw one. It just needs to know how to draw purple things and how to draw geese.

[–] WraithGear@lemmy.world 6 points 6 hours ago* (last edited 6 hours ago) (1 children)

That's not true. A child and an adult are not the same, and an AI can't do such things without the training data. It's the full wine glass problem. And the only reason THAT example got fixed, after it was used to show the methodology problem with AI, is that they literally trained the model on that specific thing to cover it up.

[–] Jarix@lemmy.world 3 points 4 hours ago (1 children)

I'm not saying it wasn't trained on CSAM, and I'm not defending any AI.

But your point isn't correct

What prompts you use and how you request changes can get the same results. Clever prompts already circumvent many hard-wired protections. It's a game of whack-a-mole, and every new iteration of an AI will require different methods to bypass those protections.

If you ask it in the right way, it will do whatever the prompt tells it to do.

>!You can't tell it to make a nude image of a child, I assume, but you can tell it to make the subject of the last prompt's image 60% smaller and adjust it as necessary to make it believable.!< That probably shouldn't work, but I don't put anything past these assholes.

It doesn't need to have been trained on the actual images/data if you can just tell it how to get the results you want by using different language that it hasn't been told to reject.

The AI doesn't know what it's doing; it's simply running points through its system and outputting the results.

[–] MathiasTCK@lemmy.world 1 points 3 hours ago

It still seems pretty random. When they say they've fixed it so it won't do something, all they likely did was reduce the probability, so we still get screenshots showing what it sometimes lets through.

[–] slampisko@lemmy.world 2 points 6 hours ago (1 children)

That's not exactly true. I don't know about today, but about a year ago I remember reading an article about an image generation model that, across many attempts, couldn't generate a wine glass full to the brim, because all the wine glasses it was trained on were half-filled.

[–] calcopiritus@lemmy.world 1 points 6 hours ago (1 children)

Did it have any full glasses of water? According to my theory, it has to have data for both "full" and "wine".

[–] vala@lemmy.dbzer0.com 1 points 4 hours ago

Your theory is more or less incorrect. It can't interpolate as broadly as you think it can.

[–] RedGreenBlue@lemmy.zip 8 points 11 hours ago

Can't ask them to sort that out. Are you anti-ai? That's a crime! /s

[–] Senal@programming.dev 3 points 11 hours ago

Easy answer is: they don't.

Though that's just the one admitting to it.

A slightly more nuanced answer is: it probably depends. There's likely some inference to be made between age ranges, but my guess is that the result would be sub-par, given that it sometimes struggles to reproduce images it has a tonne of actual data for.

[–] ToTheGraveMyLove@sh.itjust.works 73 points 14 hours ago* (last edited 14 hours ago) (7 children)

Are these people fucking stupid? AI can't remove something baked into the image. The only way for it to "remove" it is to place a different image over it, but since it has no idea what's underneath, it would literally just be making up a new image that has nothing to do with the content of the original. Jfc, people are morons. I'm disappointed the article doesn't explicitly state that either.
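
A minimal sketch of that point, with made-up pixel values: once a region has been overwritten with black, the redacted image no longer depends on what was there, so there is nothing left for a model to recover.

```python
import numpy as np

# Hypothetical 100x100 grayscale "photo" (random values stand in for a real image).
photo = np.random.randint(0, 256, (100, 100), dtype=np.uint8)

# "Redact" a region by overwriting it with black pixels.
redacted = photo.copy()
redacted[20:60, 30:70] = 0

# The original values in that region are simply gone from the redacted image;
# nothing in it depends on them anymore.
print(redacted[20:60, 30:70].max())                                  # 0 -- every covered pixel is identical
print(np.array_equal(redacted[20:60, 30:70], photo[20:60, 30:70]))   # False
```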

[–] mfed1122@discuss.tchncs.de 3 points 6 hours ago

They think the AI is smart enough to deduce from the pixels around the box what the original face must have looked like, even though there's actually no reason there should be any strict causal relationship between those things.
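
What a model can actually do is inpaint: fill the boxed region with content that blends plausibly into its surroundings. A rough sketch using classical OpenCV inpainting (the file name and box coordinates are hypothetical):

```python
import numpy as np
import cv2

img = cv2.imread("redacted_photo.jpg")          # hypothetical redacted image
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[120:260, 200:340] = 255                    # hypothetical location of the black box

# Inpainting synthesises texture that merely matches the surrounding pixels --
# a plausible fill, not the hidden face.
filled = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("filled_photo.jpg", filled)
```

Generative models do the same job with fancier priors; the output is still invention, not recovery.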

[–] usualsuspect191@lemmy.ca 39 points 13 hours ago* (last edited 12 hours ago) (4 children)

The black boxes would be impossible, but some types of blur keep enough of the original data that they can be undone. There was a pedophile who used a swirl to cover his face in pictures, and investigators were able to unswirl the images and identify him.

With how the rest of it has gone it wouldn't surprise me if someone was incompetent enough to use a reversible one, although I have doubts Grok would do it properly.

Edit: this technique only works for video, but maybe if there are several pictures of the same person all blurred it could be used there too?

https://youtu.be/acKYYwcxpGk
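
For what it's worth, a swirl with known parameters really is roughly invertible, which is what made that case possible. A small sketch assuming scikit-image is available (the sample image and parameters are arbitrary):

```python
import numpy as np
from skimage import data
from skimage.transform import swirl

face = data.camera()  # sample grayscale image bundled with scikit-image

# Apply a swirl "anonymisation" with known strength and radius...
swirled = swirl(face, strength=10, radius=120, preserve_range=True)

# ...then undo it by applying the same swirl with the strength negated.
unswirled = swirl(swirled, strength=-10, radius=120, preserve_range=True)

# Most of the image comes back, minus some interpolation loss.
err = np.abs(unswirled - face.astype(float)).mean()
print(f"mean per-pixel error after the round trip: {err:.1f} out of 255")
```

A black box has no such inverse, because no transform of the covered pixels was ever stored.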

[–] floquant@lemmy.dbzer0.com 3 points 7 hours ago (1 children)

Yeah, but that type of machine learning and the diffusion models used in image genAI are almost completely disjoint.

[–] usualsuspect191@lemmy.ca 1 points 6 hours ago

Agree with you there. Just pointing out that in theory and with the right technique, some blurring methods can be undone. Grok most certainly is the wrong tool for the job.

[–] BarneyPiccolo@lemmy.today 14 points 12 hours ago

Several years ago, authorities were searching for a guy who had been going around the world molesting children, photographing them, and distributing the photos on the Internet. He was often in the photos himself, but he had used some sort of swirl blur to hide his face. The authorities just "unswirled" it, and there was his face, in all those photos of abused children.

They caught him soon after.

[–] Barracuda@lemmy.zip 11 points 12 hours ago (1 children)

A swirl is a distortion that is non-destructive. An anonymity blur averages out pixels over a wide area in a repetitive manner, which destroys information. Would it be possible to reverse? Maybe a little bit, maybe the odd pixel here and there, but there wouldn't be any way to prove the accuracy of those pixels, and there would be massive gaps in the information.
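
To put rough numbers on that (the patch values are made up): an averaging/pixelation blur maps a whole block to one value, so completely different originals produce exactly the same redacted output, and no algorithm can tell which one it started from.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two different hypothetical 8x8 patches: same pixels, entirely different arrangement.
patch_a = rng.integers(0, 256, 64)
patch_b = rng.permutation(patch_a)

# An anonymity-style pixelation reduces each block to its single average value.
print(patch_a.mean(), patch_b.mean())   # identical outputs for different inputs

# 64 values collapsed into 1: the other 63 degrees of freedom are gone for good.
```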

[–] altkey@lemmy.dbzer0.com 9 points 11 hours ago (1 children)

A swirl is destructive, like almost everything in raster graphics that involves recompression, but unswirling it back gives a good approximation at somewhat reduced quality. If the program or the effect's code is known, e.g. they did it in Photoshop, you just drag a slider to the opposite side. Come to think of it, it could make a nice puzzle in an adventure game, or yet another kind of captcha.

[–] Barracuda@lemmy.zip 1 points 5 hours ago

You're right. By "non-destructive" I meant more that it's reversible, depending on factors like the intensity and whether the algorithm is known.

[–] PostaL@lemmy.world 1 points 8 hours ago

Hey! Cut it out! If those people could read, they'd be very upset!

[–] Pyr_Pressure@lemmy.ca 7 points 13 hours ago* (last edited 13 hours ago) (5 children)

There was someone who reported that, due to the incompetence of White House staffers, some of the Epstein files had simply been "redacted" in MS Word by highlighting the text in black, so people were actually able to remove the redactions by converting the PDF back into a Word document and deleting the black highlighting to reveal the text.

Who knows if some of the photos might be the same issue.
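
If that report is accurate, the failure is easy to see in principle: a black highlight drawn over text doesn't remove the text from the PDF's content stream, so any extractor will still return it. A minimal sketch assuming PyMuPDF is installed (the file name is hypothetical):

```python
import fitz  # PyMuPDF

doc = fitz.open("badly_redacted.pdf")   # hypothetical file
for page in doc:
    # get_text() reads the underlying text objects; rectangles or highlights
    # drawn on top of them don't make the text go away.
    print(page.get_text())
```

Flattened image formats are a different story, as the reply below notes: pixels overwritten with black carry no hidden layer to extract.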

[–] KyuubiNoKitsune@lemmy.blahaj.zone 11 points 12 hours ago (2 children)

That's not how images like PNGs or JPEGs work.

[–] pkjqpg1h@lemmy.zip 132 points 17 hours ago (6 children)

unblur the face with 1000% accuracy

They have no idea how these models work :D

[–] pkjqpg1h@lemmy.zip 180 points 17 hours ago (2 children)
[–] cupcakezealot@piefed.blahaj.zone 85 points 17 hours ago (1 children)

biblically accurate cw casting

[–] Ulrich@feddit.org 3 points 9 hours ago

CW? The TV show?

[–] TheBat@lemmy.world 29 points 16 hours ago

Barrett O'Brien

[–] criss_cross@lemmy.world 17 points 13 hours ago (1 children)

It’s the same energy as “don’t hallucinate and just say if you don’t know the answer”

[–] pkjqpg1h@lemmy.zip 7 points 12 hours ago

and don't forget "make no mistakes" :D

[–] annoyed_onion@lemmy.world 46 points 17 hours ago

Though it is 2026. Who's to say Elon didn't feed the unredacted files into Grok while out of his face on ket 🙃

[–] otter@lemmy.ca 31 points 17 hours ago* (last edited 12 hours ago) (4 children)

It feels like being back on the playground

"nuh uh, my laser is 1000% more powerful"

"oh yea, mine is ~~googleplex~~ googolplex percent more powerful"

[–] Armand1@lemmy.world 11 points 17 hours ago

Or percentages

[–] nymnympseudonym@piefed.social 19 points 15 hours ago (1 children)

I doubt any of these people are accessing X over Tor. Their accounts and IPs are known.

In a sane world, they'd be prosecuted.
In MAGAMERICA, they are protected by the Spirit of Epstein

[–] clay_pidgin@sh.itjust.works 4 points 12 hours ago

What crime do you imagine they would be committing?

I don't know what they hope to gain by seeing the kid's face, unless they think they can match it up with an Epstein family member or something (seems unlikely to be their goal).

[–] melsaskca@lemmy.ca 10 points 14 hours ago (1 children)

Of course they are. Who's left on Twitter nowadays? Elon acolytes?

[–] SpicyLizards@reddthat.com 6 points 15 hours ago

And Grok, being trained on Elon's web history, doesn't need to be asked to find, let alone unblur, said images.
