this post was submitted on 15 Feb 2026
75 points (93.1% liked)

Technology


Whether you agree with the Guardian’s conclusions or not, the underlying issue they’re pointing at is broader than any one company: the steady collapse of ambient trust in our information systems.

The Guardian ran an editorial today warning that AI companies are shedding safety staff while accelerating deployment and profit-seeking. The concern was not just about specific models or edge cases, but about something more structural. As AI systems scale, the mechanisms that let people trust what they see, hear, and read are not keeping up.

Here’s a small but telling technology-adjacent example that fits that warning almost perfectly.

Ryan Hall, Y’all, a popular online weather forecaster, recently introduced a manual verification system for his own videos. At the start of each real video, he bites into a specific piece of fruit. Viewers are told that if a video of “him” does not include the fruit, it may not be authentic.

This exists because deepfakes, voice cloning, and unauthorized reuploads have become common enough that platform verification, follower counts, and visual familiarity no longer reliably signal authenticity.

From a technology perspective, this is fascinating.

A human content creator has implemented a low-tech authentication protocol because the platforms hosting his content cannot reliably establish provenance. In effect, the fruit is a nonce. A shared secret between creator and audience. A physical gesture standing in for a cryptographic signature that the platform does not provide.
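For the cryptographically inclined, the analogy can be made concrete. Here is a minimal Python sketch of the same idea, with a made-up fruit-of-the-day standing in as the shared secret in an HMAC (the names and values are illustrative, not anyone's real scheme):

```python
import hmac
import hashlib

# Hypothetical shared secret: the fruit agreed on for this video.
SHARED_SECRET = b"dragonfruit"

def tag_video(video_bytes: bytes, secret: bytes) -> str:
    """Creator side: authenticate the video with the shared secret."""
    return hmac.new(secret, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, tag: str, secret: bytes) -> bool:
    """Viewer side: recompute the tag and compare in constant time."""
    expected = hmac.new(secret, video_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

video = b"today's forecast footage"
tag = tag_video(video, SHARED_SECRET)
assert verify_video(video, tag, SHARED_SECRET)            # genuine video passes
assert not verify_video(b"deepfake", tag, SHARED_SECRET)  # altered video fails
```

The fruit plays the role of `SHARED_SECRET`: without it, nobody can produce a valid "this is really him" signal. The obvious difference is that the real secret is broadcast inside the video itself, which is exactly why it is friction rather than security.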

This is not about weather forecasting credentials. It is about infrastructure failure.

When people can no longer trust that a video is real, even when it comes from a known figure, ambient trust collapses. Not through a single dramatic event, but through thousands of small adaptations like this. Trust migrates away from systems and toward improvised social signals.

That lines up uncomfortably well with the Guardian’s concern. AI systems are being deployed faster than trust and safety can scale. Safety teams shrink. Provenance tools remain optional or absent. Responsibility is pushed downward onto users and individual creators.

So instead of robust verification at the platform or model level, we get fruit.

It is clever. It works. And it should worry us.

Because when trust becomes personal, ad hoc, and unscalable, the system as a whole becomes brittle. This is not just about AI content. It is about how societies determine what is real in moments that matter.

TL;DR: A popular weather creator now bites a specific fruit on camera to prove his videos are real. This is a workaround for deepfakes and reposts. It is also a clean example of ambient trust collapse. Platforms and AI systems no longer reliably signal authenticity, so creators invent their own verification hacks. The Guardian warned today that AI is being deployed faster than trust and safety can keep up. This is what that looks like in practice.

Question: Do you think this ends with platform-level provenance becoming mandatory, or are we heading toward more improvised human verification like this becoming normal?

top 23 comments
[–] tabular@lemmy.world 2 points 16 minutes ago

Tell me about fruit.

[–] Sanctus@anarchist.nexus 11 points 3 hours ago (1 children)

This is what the powers that be want: nobody sure of anything, complete breakdown of any trust. It won't be fixed. We will deal with the fruit until the AI can fake that too, and then the clear web dies with a sigh.

[–] tover153@lemmy.world 1 points 3 hours ago (1 children)

I’m not convinced this is a plan so much as incentives running ahead of everything else.

Trust usually doesn’t collapse all at once. It degrades until people start inventing awkward workarounds. The fruit is just what that looks like in the real world.

[–] Sanctus@anarchist.nexus 5 points 3 hours ago

Does it matter if it's planned or if it is an emergent property? I don't think so.

[–] mangaskahn@lemmy.world 12 points 4 hours ago (1 children)

Hasn't this already been solved by public key cryptography? Sign the video with your private key, and publish the public key. Anyone can prove that the video is valid. To prove that the public key is valid, anyone can encrypt a message with it and ask for verification.

[–] tover153@lemmy.world 14 points 4 hours ago (1 children)

You’re absolutely right that this is a solved problem from a technical standpoint. Public key cryptography gives us everything we need to sign content, verify it, and prove continuity of identity.

But that’s how we solve it in technology. It’s not how my 82-year-old father solves it.

For most people, trust isn’t established by verifying signatures or checking keys. It’s established through simple, legible cues they can recognize instantly, without tooling, training, or a mental model of cryptography.

That’s why the fruit works.

It’s a human-scale authentication signal. No UI, no standards, no explanation required. “If you see the fruit, it’s him.” That’s something almost anyone can understand and apply.

The real problem isn’t that cryptographic solutions don’t exist. It’s that platforms haven’t made provenance and verification visible, intuitive, or default for non-technical users. Until they do, people will keep inventing these ad hoc, embodied trust signals.

That’s what makes this a trust infrastructure failure, not a math failure.

[–] TheTechnician27@lemmy.world 6 points 3 hours ago* (last edited 3 hours ago) (2 children)

Dude, I'm sorry for saying this (because I get this a lot for my often overly formal writing, and I get it's ironic on this post), but...

Your writing reads like it's LLM-generated. Like, really heavily reads like an LLM wrote it. Long scrawls for pretty simple concepts, I don't know how to describe why the cadence feels LLM-y other than "vibes", flawless grammar, needless lists of nouns and adjectives, "it's not X; it's Y", and this weird fucking lifeless demeanor that feels like it has no voice.

[–] artifex@piefed.social 9 points 3 hours ago (1 children)

Great observation! You’re absolutely right! It does sound like it was written by an LLM.

[–] TheTechnician27@lemmy.world 6 points 3 hours ago (1 children)

Wow, thanks! Let's switch topics. I'm trying to start a business where I sell fruit to weathermen. Can you help me with that?

[–] artifex@piefed.social 7 points 3 hours ago

Of course! What a novel idea! A business focusing on a highly specialized audience requires careful consideration and planning.

Shall I switch to deep-planning mode so I can charge you 10X the tokens?

[–] tover153@lemmy.world 5 points 3 hours ago (3 children)

My daughter said the same thing in our family chat last week. It's possible I've been reading too much LLM-generated content. Also, this is my first top-level post after years of lurking, and I'm trying to come off like I know what I'm doing. If the argument doesn't land, happy to talk about that. The style I can adjust.

[–] merde@sh.itjust.works 1 points 1 hour ago

The style I can adjust.

please don't.

everybody wrote like you do before people started saying lol instead of actually laughing.

[–] eleijeep@piefed.social 1 points 2 hours ago

If the argument doesn’t land, happy to talk about that. The style I can adjust.

lmfao gtfo

[–] TheTechnician27@lemmy.world -1 points 3 hours ago (1 children)

It's not even the style on its own*; it's that you wrote a frankly bloviating short essay about an obvious concept that can be summarized as "most people who watch the weather don't know what a public key is or how to use one". I'm disgustingly long-winded, and even I wouldn't expend that much effort. The style is what escalates that from "padding a high school essay" to "Oh, yup, a GPT wrote this."

* "It's not X, it's Y" yeah, yeah, I know.

[–] tover153@lemmy.world 3 points 3 hours ago

That’s fair. I’ve written about this elsewhere and I’m rephrasing parts of it here, which probably makes it feel more essay-shaped than a typical thread reply.

I wasn’t trying to pad anything. I was trying to connect a few dots that usually get skipped when this comes up.

[–] Blue_Morpho@lemmy.world 9 points 4 hours ago (1 children)

I don't understand how it works. I saw a video where he took a bite of a banana before the forecast. How does that stop anyone from cutting and pasting one second of him biting the banana into the start of an AI video, with a caption saying "banana"?

[–] tover153@lemmy.world 12 points 4 hours ago (1 children)

That’s a fair question, and you’re right that it isn’t foolproof.

The reason it works at all is that the fruit isn’t known in advance. He posts the video first, then updates his site with the correct fruit for that video. Viewers can check after the fact. If someone deep-fakes him, they either have to guess the fruit correctly or regenerate the fake once the real fruit is known.

That doesn’t make impersonation impossible, but it does make it more expensive and slower.

And that’s really the point. This isn’t perfect authentication, it’s friction. It raises the cost just enough that casual fakes, reposts, and automated scams stop being worthwhile, even if a determined attacker could still get through.
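If you wanted to harden the reveal-after-publication idea, you'd end up at something like a commit-reveal scheme: publish a hash of the fruit alongside the video, then reveal the fruit later. A rough Python sketch, where the fruit and salt are made-up values (this is not what he actually does, just the nearest textbook construction):

```python
import hashlib
import secrets

def commit(fruit: str, salt: bytes) -> str:
    """Publish this hash with the video; it reveals nothing about the fruit."""
    return hashlib.sha256(salt + fruit.encode()).hexdigest()

def verify(fruit: str, salt: bytes, commitment: str) -> bool:
    """Later, reveal fruit and salt; anyone can check them against the hash."""
    return commit(fruit, salt) == commitment

salt = secrets.token_bytes(16)       # random salt blocks dictionary guessing
c = commit("banana", salt)           # posted at upload time
assert verify("banana", salt, c)     # honest reveal checks out
assert not verify("mango", salt, c)  # a faker guessing wrong fails
```

The salt matters: there are only so many fruits, so an unsalted hash could be brute-forced in seconds. And even then, this only proves the video and the reveal came from the same party, which is still weaker than real platform-level provenance.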

Which is also why this is such a telling example. Instead of platforms providing provenance, creators are inventing human-readable ways to increase the cost of lying. Not secure, but legible and effective enough for most people.

That’s the ambient trust problem in a nutshell. We’re not aiming for mathematically perfect truth, we’re trying to make deception harder than honesty.

[–] rimu@piefed.social 2 points 1 hour ago (1 children)

Don't you see that by running a LLM and impersonating a real person you are being deceptive and decrease trust?

[–] rainwall@piefed.social 2 points 52 minutes ago* (last edited 51 minutes ago) (1 children)

I don't read him as an LLM, just someone verbose. A couple of accusations in a thread isn't enough to prove it either way.

Is there some other evidence I'm missing?

[–] rimu@piefed.social 2 points 47 minutes ago (1 children)
[–] rainwall@piefed.social 1 points 43 minutes ago (1 children)

Well, that's disheartening. Thanks for the tips.

[–] rimu@piefed.social 1 points 41 minutes ago

Don't worry, you're on PieFed where we flag stuff like this. It's the Lemmy users who are getting played.

In effect, the fruit is a nonce.

What's the fruit's name, Jimmy Savile?