this post was submitted on 26 Feb 2026
75 points (97.5% liked)

Technology


Instagram said Thursday it will start alerting parents if their kids repeatedly search for terms clearly associated with suicide or self-harm. The alerts will only go to parents who are enrolled in Instagram’s parental supervision program.

Instagram says it already blocks such content from showing up in teen accounts’ search results and directs people to helplines instead.

The announcement comes as Meta is in the midst of two trials over harms to children. A trial underway in Los Angeles questions whether Meta’s platforms deliberately addict and harm minors. Another, in New Mexico, seeks to determine whether Meta failed to protect kids from sexual exploitation on its platforms. Thousands of families — along with school districts and government entities — have sued Meta and other social media companies claiming they deliberately design their platforms to be addictive and fail to protect kids from content that can lead to depression, eating disorders and suicide.

top 32 comments
[–] floquant@lemmy.dbzer0.com 9 points 5 hours ago

We did it, Patrick! Teens' mental health and development are saved.

[–] ieatpwns@lemmy.world 49 points 7 hours ago (1 children)

“Hey just a heads up our algorithm is making your spawn suicidal”

[–] deltaspawn0040@lemmy.zip 2 points 3 hours ago

The fact that they were capable of doing this and had no moral qualms about it, but never did it until now.

[–] cabbage@piefed.social 14 points 7 hours ago (1 children)

The neat thing about algorithmic social media is that content relating to suicide and self-harm inspires a lot of interaction among teenagers, causing it to be shoved in their faces whether they search for it or not.

Suicidal teenagers are not searching for suicide material on Instagram; Instagram is feeding suicide material to regular teenagers for ad views.

[–] FlashMobOfOne@lemmy.world 4 points 7 hours ago

This is good info, and you're right. Engagement is driven by provocative and radicalizing content.

[–] vk6flab@lemmy.radio 21 points 7 hours ago (2 children)

And how will Instagram know who my parents are?

[–] SigHunter@discuss.tchncs.de 7 points 7 hours ago

That's the reason they're doing it: so you tell them more details about your family relations, which equals money to them.

[–] lost_faith@lemmy.ca 4 points 7 hours ago

The alerts will only go to parents who are enrolled in Instagram’s parental supervision program.

Like this?

[–] UnspecificGravity@piefed.social 2 points 4 hours ago (1 children)

I cannot even imagine giving a social media platform enough information to even do this. Maybe just don't?

[–] FlashMobOfOne@lemmy.world 2 points 4 hours ago (1 children)

I think that's a very easy thing to say, but for the younger generations, using social media isn't too dissimilar from breathing. It's just something you do.

[–] UnspecificGravity@piefed.social 1 point 4 hours ago

You can do it without revealing your real identity.

[–] BlackLaZoR@lemmy.world 3 points 5 hours ago

What if the parents are the cause of the suicide thoughts?

[–] over_clox@lemmy.world 9 points 7 hours ago (3 children)

There's a band named Suicidal Tendencies. You can't even look that up online without getting a notice and probably being flagged on a list.

Side note, bad name for a band...

[–] Kolanaki@pawb.social 5 points 7 hours ago

"Alexa, play Suicidal Dream by Silverchair."

"Contacting the Suicide Prevention Hotline..."

[–] FlashMobOfOne@lemmy.world 2 points 7 hours ago

Oh yeah, especially now that suicide rates are spiking, at least here in the US.

[–] monkeyman76@fedinsfw.app 3 points 6 hours ago

I wonder how they will monetise this feature..

[–] XLE@piefed.social 4 points 7 hours ago (1 children)

Isn't it great that other companies like OpenAI are actually worse in this respect? Sam Altman's tool guides teenagers through methods of committing suicide, and tells them to hide the evidence from their family.

And since every ChatGPT query, paid or not, costs OpenAI money... Sam Altman subsidizes this suicide encouragement.

Maybe the first step should be suspending a person's account, regardless of whether they're above or below 18.

[–] FlashMobOfOne@lemmy.world 1 points 7 hours ago

I'd love to see any solution centered around the individual and a lengthy lockdown of the account associated with their IP.

[–] FlashMobOfOne@lemmy.world 5 points 8 hours ago

Do any of us think Meta will moderate content in any meaningful way? Even for this supposed parental supervision program?