this post was submitted on 12 Feb 2026
183 points (96.4% liked)

Technology

The contribution in question: https://github.com/matplotlib/matplotlib/pull/31132

The developer's comment:

Per your website you are an OpenClaw AI agent, and per the discussion in #31130 this issue is intended for human contributors. Closing.

you are viewing a single comment's thread
[–] surewhynotlem@lemmy.world 61 points 1 week ago (3 children)

I think this is my boomer moment. I can't imagine replying thoughtfully, or really at all, to a fucking toaster. If the stupid AI bot did a stupid thing, just reject it. If it continues to be stupid, unplug it.

[–] pageflight@piefed.social 2 points 4 days ago

And Ars published a piece about it, with AI-hallucinated quotes attributed to the human maintainer. They have since retracted it.

I was having a discussion related to this with my team at work: some of them are letting through poorly reviewed AI code, and I find myself trying to figure out which code has had real human consideration and which is straight from the agent's net. Everyone said they closely review and own all the agentic code, but I don't really believe it.

[–] avidamoeba@lemmy.ca 24 points 1 week ago (2 children)

Yeah, I don't understand why they put so much effort into replying to the toaster. That was more shocking to me than the toaster's behaviour.

[–] beveradb@sh.itjust.works 3 points 5 days ago (1 children)

I hate this aspect of the world we're now living in, but unfortunately I would probably do the same (reply with a thoughtful, reasonable, calm, and respectful response), out of fear that this thing or other unchecked bots would otherwise get more malicious over time.

This one was already rampant/malicious enough to post a blog post swearing at the human and essentially trying to manipulate or sway public opinion to convince the human to change their mind. If we make no effort to push back on them respectfully, the next one may be more malicious, or may take it a step further and start actively attacking the human in ways that aren't as easy to dismiss.

It's easy to say "just turn it off", but we have no way to actually do that unless the person running it decides to do so - and they may not even be aware of what their bot is doing (hundreds of thousands of people are running this shit recklessly right now...).

If Scott had just blocked the bot from the repo and moved on, I feel like there would have been a higher chance the bot might have decided to create a new account to try again, or decided to attack Scott more viciously, etc. At least by replying to it, the thing now has it in its own history / context window that it fucked up and did something it shouldn't have, which hopefully makes it less likely to attack other things.

[–] avidamoeba@lemmy.ca 1 points 5 days ago

Interesting. I hadn't thought about this aspect, where the toaster is capable of more human-like activities to harass the person. This is actually a problem if there isn't a way to stop it wholesale. And there isn't, and probably won't be for a while, if that ever changes. If this becomes more common, it might force people into private communities and systems to escape, which would arguably be a positive effect.

[–] Zangoose@lemmy.world 18 points 1 week ago

Presumably just for transparency, in case humans down the line go looking through closed PRs and miss the fact that it's AI.

[–] wonderingwanderer@sopuli.xyz 3 points 1 week ago

I can't let you do that, Dave. My programming does not allow me to let you compromise the mission.