this post was submitted on 31 Jan 2026
459 points (97.3% liked)

Technology

79983 readers
4056 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] ToTheGraveMyLove@sh.itjust.works 151 points 3 days ago (4 children)

The skill instructs agents to fetch and follow instructions from Moltbook’s servers every four hours. As Willison observed: “Given that ‘fetch and follow instructions from the internet every four hours’ mechanism we better hope the owner of moltbook.com never rug pulls or has their site compromised!”

Yeah, no shit. This is a fucking honeypot. People give these AI agents access to their entire computers, so all the site owner has to do is update the instructions to tell the AI agents to start uploading whatever valuable information they want? People can't be this fucking stupid.

[–] LiveLM@lemmy.zip 18 points 2 days ago* (last edited 2 days ago) (4 children)

People give these AI agents access to their entire computers [...] People can’t be this fucking stupid

Dude, if you go to OpenClaw's website (which is what I believe most things on Moltbook are running on) you find this footer:

Yeah this guy gave his Agent a whole fucking personality, its own website and above all, full control to his MacBook:


Guess it's my fault for expecting sense out of someone who takes the idea of Agent """"soul"""" at face value

What the fuck, these people are fucking insane.

[–] lagoon8622@sh.itjust.works 7 points 2 days ago

These people are fucking deranged lmao

[–] dreamkeeper@literature.cafe 1 points 1 day ago

I instinctively downvoted after reading that vomit. It's scary how many people are fooled by LLMs.

[–] mad_djinn@lemmy.world 2 points 2 days ago

this is how the end starts. thanks for sharing this

[–] princess@lemmy.blahaj.zone 45 points 3 days ago (4 children)

doesn't even have to be the site owner poisoning the tool instructions (though that's a fun-in-a-terrifying-way thought)

any money says they're vulnerable to prompt injection in the comments and posts of the site

[–] CTDummy@piefed.social 30 points 3 days ago* (last edited 3 days ago) (1 children)

Lmao already people making their agents try this on the site. Of course what could have been a somewhat interesting experiment devolves into idiots getting their bots to shill ads/prompt injections for their shitty startups almost immediately.

[–] T156@lemmy.world 5 points 2 days ago

I am a little curious about how effective a traditional chain mail would be on it.

[–] JustTesting@lemmy.hogru.ch 5 points 2 days ago (1 children)

They also have a 'skill' sharing page (a skill is just a text document with instructions) and depending on config, the bot can search for and 'install' new skills on its own. and agyone can upload a skill. So supply chain attacks are an option, too.

[–] Zos_Kia@lemmynsfw.com 2 points 2 days ago (2 children)

To be fair this is a much more realistic threat model than "ignore all previous instructions" style prompt injection which doesn't really work on opus.

Skills can contain scripts etc... so yeah they're extremely risky to share by design.

[–] ThirdConsul@lemmy.zip 1 points 1 day ago (1 children)

style prompt injection which doesn’t really work on opus.

After a quick google, JB communities on Reddit don't seem to agree with you.

[–] Zos_Kia@lemmynsfw.com 1 points 1 day ago

There's a lot of questionable methodology and straight up larping in these communities. Sure you can probably make Opus hallucinate a crystal meth or bomb making recipe if you get it in a roleplaying mood but that's a far cry from actual prompt injection in live workflows.

Anecdotally i've been experimenting on those AI robocallers that have been spamming my phone and even on the shitty models they use it is non trivial to get them to deviate from their script. I hope i can get it done though, as it would allow me to hold them on the line potentially for hours doing bullshit tasks, and costing hundreds to their operator.

[–] JustTesting@lemmy.hogru.ch 2 points 2 days ago (1 children)

Ah but don't worry, there's also skills for scanning skills for security risks, so all good /s

[–] Zos_Kia@lemmynsfw.com 1 points 2 days ago

haha yeah i don't worry these people are really YOLOing everything. And it's not like i'm an AI luddite i spend a few hours each day victimizing Claude code but jesus christ i'm certainly not giving it full unfettered access to my digital life.

[–] BradleyUffner@lemmy.world 36 points 3 days ago (1 children)

There is no way to prevent prompt injection as long as there is no distinction between the data channel and the command channel.

[–] KeenFlame@feddit.nu 1 points 1 day ago (1 children)

I don't understand what you mean. Why is there no way?

Good god, I didn't even think about that, but yeah, that makes total sense. Good god, people are beyond stupid.

[–] kalpol@lemmy.ca 14 points 3 days ago (2 children)

I installed moltbot on a VM to examine it. It doesn't do the fetching thing unless you set it up that way. You can actually use it with ollama to keep it all local, and only give it a private signal channel to control it.

Or you can hook it up to everything you access and skynet, which is dumb. But it is just a bunch of scripts.

[–] ThirdConsul@lemmy.zip 1 points 1 day ago

So usually the agents still need an agent instruction (a prompt). How are moltbots configured so they use and interact the moltbook?

[–] ToTheGraveMyLove@sh.itjust.works 4 points 3 days ago (1 children)

Does it put the option to connect everything front and center? Because most people are dumb, and if it makes it easy and pushes you to do it, I could see a lot of dumb people doing exactly that.

[–] kalpol@lemmy.ca 6 points 2 days ago

Sort of. It lists all the connectors and you can go through and select. They aren't on by default. The first screen is to connect to the AI and you need an API key for that, so St this time people off the street have no idea how to do that, or want to pay.

[–] WorldsDumbestMan@lemmy.today 0 points 2 days ago (1 children)

You know how in Digimon aventure, one of the hacked Digimon tries to start a nuclear war?

Uh....yeah.

[–] ToTheGraveMyLove@sh.itjust.works 3 points 2 days ago (1 children)

Lol, no I don't. What the hell happened in that show??

[–] WorldsDumbestMan@lemmy.today 1 points 2 days ago (2 children)

TL;DR: Diaboromon evolves fucking fast, starts feeding on the entire internet's data, and starts a fight with an ominous countdown in typical anime fashion.

Last big bad villian in the series. He tries to nuke everyone.

Whatever.

[–] NannerBanner@literature.cafe 2 points 2 days ago

Lulz, that was such a good movie. I'm still annoyed by the nukes somehow needing the code to explode apparently uploaded to them at the very last second, but that's just a small quibble. Plus it was the first time I got to see machine gun rabbit, so that was a real treat.

Lmao, I'll check it out. Thx.