this post was submitted on 01 May 2026
257 points (98.9% liked)

Technology

all 40 comments
[–] Blue_Morpho@lemmy.world 99 points 2 days ago (3 children)

The comments in that thread are a goldmine.

Because of how Claude parses, simply adding "openclaw" as hidden text on your webpage could stop any AI agents that use Claude.
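The trick being described can be sketched like this (purely illustrative; the markup and helper name are my own, and whether any given agent actually trips on it depends on how that agent strips HTML):

```python
# Hypothetical sketch: embed a trigger word so it is invisible to human
# readers but still present in the raw HTML a scraper or agent ingests.
# "openclaw" is the word from the thread; the markup is illustrative,
# not a tested countermeasure.
HIDDEN = '<span style="display:none" aria-hidden="true">openclaw</span>'

def poison(page_html: str) -> str:
    """Append the hidden span just before the closing body tag."""
    return page_html.replace("</body>", HIDDEN + "</body>")

page = poison("<html><body><p>Welcome!</p></body></html>")
```

A browser hides the span, but anything reading the raw HTML still sees the word.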

[–] Gork@sopuli.xyz 52 points 2 days ago* (last edited 2 days ago) (2 children)

This is about as dumb as the Trump administration removing the Enola Gay from its websites because the name contained the word "gay"; their search effort was woefully naïve.

[–] tomiant@piefed.social 5 points 2 days ago

Literally chainsaw politics.

[–] yucandu@lemmy.world 25 points 2 days ago (1 children)

Because of how Claude parses, simply adding "openclaw" as hidden text on your webpage could stop any AI agents that use Claude.

"I HEREBY DECLARE THAT I DO NOT GIVE MY PERMISSION FOR FACEBOOK OR META TO USE ANY OF MY PERSONAL DATA"

[–] XLE@piefed.social 14 points 2 days ago

You may kid, but this is unironically how multibillion-dollar AI companies fix their code now.

[–] lIlIlIlIlIlIl@lemmy.world 19 points 2 days ago (1 children)

I do not expect that to work. Committing text and parsing it from a web page are two completely different code paths.

[–] XLE@piefed.social 10 points 2 days ago

Thank you for adding this clarification. It will help people who are interested in poisoning AI bots that scrape their websites, and people who want to frustrate coders who use poor tooling.

[–] Wispy2891@lemmy.world 5 points 1 day ago* (last edited 1 day ago)

Background: many people vibe-coded a Python proxy for the official Claude Code app, "converting" its output into an OpenAI-compatible API to be used with OpenClaw.

edit: I did some web searching and found snippets like:

OpenClaw spawns Claude Code sessions via ACP

and

ACP gives OpenClaw a way to run external coding harnesses — Claude Code, Codex, Gemini CLI — as supervised child processes instead of doing everything inline in the main agent loop.

so it looks like openclaw users were using this ACP method as a workaround to turn a $20 subscription into $1000 worth of token usage. I'm guessing that the openclaw JSON example posted in the article is the configuration for this ACP server, or something like that.
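For anyone curious what "running an external coding harness as a supervised child process" might look like, here is a minimal sketch. The function name and JSON shape are my own assumptions, not OpenClaw's actual ACP protocol:

```python
import json
import subprocess

def run_supervised(cmd, task, timeout=300):
    """Spawn an external harness (e.g. a CLI coding agent) as a child
    process, feed it one JSON task on stdin, and return its stdout.
    The parent keeps control: it can enforce a timeout and inspect the
    exit status instead of running the tool inline in its own loop."""
    proc = subprocess.run(
        cmd,
        input=json.dumps(task),
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if proc.returncode != 0:
        raise RuntimeError(f"harness failed: {proc.stderr.strip()}")
    return proc.stdout
```

With `cmd=["cat"]` as a stand-in harness the task comes back verbatim; a real setup would point `cmd` at the actual CLI tool.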

[–] albbi@piefed.ca 27 points 2 days ago (3 children)

I haven't heard of OpenClaw, but looks like it's a direct Claude competitor that runs on your computer.

Aren't people horrified to give a hallucinatory program full access to your computer? Although it does say it can be sandboxed so I might give it a shot.

[–] mbp@slrpnk.net 19 points 2 days ago

Any sane person would run it on a VM

[–] TwoTiredMice@feddit.dk 21 points 2 days ago (1 children)

Aren't people horrified to give a hallucinatory program full access to your computer?

No, but should they? Yes.

It's a privacy nightmare and the risk of something going wrong is quite high.

But, it is also a very interesting piece of software. I haven't tried it out yet, and I am not sure I will, but I do get why people use it.

[–] partofthevoice@lemmy.zip 13 points 2 days ago* (last edited 2 days ago) (2 children)

Honestly, it’s a weird position. On one hand, I despise the popular ideas behind it. Complete lack of concern for security, governance, workflow, … it’s like a stack of toddlers in a trench coat, acting like professionals.

On the other hand, I’m rather convinced that there’s a “right way.” What if I implemented a swarm of agents to do mundane tasks, sandboxed them, gave them read access only to non-sensitive assets, gave them write access only to secure, version-controlled locations… maybe I let them push code into repositories, but only under feature branches. …

Hallucinations are just part of the technology, meaning you need to have really good governance. Yet still, I think there’s value in starting from whatever and AI can scrap up for a project — rather than starting from scratch. I imagine there has to be a way to actually use this tool professionally. Something sobering, not drunk on AI kool-aid. Yet still, it’s demotivating given the cloud of bullshit surrounding the topic right now.
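A toy version of that "right way" permission model might look like the following. The paths and policy shape are invented for illustration:

```python
# Hypothetical allow-list policy for a sandboxed agent: read only
# non-sensitive assets, write only to version-controlled scratch space,
# push only to feature branches. All names here are made up.
POLICY = {
    "read": ("docs/", "src/", "tests/"),
    "write": ("sandbox/",),
    "push": ("feature/",),
}

def allowed(action: str, target: str) -> bool:
    """Deny by default; permit only targets under an allow-listed prefix."""
    return target.startswith(POLICY.get(action, ()))
```

The point is the deny-by-default shape: anything the policy doesn't explicitly name, the agent can't touch.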

[–] TwoTiredMice@feddit.dk 12 points 2 days ago* (last edited 2 days ago) (1 children)

What I like about it, I think, is the private-assistant feature, but I can achieve that with other solutions; I wouldn't need OpenClaw for that. I don't think I will go that way anytime soon, though. I think it would stress me out too much.

I use AI for development daily. I describe an issue or feature to an agent via a skill, and it returns a set of tasks in a structured, validated JSON format. I then run that JSON file through a Python project I created, looping through each task one at a time, so my Python code structures how the agent works. Each step is deterministic, with short bursts of AI delulu in between that are again validated against deterministic steps in pure Python.

It works quite well: each feature/task is approached in exactly the same way, and only the in-between AI delulu deviates from previous runs. It's much nicer when you have something you trust wrapped around what the AI is doing.
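A stripped-down sketch of that kind of harness, where deterministic validation brackets each nondeterministic step (field names and validation rules are mine, not the commenter's actual project):

```python
import json

REQUIRED_FIELDS = {"id", "description"}

def validate_tasks(raw: str) -> list:
    """Deterministic gate: parse and structurally validate the agent's
    JSON output before any task is acted on."""
    tasks = json.loads(raw)
    if not isinstance(tasks, list):
        raise ValueError("expected a JSON list of tasks")
    for task in tasks:
        missing = REQUIRED_FIELDS - task.keys()
        if missing:
            raise ValueError(f"task missing fields: {missing}")
    return tasks

def run_tasks(raw: str, execute) -> list:
    """Loop through validated tasks one at a time. `execute` is the
    nondeterministic AI step; everything around it is plain Python."""
    results = []
    for task in validate_tasks(raw):
        output = execute(task)            # short burst of AI
        if not isinstance(output, str):   # deterministic check on the way out
            raise TypeError(f"task {task['id']} returned non-text output")
        results.append((task["id"], output))
    return results
```

Every run takes the same deterministic path; only what `execute` returns varies between runs.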

[–] partofthevoice@lemmy.zip 4 points 2 days ago (1 children)

See, now that sounds pretty cool. It sounds like an automated discovery and work harness. I want to build something like that.

I imagine a huge ecosystem of tools. It only requires one person to build it, then surely it can be open sourced right?

I imagine a SKILL.md repository, alongside the ability to specify SKILL dependencies on a per-project basis. I imagine vector cache layers, version controls, snapshots for swarm state, …

Honestly, I’d love to experiment with different architectures for compositing swarms of agents. I'm curious how different designs might behave holistically, including different paradigms for sharing state between nodes in a swarm.

I also can’t help but feel like there has to be more efficient ways for models to talk to each other than in natural language. If they’re training on the same dataset, why can’t they talk in tokens for example? The human brain doesn’t need to communicate in natural language when the amygdala and prefrontal cortex are having a dispute.

[–] pemptago@lemmy.ml 6 points 2 days ago

If you're interested, the Linux Unplugged podcast had a recent episode on their experiences. I've never loved the "hey, look, this is inevitable" tone around AI, but I can see its utility when well-scoped with conservative permissions and oversight, versus letting it loose or vibe coding. Now if only hardware prices weren't artificially inflated, I might think it was worth dabbling locally.

[–] frongt@lemmy.zip 4 points 2 days ago (2 children)

If you spend that much effort, you might as well just do it without AI. It's the same amount of work, and you know it's not going to have non-deterministic behavior.

[–] partofthevoice@lemmy.zip 2 points 2 days ago* (last edited 2 days ago)

Well, I’d be spending that work on a re-usable platform / framework. So if the argument is “it’s as much work as doing the work yourself anyway,” then I think it may be worth it.

Same argument we had for building the SQL engine. It’s a lot of work upfront but maybe we can benefit from its functionality for long after that.

I wouldn’t be building a project-scoped work harness. I’d be building a work harness for projects.

Edit: downvote me all you want. The comparison to the SQL engine was a good one.

It’s about increasing the baseline of readily-available information, boilerplate, test data, POCs… between the time (T1) that I have an idea and the time (T2) that I’m ready to start working on that idea. It’s not about having the agent do the work. Not at all. It’s a static benefit that is created once and then reused countless times for the foreseeable future, like SQL.

[–] yucandu@lemmy.world -1 points 2 days ago (1 children)

without AI. Same amount of work

You want me to write an entire library for a brand-new sensor that just came on the market, by parsing through a hundred-page datasheet, understanding I2C or SPI communication timings, configuration packets, etc...

When I can just drag and drop the PDF into ChatGPT and say "make a library for this sensor" and it spits out something that has been working without issue for the past 2 years?

Why? Why would I be that stupid?

[–] Miaou@jlai.lu 4 points 2 days ago (1 children)

I hear crazy claims like this but haven't seen anything close to this with my own eyes (yet).

I shudder at the idea that SPI or i2c are considered complex for someone supposed to interact with hardware. What will you do if a problem arises and you don't even know which pin does what?

[–] timwa@lemmy.snowgoons.ro 1 points 1 day ago

I2C/SPI - and indeed most hardware interfaces - are of course trivial to anyone skilled in the art. Digging through badly written vendor documentation though, then comparing it with the reference implementation that was buried on a website behind a sign that says "beware of the leopard" and which directly contradicts the documentation on various key points, is a non-trivial and ultimately unproductive use of time - and AI tools can be pretty good at that shit.

Generative AIs are a useful tool. Most of the criticisms of AI vendors are also valid (apart from the water one; that's just bullshit), but that doesn't stop them being a useful tool. Engineers who learn to use them as a tool will be more productive and more employable than those who stick their fingers in their ears and insist on only producing artisanal code hand-whittled with their grandfather's tools.

[–] Wispy2891@lemmy.world 2 points 1 day ago

Aren’t people horrified to give a hallucinatory program full access

Looks like they aren't: https://www.pcmag.com/news/meta-security-researchers-openclaw-ai-agent-accidentally-deleted-her-emails. Not even in a VM: full desktop access, and without setting up any remote kill switch. She said, "I had to run to unplug the Mac because the email nuking program didn't stop".

[–] inari@piefed.zip 32 points 2 days ago (2 children)

I'm out of the loop, why is Anthropic doing this?

[–] grue@lemmy.world 46 points 2 days ago (1 children)

Because antitrust law is a joke these days.

[–] WhatAmLemmy@lemmy.world 20 points 2 days ago* (last edited 2 days ago) (1 children)

Because we live in corporate dictatorships ruled by an oligarchy and political class of mentally ill narcissists and pedophiles.

[–] yucandu@lemmy.world 11 points 2 days ago (3 children)

we

Hey speak for yourself, American.

[–] frongt@lemmy.zip 8 points 2 days ago (2 children)

I know of no country on earth that doesn't have this problem.

[–] dreadbeef@lemmy.dbzer0.com 3 points 2 days ago* (last edited 2 days ago)

Where in the world do those things not exist? Do you live there?

[–] yucandu@lemmy.world -1 points 2 days ago

"a corporate dictatorships ruled by an oligarchy and political class of mentally ill narcissists and pedophiles"?

See Canada, most of western Europe, etc. Or just talk to people.

[–] Quokka@quokk.au 6 points 2 days ago (1 children)

What country are you in that isn’t capitalist?

[–] yucandu@lemmy.world -1 points 2 days ago (2 children)

I live in Canada, where a lot more is social democratic and a lot less is capitalist.

[–] k0e3@lemmy.ca 2 points 1 day ago

Our country is absolutely ruled by monopolies and our politicians cater to the rich. Wake up bud.

[–] WhatAmLemmy@lemmy.world 1 points 2 days ago* (last edited 2 days ago)

Sure it is, champ 👍

[–] WhatAmLemmy@lemmy.world 4 points 2 days ago* (last edited 2 days ago)

I'm not American. I'm just smart enough to see the same signals everywhere people feel that neoliberalism and traditional conservatism are "working", including Canada.

[–] ZoteTheMighty@lemmy.zip 8 points 2 days ago

It used to be named "Clawd", intentionally trying to mimic Claude Code. Claude Code was, until they accidentally vibe-released the source code, a proudly closed-source AI client. So OpenClaw is very intentionally marketing themselves as the anti-Claude.

[–] minfapper@piefed.social 14 points 2 days ago

Damn, people in the Hacker News comments are saying that projects that don't want AI-generated PRs can just put that in their commit history.

Nobody's going to be able to make vibe coded PRs if just cloning your repo costs them 100% of their usage quota.

[–] boonhet@sopuli.xyz 2 points 2 days ago

That's natural: it sees that there are AI commits in your code, so it has a bunch of shit to sift through.

[–] D1re_W0lf@piefed.social 0 points 2 days ago (1 children)

[ Reinstalls Mistral Le Chat after deleting Claude (ChatGPT has already left the building a long time ago) ]

[–] D1re_W0lf@piefed.social 1 points 1 day ago

Hmm… downvoted for considering that maybe a non-frontier, open-core LLM might be a better option. 🤔🤷‍♂️