XLE

joined 10 months ago
[–] XLE@piefed.social 48 points 2 months ago (2 children)

Well, it's Hacker News: a place run by Marc Andreessen, with huge incentives, especially on the personal level, to worship AI.

I'm surprised there are comments smart enough to push back against the corporate line.

[–] XLE@piefed.social 52 points 2 months ago

I'm really glad to hear that.

Funny that the top comment is a pro-AI guy trying to cancel the critics for not being part of the project, when AI by definition does not care about, or understand, any code it creates.

[–] XLE@piefed.social 1 points 2 months ago (2 children)

Are you saying you have specific evidence of this? If so, please show exactly how AI will do something people haven't already done. Or are you saying "potential" because you don't?

[–] XLE@piefed.social 1 points 2 months ago (4 children)

It's a 1:1 correlation. Are you not familiar with any of the age-old cautionary tales about them?

https://youtu.be/ajBrcoEQauU

[–] XLE@piefed.social 1 points 2 months ago* (last edited 2 months ago) (1 children)

There is nothing "aligned" or "misaligned" about this. If this isn't a troll or a carefully coordinated PR stunt, then the chatbot-hooked-to-a-command-line is doing exactly what Anthropic told it to do: predicting the next word. That is it. That is all it will ever do.

Anthropic benefits from fear drummed up by this blog post, so if you really want to stick it to these genuinely evil companies run by horrible, misanthropic people, I will totally stand beside you if you call for them to be shuttered and for their CEOs to be publicly mocked, etc.

[–] XLE@piefed.social 2 points 2 months ago (6 children)

They're called rumors.

[–] XLE@piefed.social 30 points 2 months ago (1 children)

According to the article, the lack of sandboxing is intentional on Anthropic's part. Anthropic also fails to realistically communicate how to use their product.

This is Anthropic's fault.

[–] XLE@piefed.social 2 points 2 months ago

If I remember correctly, Elon Musk promised he would pay for lawsuits against companies that did this.

[–] XLE@piefed.social 2 points 2 months ago

ollama itself is safe the same way VLC Media Player is safe. You just load a model like an MP4. I don't think it's uniquely vulnerable to anything, as it just spits out text.

Now the real trouble comes when people decide to connect it to a command line...
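To make the distinction concrete, here's a minimal sketch (entirely hypothetical; the model call is a stand-in, not ollama's actual API): a local model only ever returns text, and the risk appears only when the caller chooses to execute that text.

```python
import subprocess


def fake_model(prompt: str) -> str:
    # Stand-in for a local LLM call; it only ever returns a string.
    # A real model's output could just as easily contain a shell command.
    return "rm -rf ~/important"  # text, not an action


def safe_handler(prompt: str) -> str:
    # Like a media player: render the output, never execute it.
    return fake_model(prompt)


def risky_handler(prompt: str) -> None:
    # The failure mode: wiring model text straight into a shell.
    # Never do this with untrusted model output.
    subprocess.run(fake_model(prompt), shell=True)


output = safe_handler("tidy my home dir")
print(output)  # prints the string; nothing is executed
```

The danger isn't in the model or the runner; it's in the decision to treat generated text as an instruction rather than as content to display.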

[–] XLE@piefed.social 1 points 2 months ago* (last edited 2 months ago)

Voting with our wallets is unfortunately not a great strategy, as most of these companies have realized they don't need to bother selling us things anymore. But it can be one tool in the toolbox, because it works sometimes. We do have other options, including shouting criticism from the rooftops, demonizing the CEOs behind this, and perhaps even voting on ballots.

[–] XLE@piefed.social 4 points 2 months ago

It's very strange to me how so many puff pieces got written about it in so little time. At least they're co-opting quotes and articles that raise legitimate issues, but it's kind of redundant.

It's also striking that the movement bears the name and logo of ChatGPT while telling us it is bad. It almost looks like an ad, especially now, when many advertisements take advantage of cynicism. (I've seen speculation that the AI "Friend" pin intentionally used posters with empty white backgrounds to encourage graffiti, or perhaps they added it themselves.)

[–] XLE@piefed.social 13 points 2 months ago

Yes, apparently. They sent emails back and forth, verifying they were about to commit a crime, and then they did it.
