XLE

joined 10 months ago
[–] XLE@piefed.social 1 points 1 month ago (2 children)

To belabor the chess analogy: I would say a chessbot didn't work if it randomly caused pieces to appear. Or if it made exceedingly lousy moves. You'd apparently say it was working because it technically changed the board.

Literally nobody is saying the token predictor isn't predicting tokens. It's just predicting the wrong tokens, which normal people call "not working," while tech evangelists prefer to call it "hallucination" or "misalignment" depending on the narrative they're aiming for.

[–] XLE@piefed.social 18 points 1 month ago (1 children)

I wanted to give her the benefit of the doubt because surely, I thought, a security researcher couldn't be that stupid. But no, she is more stupid than the title would suggest.

She followed the techbro trend of buying a brand new computer, a Mac Mini, just to run this garbage AI agent. People supposedly buy a second computer to keep the AI agent from destroying their primary computer... but then she hooked it up to her primary email inbox anyway.

While you shouldn't run this trash on your main computer, you could instead spin up a remote VM on a cloud service for much less money. She should have known this. She should probably have been intimately familiar with the process.

The icing on the cake was that she had no idea how to remotely shut down her Mac Mini. Or maybe she forgot to enable the option. Yet another reason to use a remote VM.

[–] XLE@piefed.social 1 points 1 month ago (4 children)

If you can understand that the sentence "AI doesn't work" is about LLMs, surely you can also understand that "not working" is synonymous with returning incorrect outputs.

I have literally no idea what else you'd be arguing. Its ability to generate words? Everybody knows it can do that.

[–] XLE@piefed.social 4 points 1 month ago

Your motivations are self-evident; I'm just pointing them out because you are misrepresenting them here.

[–] XLE@piefed.social 1 points 1 month ago (6 children)

You know exactly what we're talking about when we look at this article and say "AI doesn't work." If you want to feign outrage, save it for the tech companies that muddy the waters.

[–] XLE@piefed.social 5 points 1 month ago (1 children)

Do you not simply believe the prediction that AI will be super powerful any day now, ~~"AI 2027"~~ now retitled "The 2028 Global Intelligence Crisis"?

It's gonna happen for real this time. ~~Cryptocurrency~~ ~~NFTs~~ AI and cryptocurrency will upend the market with how incredible they have been.

Then, agentic commerce, coupled with stablecoins, gets rid of transaction fees and upends the business models of payment processors like Mastercard and card-focused banks like American Express.

/s

[–] XLE@piefed.social 18 points 1 month ago

AI chatbots come with randomization enabled by default. Even if you completely disable it (as another reply mentions, "temperature" can be controlled), changing a single letter of the input can still produce a totally different, and wrong, result. It's an unfixable "feature" of the chatbot system.
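To illustrate the temperature point: LLM samplers typically divide the model's logits by a temperature value before applying softmax, so low temperature concentrates probability on the top token and high temperature flattens the distribution. Here's a minimal sketch (the logit values are hypothetical, not from any real model):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax.
    Lower temperature -> sharper, more deterministic distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate tokens
logits = [2.0, 1.0, 0.1]

sharp = softmax_with_temperature(logits, 0.1)  # near-greedy: top token dominates
flat = softmax_with_temperature(logits, 2.0)   # closer to uniform: more random picks
```

Even at temperature near zero (effectively greedy decoding), a one-letter change to the input shifts the logits themselves, so the argmax, and everything generated after it, can still come out completely different.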

[–] XLE@piefed.social 14 points 1 month ago

I wonder if this admin was named maxwellhill.

[–] XLE@piefed.social 5 points 1 month ago (2 children)

You're definitely running around Lemmy defending AI, Iconoclast... Might as well be honest about it

[–] XLE@piefed.social 3 points 1 month ago (3 children)

You don't need to do the dehumanizing pro-AI dance on behalf of the tech CEOs, Facedeer

[–] XLE@piefed.social 5 points 1 month ago

15 would make her a girl, not a woman

[–] XLE@piefed.social 18 points 1 month ago (7 children)

Sorry what? Tech billionaires don't have to enable the free speech of sexually harassing a child online.

And if your argument is that sexually harassing a child online is "free speech" - and that's the best argument you have - that's not a good argument.
