XLE

joined 10 months ago
[–] XLE@piefed.social 24 points 2 months ago (1 children)

It's accelerated: In 2001, technology companies were forced to collect user data and realized it could be a goldmine. Today, technology companies are being forced to collect people's IDs... I'm sure this will end up just fine.

[–] XLE@piefed.social 2 points 2 months ago

Oh, would you like to see something gross?

Brandon Wang's recent blog post, "A sane but extremely bull case on Clawdbot / OpenClaw"

You know it's bad when even Hacker News, a website funded by venture capital demon Marc Andreessen, calls him out:

Fine article but a very important fact comes in at the end — the author has a human personal assistant. It doesn't fundamentally change anything they wrote, but it shows how far out of the ordinary this person is. They were a Thiel Fellow in 2020 and graduated from Phillips Exeter, roughly the most elite high school in the US.

Other comments point out his opulence: hotels charging $850 a night, reservations at expensive Bay Area restaurants, buying $80 gloves, and typing in lowercase because "sam altman types like this, so this is what is cool to the agi believers."

[–] XLE@piefed.social 20 points 2 months ago

Something is fishy here.

Manifest V3 has hard limits, and the developer of uBlock Origin has documented problems with the new APIs that Adblock Plus calls "just fine":

uBO Lite reliably filters at browser launch, or when navigating to new webpages while its service worker is suspended. This can't be achieved without uBO Lite's declarative approach. Example: [video]

But he has also said that updating its filters depends on Google graciously allowing it:

There are no filter lists proper in uBOL. There are declarative rulesets and scripts which are the results of compiling filter lists when the extension package is generated. Those declarative rulesets and scripts are updated only when the extension itself updates.

In other words, you can either have a tool that blocks ads unreliably, or a tool that can only update ad-blocking rules if an ad company allows it.
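To make "declarative rulesets" concrete: a static Manifest V3 `declarativeNetRequest` rule looks roughly like this (the domain below is a placeholder, not a real filter):

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image"]
  }
}
```

Because static rules like this are compiled into the extension package itself, changing them means publishing a new extension version through the store, which is exactly the update dependency the uBlock Origin developer describes.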

There are also things that are objectively impossible to do with Manifest V3.

So consider me skeptical. Any perceived parity or improvement is due to competent developers, not to any willingness on Google's part to make Manifest V3 good. I think I'll trust the people building ad-blocking tech over a couple of university students.

(Copied from my original comment on an article that uses this as a source)

[–] XLE@piefed.social 17 points 2 months ago

What a disgusting philosophy to have towards others. Please keep it to yourself.

[–] XLE@piefed.social 67 points 2 months ago* (last edited 2 months ago) (8 children)

Is that seriously an "AI is like a child" poster made to motivate workers?

AI companies sure love to treat humans like machines, while humanizing machines.

[–] XLE@piefed.social 2 points 2 months ago* (last edited 2 months ago) (1 children)

The source for creating the model, the training data, is closed: locked down, a heavily guarded corporate secret. But unlike source code for software, this data might be illegally or unethically obtained, and Mistral may be violating the law by not publishing some of it.

You can "read" the assembly language of a freeware EXE program just as easily as you can "read" the open model of a closed-source LLM blob: not very easily. That's why companies freak out over potentially hidden training data: the professionals developing these models are incapable of understanding them. (I shudder to imagine a world where architects could not read blueprints.)

[–] XLE@piefed.social 2 points 2 months ago (3 children)

For the purpose of simplification, calling it as closed as an executable is close enough. Or a closed-source freeware ROM that you can download and run in an emulator (since you can just download models and run them via ollama or something similar). Or a closed-source game that supports modding and extension, like Minecraft. Or a closed-source DLL with documentation...

Anyway, the point is, it's closed. If it's not closed source, I'd beg you to link the source, both code and data, that compiles to the output.

[–] XLE@piefed.social 4 points 2 months ago

So they're basically following the early Elon Musk playbook: Look like the good guys by being slightly less bad than your enemies.

I'd like to think society won't fall for the same trick again.

[–] XLE@piefed.social 4 points 2 months ago (7 children)

"Open weights" just means you can download the blob they output from their sources. So... Closed source, unless they open it.

Their terminology is just tricky marketing. It would be like calling a closed-source program "open executable" or something...
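To make the analogy concrete, here is a toy sketch (every value is invented, and nothing here is specific to any real model): releasing "open weights" means publishing a file of numbers that anyone can download and run, while everything that produced those numbers stays private.

```python
import array

# Toy "released weights": just a bag of floats. Real models ship billions.
weights = array.array("f", [0.12, -0.85, 0.33])

# The company publishes the blob...
with open("model.bin", "wb") as f:
    weights.tofile(f)

# ...and anyone can download, load, and *run* it:
loaded = array.array("f")
with open("model.bin", "rb") as f:
    loaded.fromfile(f, 3)

def forward(x):
    # Toy "inference": a dot product with the released weights.
    return sum(w * xi for w, xi in zip(loaded, x))

print(forward([1.0, 2.0, 3.0]))
# Nothing in model.bin tells you what training data produced these
# numbers, any more than an EXE tells you its source code.
```

That's the sense in which "open weights" resembles freeware: runnable, inspectable only as opaque numbers, and reproducible by nobody but the vendor.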

[–] XLE@piefed.social 26 points 2 months ago (3 children)

"Malicious" keywords aren't exclusively the problem, as the LLM cannot differentiate between "malicious" and "benign". It's been trivially easy to intentionally or accidentally hide misinformation in LLMs for a while now. Since they're black boxes, it could be hard to identify. This is just a slightly more pointed example of data poisoning.

There is no threat in an LLM chatbot outputting text... unless that text is piped into something that can run commands. And who would be stupid enough to do that? Okay, besides vibe coders. And people dumb enough to use AI agents. And people rich enough to stupidly link those AI agents to their bank accounts.
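The danger in that last step fits in a few lines of Python (the "model reply" below is a made-up string standing in for poisoned output, not a real exploit):

```python
import shlex

# Text from an LLM is untrusted input. Suppose an "agent" asks a model
# which file to display and gets back this attacker-influenced reply:
llm_output = "notes.txt; rm -rf ~"

# Dangerous pattern: the model's words become a shell command string.
dangerous = f"cat {llm_output}"
# subprocess.run(dangerous, shell=True)  # would run BOTH commands

# Safer pattern: treat the output as data, never as code. Passed as a
# single argv element, the whole string is just one (weird) filename.
safe_argv = ["cat", llm_output]
print(shlex.join(safe_argv))  # quoted, so the ';' has no shell meaning
```

The point isn't that quoting makes agents safe; it's that the moment model text reaches a `shell=True`-style sink, data poisoning stops being a misinformation problem and becomes remote code execution.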

[–] XLE@piefed.social 5 points 2 months ago (9 children)

How much does it cost to run a system that supports it, at 2026 hardware prices? 4B is not the biggest AI model parameter count, but RAM and GPU prices are very daunting, thanks to the company releasing this closed-source model.

[–] XLE@piefed.social 2 points 2 months ago

I'm generally okay with a social media ban for children, as long as it doesn't come with extra surveillance as a result. You know, like how the US handled adult websites back before they got all Big Government about it.

With a little extra friction and with a little extra help for teachers and maybe social workers, it seems like a good idea.
