spit_evil_olive_tips

joined 2 years ago
[–] spit_evil_olive_tips@beehaw.org 12 points 1 day ago (1 children)

Anthropic: Claude learned

yeah I'm gonna stop you right there

[–] spit_evil_olive_tips@beehaw.org 1 point 5 days ago (1 children)

"schools have a bunch of structural problems that should be fixed" - yes, agreed 1000%

"schools have a bunch of structural problems that should be fixed, and therefore schools shouldn't ban phones until the structural problems are fixed" - nope. that's a complete non-sequitur.

"fix structural problems with schools" is a gigantic undertaking. it's absolutely worth doing, but it's the kind of thing that will take many many years, and effort across many many different fronts. it's not like Congress can pass the Fix Structural Problems In Schools Act of 2026 this summer and then starting this September schools are now fixed.

"you can't do that small change until all the larger problems are fixed" ends up being essentially a thought-terminating cliché.

[–] spit_evil_olive_tips@beehaw.org 7 points 6 days ago (1 children)

As an aside, is the Guardian becoming a shit rag? Lately (last year or two) I’ve noticed a huge dip in their quality.

what I've heard previously is that the Guardian's UK edition sucks, and that the US edition is somewhat better, but at this point I'm comfortable lumping them together.

the article that flipped the "assume everything they publish is bullshit" switch for me was Number of AI chatbots ignoring human instructions increasing, study says from a few months ago.

it's written with the tone you'd expect from "serious" journalism:

AI chatbots and agents disregarded direct instructions, evaded safeguards and deceived humans and other AI, according to research funded by the UK government-funded AI Security Institute (AISI). The study, shared with the Guardian, identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehaviour between October and March, with some AI models destroying emails and other files without permission.

but if you read carefully...it's tweets. it's just fucking tweets. they released a "study" that is a graph of "tweets over time" and claimed that it says something about the prevalence of AI "going rogue".

and in particular, they take the one story about the Meta executive who allowed an AI "agent" to delete all their emails, notice that there's a bunch of tweets discussing it, and conflate that with an increased occurrence of it happening.

it's the equivalent of saying that there were 10,000 moon landings in 1969 because you looked back at newspaper archives and found 10,000 "man lands on moon" headlines. just complete fucking amateur hour data analysis, and for the Guardian to publish it uncritically is shameful.

[–] spit_evil_olive_tips@beehaw.org 34 points 1 week ago* (last edited 1 week ago) (6 children)

it gets even stupider than that:

We acknowledge funding from Arnold Ventures

an American company that is the philanthropic vehicle of billionaires John D. Arnold and Laura Arnold

who is this John Arnold guy anyway...let's see...and....oh

since February 2024, is a member of the board of directors of Meta.

oh, and fun fact, it's not even a real fucking charity:

The Laura and John Arnold Foundation was initially created as a philanthropic organization, but was restructured as a limited liability company and renamed Arnold Ventures in January 2019. The organization's LLC structure is intended to allow it to operate with more flexibility.

so he's on the board of directors for Meta, which among other things owns Instagram...and he has a side business that pretends to be a charity even though it's not, and it funds publication of a "study" saying no, teenagers having cell phones 24/7 is totally fine actually.

the tobacco industry used to pay people to wear white lab coats and say cigarettes didn't cause cancer. it's tempting to think of ourselves as more savvy than they were, and look back in hindsight and say "how could people have fallen for such obvious bullshit?"

well...