XLE

joined 10 months ago
[–] XLE@piefed.social 7 points 5 months ago

AI companies are definitely aware of the real risks. It's the imaginary ones ("what happens if AI becomes sentient and takes over the world?") that I suspect they'll put that money towards.

Meanwhile they (intentionally) fail to implement even a simple cutoff switch for a child who's expressing suicidal ideation. Most people with any programming knowledge could build a decent interception tool. All this talk about guardrails seems almost as fanciful.
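To be concrete about what "a decent interception tool" could look like: here's a minimal sketch of a keyword-based cutoff switch. The phrase list and the safety response are placeholders I made up for illustration; a real system would need far more care (clinically vetted phrase lists, classifier models, human escalation), but the basic plumbing is not exotic.

```python
# Illustrative sketch of a "cutoff switch": scan incoming messages for
# crisis phrases and divert to a fixed safety response instead of the model.
# CRISIS_PHRASES and the response text are hypothetical placeholders.
CRISIS_PHRASES = [
    "want to kill myself",
    "want to end my life",
    "suicidal",
    "hurt myself",
]

def needs_intervention(message: str) -> bool:
    """Return True if the message contains any crisis phrase."""
    text = message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)

def route(message: str) -> str:
    """Divert flagged messages to a safety response; pass others through."""
    if needs_intervention(message):
        return "SAFETY: connecting you with crisis resources (e.g. a helpline)."
    return "MODEL: " + message  # placeholder for a normal model reply

print(route("I have been feeling suicidal lately"))
```

The point isn't that naive string matching is sufficient; it's that even this trivial interposition layer sits between the user and the model, which is exactly the kind of control the companies claim is hard.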

[–] XLE@piefed.social 56 points 5 months ago (1 children)

This is good writing.

In promoting their developer registration program, Google purports:

Our recent analysis found over 50 times more malware from internet-sideloaded sources than on apps available through Google Play.

We haven’t seen this recent analysis — or any other supporting evidence — but the “50 times” multiple does certainly sound like great cause for distress (even if it is a surprisingly round number). But given the recent news of “224 malicious apps removed from the Google Play Store after ad fraud campaign discovered”, we are left to wonder whether their energies might better be spent assessing and improving their own safeguards rather than casting vague disparagements against the software development communities that thrive outside their walled garden.

[–] XLE@piefed.social 33 points 5 months ago (2 children)

The expectation is for the Foundation to use its equity stake in the OpenAI Group to help fund philanthropic work. That will start with a $25 billion commitment to “health and curing diseases” and “AI resilience” to counteract some of the risks presented by the deployment of AI.

Paying yourself to promote your own product. Promising to fix vague "risks" that make the product sound more powerful than it is, with "fixes" that won't be measurable.

In other words, Sam is cutting a $25 billion check to himself.

[–] XLE@piefed.social 10 points 6 months ago

Mighty thoughtful of Jeff Bezos to award money to a project that coincidentally promotes AI, and puts his name in the same sentence as environmentalism.

Meanwhile, Jeff Bezos' dirty secret is the environmental harm he's causing, and intentionally covering up, while trying to greenwash it.

[–] XLE@piefed.social 3 points 6 months ago

Artificial intelligence has been something people have been sounding the alarm about since the 50s. We call it AGI now, since "AI" got ruined by marketers 60 years later.

We won't get there with transformer models, so what exactly do the people promoting them actually propose? It just makes the Big Tech companies look like they have a better product than they do.

[–] XLE@piefed.social 7 points 6 months ago

Sam Altman himself compared GPT-5 to the Manhattan Project.

The only difference is it's clearer to most (but definitely not all) people that he is promoting his product when he does it...

[–] XLE@piefed.social 6 points 6 months ago

Geoffrey Hinton, retired Google employee and paid AI conference speaker, has nothing bad to say about Google or AI relationship therapy.

[–] XLE@piefed.social 8 points 6 months ago (3 children)

Superintelligence — a hypothetical form of AI that surpasses human intelligence — has become a buzzword in the AI race between giants like Meta and OpenAI.

Thank you, MSNBC, for doing the bare minimum and reminding people that this is hypothetical (read: science fiction).

[–] XLE@piefed.social 4 points 6 months ago

The only browser with a relatively unrestricted add-on ecosystem, and the best one for running add-ons on mobile, shouldn't need to add this as a baked-in feature.

I'm pretty sure this feature classifies as bloat.

[–] XLE@piefed.social 12 points 6 months ago

The article is pretty clear that the issue is with the Android devices themselves, not with lazy users. There is no indication that a malicious app has these permissions.

[–] XLE@piefed.social 8 points 6 months ago

I saw the writing on the wall when they purchased ad company Anonym... There were other signs before that, but that was the most brazen.

[–] XLE@piefed.social 14 points 6 months ago (1 children)

IMO: This post serves as a great representation of one of the pitfalls of post deletion.

I don't see why conversations should become inaccessible just because either the OP or a moderator decides they don't want the post up. There are plenty of reasons to remove a post (and I'm totally okay with respecting the author's wish to redact their username and content), but not many reasons to nuke everything underneath it. On Reddit, deleting a post leaves the comments intact. On Twitter or Mastodon, nuking replies would be nonsensical; and Lemmy is somewhat related to Mastodon.
