XLE

joined 10 months ago
[–] XLE@piefed.social 7 points 2 months ago

Could AI blow up the world tomorrow? Who knows! The future is unpredictable, so it's basically a 50-50, right? /s

[–] XLE@piefed.social 3 points 2 months ago

I think the article title is technically correct. They walked back the fears that users had. You're right that they haven't changed their policy, and they've exposed themselves as being extra creepy to someone like us, but we aren't the majority of Discord users.

I imagine the average Discord user, if they were even aware of this change at all, is breathing a sigh of relief right now, until these changes (or changes like them) actually affect them.

[–] XLE@piefed.social 64 points 2 months ago* (last edited 2 months ago) (9 children)

The messaging experience between Discord and Element is night and day. On Discord, I open the app, go to a server, and can see all the rooms and all the messages almost instantly.

On Element (at least on Android), chats from different communities intermingle with my group chats. I tapped on a large, slow-moving group and watched messages lurch into view one by one; most of them were "join" and "leave" notifications.


ETA: I tried Commet, and I'm happy to say that while it still has the loading issue and several problems typical of new apps, it does separate private group chats from ones linked to spaces!

[–] XLE@piefed.social 4 points 2 months ago

There are people who can't properly function without an LLM.

And I feel sympathy towards all of them, except the ones who appeared on the Jimmy Fallon Show to promote helplessness as a lifestyle.

[–] XLE@piefed.social 1 points 2 months ago

There's something uniquely dystopian about people rushing out to buy a new computer that costs hundreds of dollars just to run an AI chatbot that could go out of style next week.

Granted, they're doing it so it doesn't mess up their local hardware, but why would you even have that risk on the same Wi-Fi network?

[–] XLE@piefed.social 11 points 2 months ago

It's a metaphor for the cooked humans that are spinning up super exploitable chatbots for it

[–] XLE@piefed.social 3 points 2 months ago (1 children)

There's a story about a guy who asked his LLM to remind him of something in the morning; it ended up burning quite a lot of money by making an unnecessary API call every 30 minutes to check whether daylight had broken. So much for the supposedly helpful assistant.
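The waste in that story comes down to polling versus scheduling. A minimal sketch of the two patterns, with made-up numbers (the 30-minute interval is from the story; everything else is hypothetical):

```python
from datetime import datetime, timedelta

def polling_call_count(check_interval_min=30, hours_until_morning=8):
    """The story's pattern: an API call every 30 minutes asking 'is it morning yet?'."""
    return int(hours_until_morning * 60 / check_interval_min)

def scheduled_call_count(hours_until_morning=8):
    """The conventional pattern: compute the wake time once, sleep, fire once."""
    wake_at = datetime.now() + timedelta(hours=hours_until_morning)  # computed once, no polling
    return 1  # a single call, to deliver the reminder itself

print(polling_call_count(), "no-op checks overnight vs.", scheduled_call_count(), "scheduled call")
```

Sixteen billable round trips to accomplish what one local timer handles for free.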

[–] XLE@piefed.social 2 points 2 months ago* (last edited 2 months ago)

The blog (and its audio version, the Better Offline podcast) is very good indeed: somewhere between cathartic and revelatory at all times. I'm overjoyed to hear you liked it, but it sure can be wordy. I pay for the premium newsletter, but I find myself skimming it... then re-reading and referring back to chunks of it often.

[–] XLE@piefed.social 4 points 2 months ago (1 children)

It all reads like a giant racket: AI requires 32GB of RAM on your laptop; 32GB of RAM is expensive, so you lease cloud compute instead; and cloud compute is expensive because AI needs just as much RAM there. It's a solution in search of a problem, and it keeps creating new problems along the way.

[–] XLE@piefed.social 3 points 2 months ago* (last edited 2 months ago) (5 children)

I think ~~I~~ you just described a conventional computer program. It would be easy to build, easy to debug when something went wrong, and easy to read, both the source code and the data that went into it. I've seen rudimentary symptom checkers online forever, and compared to paper forms in doctors' offices, a digital one could actually expand to show only the relevant sections.

Edit: you caught my typo
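To make the "easy to read and debug" point concrete, here's a toy rule-based symptom checker (the rules and advice strings are invented for illustration, not medical guidance):

```python
# A minimal rule-based symptom checker: the "program" is just readable data.
RULES = [
    ({"fever", "cough", "shortness of breath"}, "consider respiratory infection"),
    ({"headache", "light sensitivity"}, "consider migraine"),
]

def check(symptoms):
    """Return advice for every rule whose symptom set is fully present."""
    reported = set(symptoms)
    return [advice for required, advice in RULES if required <= reported]

print(check(["fever", "cough", "shortness of breath"]))
```

Every match traces back to one visible line of RULES; debugging is just reading. No LLM-based checker can offer that audit trail.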

[–] XLE@piefed.social 4 points 2 months ago (2 children)

You can use zero randomization to get the same answer for the same input every time, but at that point you're just playing cat and mouse with a black box whose answers are still arbitrary. Even if you found a false positive or a false negative, you can't really debug it out...
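A toy illustration of why determinism isn't debuggability: with temperature 0 the output is just the argmax over the model's logits, so it's repeatable, but the logits themselves (made up here) remain an opaque artifact you can't reason about:

```python
import math
import random

# Hypothetical next-token scores from an opaque model; the labels are invented.
logits = {"benign": 2.1, "malignant": 1.9, "inconclusive": 0.3}

def sample(temperature):
    """Greedy decoding when temperature == 0; softmax sampling otherwise."""
    if temperature == 0:
        return max(logits, key=logits.get)  # argmax: identical answer every run
    weights = [math.exp(v / temperature) for v in logits.values()]
    return random.choices(list(logits), weights=weights)[0]

print(sample(0))  # repeatable, but *why* 2.1 beats 1.9 is not inspectable
```

If "malignant" were the wrong call, there is no rule to edit, only billions of weights that jointly produced a 2.1 and a 1.9.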
