BertramDitore

joined 2 years ago
[–] BertramDitore@lemm.ee 4 points 23 hours ago

In the early aughts, I asked my high school history teacher where he thought was the likeliest place for a global conflict to start. His answer was Kashmir.

[–] BertramDitore@lemm.ee 77 points 1 day ago (1 children)

This is state-sponsored terrorism. Absolutely despicably evil that anyone would wake up one day and think to themself “what should I do today….oh I know, let’s go traumatize some Palestinian kids by kidnapping and torturing them.” What the fuck.

What sets this government apart is the level of support and encouragement it provides to settlers, whether through supplying them with weapons or funding the creation of new outposts. This backing has enabled and emboldened settlers to carry out attacks on Palestinians, with the aim of displacing communities and annexing their land.

[–] BertramDitore@lemm.ee 16 points 3 days ago

Wow. That’s some seriously evil bullshit.

[–] BertramDitore@lemm.ee 2 points 1 week ago

Wow, that’s really impressive. I don’t really know why, but I want this.

Though I initially thought it was handheld, which would make their claim to be able to print on nearly any surface a little more believable. Still seems like incredible tech.

[–] BertramDitore@lemm.ee 1 point 2 weeks ago

don't think that FAANG companies realize how toxic their image is

Ain’t that the truth. Their behavior and the products they’ve been launching the last few years prove this to me. They’re completely out of touch with society and what consumers actually want. LLMs are another perfect example of that.

[–] BertramDitore@lemm.ee 30 points 4 weeks ago

Goldberg assumed it was fake, but he waited for the attacks to happen for confirmation, which they did. Perhaps people are a bit smarter than you think.

"I did believe that I was the target of a disinformation campaign. And so when I got the text at 11:44 a.m. saying that at 1:30 p.m. bombs will start falling in Yemen, I thought to myself, 'Well, if there is actually an American attack at 1:30 p.m., then I'll know that the Signal chat is real and I have to consider my next steps.'"

[–] BertramDitore@lemm.ee 3 points 1 month ago

Casey Newton founded Platformer after leaving The Verge around 5 years ago. But yeah, I used to listen to Hard Fork, his podcast with Kevin Roose, but I stopped because of how uncritically they cover AI and LLMs. It’s basically the only thing they cover, and yet they are quite gullible and not really realistic about the whole industry. They land some amazing interviews with key players, but never ask hard questions or dive nearly deep enough, so they end up sounding pretty fluffy and ass-kissy. I totally agree with Zitron’s take on their reporting. I constantly found myself wishing they were a lot more cynical and combative.

[–] BertramDitore@lemm.ee 9 points 1 month ago (1 children)

That’s an interesting article, but it was published in 2022, before LLMs were on anyone’s radar. The results are still incredibly impressive, without a doubt, but based on how the researchers explain it, it looks like it was accomplished using deep learning, which isn’t the same as LLMs, though they’re not entirely unrelated.

Opaque and confusing terminology in this space also makes it very difficult to determine which people, systems, or technologies are actually making these advancements. As far as I’m concerned none of this is actual AI, just very powerful algorithmic prediction models. So claims that an AI system has itself made unique technological advancements, when these systems are incapable of independent creativity, prove to me that nearly all their touted benefits are still entirely hypothetical right now.

[–] BertramDitore@lemm.ee 23 points 1 month ago (4 children)

The article explains the problems in great detail.

Here’s just one small section of the text which describes some of them:

All of this certainly makes knowledge and literature more accessible, but it relies entirely on the people who create that knowledge and literature in the first place—that labor that takes time, expertise, and often money. Worse, generative-AI chatbots are presented as oracles that have “learned” from their training data and often don’t cite sources (or cite imaginary sources). This decontextualizes knowledge, prevents humans from collaborating, and makes it harder for writers and researchers to build a reputation and engage in healthy intellectual debate. Generative-AI companies say that their chatbots will themselves make scientific advancements, but those claims are purely hypothetical.

(I originally put this as a top-level comment, my bad.)

[–] BertramDitore@lemm.ee 2 points 1 month ago (1 children)

Your description of those desks totally knocked some of my old memories loose. I remember going to a friend’s house in the late 90s when the first smallish “all-in-one” PCs started coming on the market (before the iMac claimed that space in ‘98). They had their new all-in-one PC set up on a tiny desk in the hallway outside their office. It was there so everyone in the family could use it, but I remember being shocked at how small it was, and so impressed that it didn’t need the whole corner of a room.
