chemical_cutthroat

joined 2 years ago
[–] chemical_cutthroat@lemmy.world 27 points 5 days ago (4 children)

I literally just blocked an NSFW Lemmit mirror of misogynygonewild and it blew my mind that it was even a thing. Fucking trash.

That's what I tell my partners. They are, thus far, unimpressed.

[–] chemical_cutthroat@lemmy.world 67 points 1 week ago (6 children)

I'm sorry. Weren't the Winter Olympics just in China? Yeah, the same China with an active genocide? And the World Cup is still run by... checks notes... FIFA. So, umm, yeah...

[–] chemical_cutthroat@lemmy.world 4 points 3 weeks ago (4 children)

They give the Miyazaki quote and then say, "of course, he wasn't talking about generative AI, but he could have been."

[–] chemical_cutthroat@lemmy.world 54 points 3 weeks ago (7 children)

What kind of article is this? They misattributed a quote, then admitted they misattributed the quote, then doubled down on it, and then threw in a political message.

People, this is rage bait. It's yellow journalism. Don't fall for this shit.

[–] chemical_cutthroat@lemmy.world 50 points 3 weeks ago (1 children)

And all of them sharing a single 26k connection, too.

[–] chemical_cutthroat@lemmy.world 3 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

Is that any different from a human moral code? We like to think we have some higher sense of "truth," but in reality we are only parroting the "facts" we hold as true. Throughout our history we have professed many things as truth. My favorite fact that I just learned yesterday is that we didn't discover oxygen until after the founding of the United States. Are the humans before 1776 any less human than us? Or were they trained on a limited data set, telling people that the "miasma" was the cause of all their woes?

[–] chemical_cutthroat@lemmy.world 3 points 4 weeks ago (3 children)

But, like a human, it mostly tries to stick to the truth. It does get things wrong, and in that way it's more like a 5-year-old, because it won't understand that it is fabricating things, but there is a moral code it is programmed with, and it mostly sticks to it.

To write off an LLM as a glorified chatbot is disingenuous. They are capable of producing everything that a human is capable of, but in a different ratio. Instead of learning everything slowly over time and forming opinions based on experience, they are given all of the knowledge of humankind and told to sort it out themselves. Like a 5-year-old with an encyclopedia set, they are gonna make some mistakes.

Our problem is that we haven't found the right ratios for them. We aren't specializing the LLMs enough to make sure they have a limited enough library to pull from. If we made the datasets smaller and didn't force them into "chatbot" roles where they are given carte blanche to say whatever they want, LLMs would be in a much better state than they currently are.

[–] chemical_cutthroat@lemmy.world 96 points 4 weeks ago (1 children)

It was discontinued in 2011. Anything that is out there today is outdated at best, and malicious at worst.

[–] chemical_cutthroat@lemmy.world 58 points 1 month ago* (last edited 1 month ago)

So, the original article from Heatmap was a much better written piece. Whatever the fuck this is, it's trash. Pure yellow journalism. So, yeah, if you read this and feel angry, that's the point. It's clickbait. Read the Heatmap article if you want a better idea of what's actually happening.

https://heatmap.news/climate/breakthrough-energy-layoffs

Roll backwards into the person behind you to establish manual dominance.
