this post was submitted on 02 Dec 2025
490 points (96.1% liked)

World News

[–] ameancow@lemmy.world 5 points 1 day ago* (last edited 1 day ago) (1 children)

Just a fun reminder of how we make AI.

We take what are essentially trillions and trillions of "dials" that turn between "this is right/this is wrong" and set them up to compare yuuuuuge sets of data, from pictures to books to vast collections of human chatter and experiences. We feed that data in with some big sets of instructions ("this is what a cat looks like, this is not"), and then we feed the whole thing the power equivalent of a small city... FOR A YEAR STRAIGHT. We just let it cook. It grows slowly, flipping all those trillions of dials over and over until it works out the relationships between all this data. At the end of this period, the machine can talk. We don't fully understand why.
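The "dials" here are the model's weights, and the flipping is gradient descent. A toy sketch of that loop (one made-up dial and a tiny fake dataset, nothing like real scale):

```python
import random

# Toy version of training: a single "dial" (weight) nudged toward
# whatever makes the model's guesses match the labeled data.
# Real models have trillions of these, updated over months of compute.

data = [(x, 2 * x) for x in range(10)]  # labeled examples: "this is right"
w = random.uniform(-1, 1)               # one dial, set randomly to start
lr = 0.01                               # how hard each nudge turns the dial

for epoch in range(1000):               # "let it cook"
    for x, target in data:
        guess = w * x
        error = guess - target          # the "this is wrong" signal
        w -= lr * error * x             # turn the dial a little

print(round(w, 2))                      # ends up near 2.0: relationship learned
```

Nobody hand-set the final value of `w`; it emerged from repetition, which is the point being made about why the result is hard to inspect afterward.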

We don't program the shit, we don't write hard code to make it comply with Asimovian commandments. We just grow it like a tree, and after it's grown there's not a lot we can do to change its structure. The tree is vast. So vast are its limbs and branches that nobody can possibly map it out and engineer ways to alter it. We can wrap new things around it, we can alter its desired outcomes and output, but whatever we baked into it will always be there.
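That "wrap new things around it" idea is roughly what fine-tuning and system prompts do: the grown base stays frozen and only a small layer on top changes. A hypothetical sketch (all names are illustrative, not a real framework API):

```python
# Sketch: the grown "tree" (base weights) is frozen; we only adjust a
# small wrapper on top. Whatever the base learned is still in there.

base_weights = [0.3, -1.2, 0.7]         # stand-in for the trillions of dials

def base_model(x):
    # Frozen: we never touch base_weights after "growing" them.
    return sum(w * x for w in base_weights)

adapter = {"scale": 1.0, "shift": 0.0}  # the small part we CAN change

def wrapped_model(x):
    # Output = base behavior, steered by the adapter.
    return adapter["scale"] * base_model(x) + adapter["shift"]

# "Alignment" tweaks adjust only the adapter, steering outputs without
# restructuring the tree underneath.
adapter["shift"] = 5.0
print(wrapped_model(1.0))               # steered output; base is unchanged
```

The wrapper can shift what comes out, but `base_model` itself is untouched, which is why whatever got baked in during training keeps leaking through.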

This is why they behave so weird. It's why they will say "I promise to behave" and then drive someone to suicide, and why whenever Elon tries to make Grok behave in a way that pleases him, it just leads to more problems and unexpected nonsense.

This is why we need to stop AI from taking over our decision making. This is why we can't allow police, military and governments to hand over control of life-and-death decision making to these things.

[–] wewbull@feddit.uk 2 points 1 day ago* (last edited 1 day ago) (1 children)

The problem I have with your description is that it abdicates responsibility for what eventually gets generated with a big shrug and "we don't fully understand why".

The choice of training data is key to how the final model operates. All sorts of depraved material must be part of the training set; otherwise the model wouldn't be able to generate the text it does (even if it's being coached).

It's clear the "AI race" is all about who gets the power of owning, and therefore influencing, everybody's information stream. If they couldn't influence it, there wouldn't be such a race.

[–] ameancow@lemmy.world 3 points 1 day ago

The problem I have with your description is that it abdicates responsibility for what eventually gets generated with a big shrug and “we don’t fully understand why”.

I'm not sure how it does that. I said that the instructions during training dictate what kind of AI it will be, and that wrapping new instructions around it has profound and unpredictable effects, which I tried to describe.

Nothing I said could imply that there's no human involvement in the creation of an AI. My point was just a lot broader: these things are made by people, using vast resources, with unpredictable results, and people are trying to make them power everything.

A racist chat LLM is bad. A generalized AI with access to the power grid, defense systems and drone targeting systems, built on a model that Elon Musk has made or fucked around with, is much, MUCH worse.