[–] t3rmit3@beehaw.org 1 points 17 hours ago* (last edited 17 hours ago) (1 children)

AI is non-deterministic, sure

This is incorrect. They are in fact completely deterministic. Studies have shown that when all inputs, weights, and sampling parameters like temperature are held fixed, they produce the exact same token sequences (outputs). The appearance of non-determinism comes from pseudo-random values (which are themselves deterministic, they just appear otherwise) and from user ignorance (in the technical sense, not the value-judgement sense). In fact, the process of 'tuning' LLMs is heavily focused on adjusting input values to surface preferred outputs, which would not work in a non-deterministic system.
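The pseudo-randomness point is easy to demonstrate with a toy sketch (hypothetical names; a seeded PRNG stands in for a model's sampler): fix the seed and every other input, and the "random" sampling replays identically.

```python
import random

def sample_tokens(seed, vocab, n=5):
    # Toy stand-in for sampled decoding: with the seed (and every other
    # input) held fixed, the pseudo-random choices are fully reproducible.
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]

vocab = ["the", "cat", "sat", "on", "mat"]
run1 = sample_tokens(42, vocab)
run2 = sample_tokens(42, vocab)
assert run1 == run2  # identical inputs -> identical "output", every time
```

Vary the seed and the output changes, which is exactly the "appears non-deterministic" effect: the randomness lives in an input you usually don't see.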

When I type “ls” I’m pretty fucking sure I’m not going to get “rm” style results.

Yes, but we don't trust humans not to rm what they shouldn't either, which is why rm protects / by default (--preserve-root) and makes you pass --no-preserve-root to override it. ls is not supposed to perform write actions. Agentic LLMs are. And just like you wouldn't build and test on your production server in case the code you execute has an unexpected adverse effect, you shouldn't be running LLM agents in a location or way where the actions they perform can have an unexpected adverse effect either. The genre of jokes about a new employee bringing down Prod or deleting source code is older than most people (which, to be fair, given that the median age is 31, is true for a lot of things).
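A minimal sketch of that containment idea (all names hypothetical, a toy and not a real security boundary): a guard that lets an agent read anywhere but only write or delete inside a designated sandbox directory.

```python
import os
import shlex

SANDBOX = "/tmp/agent-sandbox"  # hypothetical: the only place the agent may write

def _in_sandbox(path):
    # Resolve to an absolute path and require it to be the sandbox
    # or strictly inside it (prefix check with a separator).
    p = os.path.abspath(path)
    return p == SANDBOX or p.startswith(SANDBOX + os.sep)

def guard(command):
    # Toy policy: read-only commands pass anywhere; destructive/write
    # commands pass only if every target sits inside the sandbox;
    # anything unrecognized is denied.
    parts = shlex.split(command)
    if not parts:
        return False
    if parts[0] in ("ls", "cat", "grep"):
        return True
    if parts[0] in ("rm", "mv", "cp", "touch"):
        targets = [p for p in parts[1:] if not p.startswith("-")]
        return bool(targets) and all(_in_sandbox(t) for t in targets)
    return False

assert guard("ls /etc")
assert guard("rm /tmp/agent-sandbox/scratch.txt")
assert not guard("rm -rf /")
```

Same principle as a staging server: the agent can still do damage, but only to things you have already decided are disposable.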

LLMs are just a class of software. They're not good or bad any more than a hammer is good or bad (and can also be used to build or to destroy).

The problem isn't LLMs, it's the entities who control the most powerful ones (corporations and governments), and what those entities are doing with them; using them as weapons against us, rather than as tools to aid us.

[–] t3rmit3@beehaw.org 6 points 18 hours ago* (last edited 18 hours ago) (1 children)

As usual, politicians trying to use children and fear as a wedge to get people to accept government surveillance and control.