World News
A community for discussing events around the World
Rules:
-
Rule 1: posts have the following requirements:
- Post news articles only
- Video links are NOT articles and will be removed.
- Title must match the article headline
- No United States internal news
- Recent (Past 30 Days)
- Screenshots/links to other social media sites (Twitter/X/Facebook/Youtube/reddit, etc.) are explicitly forbidden, as are link shorteners.
-
Rule 2: Do not copy the entire article into your post. Summarizing the key points in 1-2 paragraphs is allowed (even encouraged!), but posting large segments of articles in the body will result in the post being removed. If you have to stop and think "Is this fair use?", it probably isn't. Archive links, especially the ones created on link submission, are absolutely allowed, but those that avoid paywalls are not.
-
Rule 3: Opinion articles, or articles based on misinformation/propaganda, may be removed.
-
Rule 4: Posts or comments that are homophobic, transphobic, racist, sexist, anti-religious, or ableist will be removed. “Ironic” prejudice is just prejudiced.
-
Posts and comments must abide by the lemmy.world terms of service UPDATED AS OF OCTOBER 19 2025
-
Rule 5: Keep it civil. It's OK to say the subject of an article is behaving like a (pejorative, pejorative). It's NOT OK to say another USER is (pejorative). Strong language is fine, just not directed at other members. Engage in good faith and with respect! Accusing another user of being a bot or paid actor also counts as uncivil. Trolling is uncivil and is grounds for removal and/or a community ban.
Similarly, if you see posts along these lines, do not engage. Report them, block them, and live a happier life than they do. We see too many slapfights that boil down to "Mom! He's bugging me!" and "I'm not touching you!" Going forward, slapfights will result in removed comments and temp bans to cool off.
-
Rule 6: Memes, spam, other low-effort posting, reposts, misinformation, advocacy of violence, off-topic content, trolling, offensive content, or content about the moderators or meta may be removed at any time.
-
Rule 7: We didn't USE to need a rule about how many posts one could make in a day, then someone posted NINETEEN articles in a single day. Not comments, FULL ARTICLES. If you're posting more than, say, 10 or so, consider going outside and touching grass. We reserve the right to limit over-posting so a single user does not dominate the front page.
We ask that users report any comment or post that violates the rules, and that they use critical thinking when reading, posting, or commenting. Users who post off-topic spam, advocate violence, have multiple comments or posts removed, weaponize reports, or violate the code of conduct will be banned.
All posts and comments will be reviewed on a case-by-case basis. This means that some content that violates the rules may be allowed, while other content that does not violate the rules may be removed. The moderators retain the right to remove any content and ban users.
Lemmy World Partners
News !news@lemmy.world
Politics !politics@lemmy.world
World Politics !globalpolitics@lemmy.world
Recommendations
For Firefox users, there is a media bias / propaganda / fact-check plugin.
https://addons.mozilla.org/en-US/firefox/addon/media-bias-fact-check/
- Consider including the article’s mediabiasfactcheck.com/ link
If we increase an LLM’s predictive utility it becomes less interesting, but if we make it more interesting it becomes nonsensical (since it can less accurately predict typical human outputs).
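(If you're curious what that dial looks like in practice, here is a minimal sketch of temperature sampling, assuming numpy and made-up logits; low temperature makes the output maximally predictable, high temperature makes it more surprising but less coherent.)

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng()):
    """Sample a next-token index from model logits at a given temperature."""
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# toy logits for four hypothetical next tokens
logits = [4.0, 2.0, 1.0, 0.5]
print([int(sample_next_token(logits, t)) for t in (0.1, 1.0, 5.0)])
```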
Humans, however, can be interesting without resorting to randomness, because they have subjectivity, which grants them a unique perspective that artists simply attempt (and often fail) to capture.
Anyways, however we eventually create an artificial mind, it will not be with a large language model; by now, that much is certain.
Ah, but if there's no random element to human cognition, it should produce the exact same output time and time again. What is not random is deterministic.
Biologically, there's an element of randomness to neurons firing. If they fire too randomly, that's a seizure. If they don't ever fire spontaneously, you're in a coma. How they produce ideas is nowhere close to being understood, but there's going to be an element of an ordered pattern of firing spontaneously emerging. You can see a bit of that with imaging, even.
It does seem to be dead-ending as a technology, although the definition of "mind" is, as ever, very slippery.
The big AI/AGI research trend is "neuro-symbolic reasoning", which is a fancy way of saying embedding a neural net deep in a normal algorithm that can be usefully controlled.
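One way to read "embedding a neural net deep in a normal algorithm" is a learned heuristic driving a classical search loop. A rough sketch under that reading (the `neural_score`, `expand`, and `goal_test` callables are placeholders, not any particular system's API; states are assumed hashable):

```python
import heapq
import itertools

def neuro_symbolic_search(start, goal_test, expand, neural_score):
    """Best-first search: a learned model only ranks candidate states,
    while the surrounding symbolic loop keeps control and correctness."""
    counter = itertools.count()  # tie-breaker so states themselves are never compared
    frontier = [(neural_score(start), next(counter), start)]
    seen = {start}
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if goal_test(state):
            return state
        for nxt in expand(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (neural_score(nxt), next(counter), nxt))
    return None  # search space exhausted
```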
I didn’t say there’s no randomness in human cognition. I said that the originality of human ideas is not a matter of randomized thinking.
Randomness is everywhere. But it’s not the “randomness” of an artist’s thought process that accounts for the originality of their creative output (and is usually detrimental to it).
For LLMs, the opposite is true.
Actually, it seems pretty likely randomness is a central part of a human coming up with an idea.
Consider the following question: “why did you write something sad?”
Maybe the sadness is random. (That’s depression for you.) But it doesn’t change the fact that the subjective nature of sadness fuels creative decisions. It is why characters in a novel do so and so, and why their feelings are described in a way that is original and yet eerily familiar — i.e., creatively.
So, here’s how I understand this claim. There are two readings.
Interpretation (1) means randomness is background noise cancelled out at scale. We would still ask why some people are more creative than others (or why some planets are redshifted compared to others), and presumably we have more to say than “luck,” since the chance that Shakespeare wrote his plays at random is zero.
Interpretation (2) suggests that creativity doesn’t exist and this whole conversation is senseless.
Well, what is creativity? Does it have to be transcendent? Or does it just mean original and useful or coherent, like in the paper? If it's the latter, a collection of cells can be creative, and an extremely large mathematical system embodied in a GPU could also, potentially, be creative. It's just a matter of being able to reach the creative concept (probably somewhat randomly), without outputting incoherent garbage first.
Isn't that what coming up with an idea feels like? Wandering through the space of concepts until everything clicks together all of a sudden?
This goes towards answering your other reply, too. I have no idea what it's "like" to be an LLM, and how much it differs from "being" nothing, but if experience (for the sake of argument) is necessary to output decent art, then isn't an AI replacing artists evidence it has an experience? That is something that has empirically happened, at least for some kinds of artists and to some degree.
I can only speak about the literary world, and I was quite sanguine about ChatGPT in the early days, before I learned about how LLMs actually work. Having experimented with these tools extensively, I am certain that not a single page of good fiction has ever been produced by these statistical models. Their banality is almost uncanny — unless you know how they work, in which case it makes sense.
Now to be fair, fewer than 1 in 100 people can write fiction well, and fewer than 1 in 10,000 can do it at a level I’d consider “art” (as opposed to amateur dabbling).
LLMs are limited by the mathematics of their design. They’re just tracking weighted probabilities for what word comes next. That’s why they’re so good at corpospeak and technical writing, and so utterly worthless and cringey at writing fiction (or “art”).
Sure. And a hundred monkeys with typewriters could reproduce the works of Shakespeare. Like you said, the issue is how to do it consistently and not in an infinite sea of garbage, which is what would happen if you increase stochasticity in service of originality. It’s a design limitation.
The same thing that it’s “like” to be a fax machine. They’re not significantly different, and you can literally program an LLM inside a fax machine if you wanted to.
Anyway, leaving you with the thought that you can’t compare “a collection of cells” to digital computers for two reasons.
Cellular activity is the domain of biologists, who do not study creativity or art. We have absolutely no idea how the tiny analog machinery of multicellular organisms gives rise to consciousness.
Comparing digital stuff to analog stuff is a category error.
“If a collection of cells can be creative, why not a mathematical system in a GPU?”
“If a collection of cells can be creative, why not cheeseburgers?”
In both cases the answer is potato.
Biological neurons are actually more digital than artificial neural nets are. They fire with equal intensity, or don't fire (that at least is well understood). Meanwhile, a node in your LLM has an approximately continuous range of activations.
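A toy illustration of that contrast, assuming Python (the threshold and the sigmoid are stand-ins, not real cell parameters):

```python
import math

def biological_spike(membrane_potential, threshold=1.0):
    # all-or-none: the neuron fires at full strength or not at all
    return 1.0 if membrane_potential >= threshold else 0.0

def ann_activation(x):
    # an artificial "neuron" outputs a value from a continuous range (sigmoid here)
    return 1.0 / (1.0 + math.exp(-x))

for v in (0.5, 0.99, 1.01, 3.0):
    print(v, biological_spike(v), round(ann_activation(v), 3))
```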
That's leaving out most of the actual complexity. There are gigabytes or terabytes of mysterious numbers playing off of each other to decide the probabilities of each word in an LLM, and it's looking at quite a bit of previous context. A human author also has to decide the next word to type repeatedly, so it doesn't really preclude much.
If you just go word-by-word or few-words-by-few-words straightforwardly, that's called a Markov chain, and they rarely get basic grammar right.
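For comparison, a minimal word-level Markov chain is about this much code (toy corpus; the point is how little context it conditions on):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=20, rng=random.Random(0)):
    """Walk the chain one word at a time, with no memory beyond the last word."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_chain(corpus), "the"))
```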
Sure, we agree on that. Where we maybe disagree is on whether humans experience the same kind of tradeoff. And then we got a bit into unrelated philosophy of mind.
Absolutely, although it'd have to be more of an SLM to fit. You don't think the exact hardware used is important though, do you? Our own brains don't exactly look like much.
There are two types of computers.
Digital means reducible to a Turing machine. Analog, which includes things like flowers and cats, means irreducible by definition. (Otherwise, they would be digital.)
Brains are analog computers (maybe with some quantum components we don’t understand).
Making a mathematical model of an analog computer is like taking a digital picture of a flower. That picture is not the same as the flower. It won’t work the same way. It will not produce nectar, for instance, or perform photosynthesis.
Everything about how a neuron works is completely undigitizable. There’s integration at the axon hillock; there are gooey vesicles full of neurotransmitters whose expression is chemically mediated, dumped into a synaptic cleft of constantly varying width, relying on Brownian motion to activate receptors whose binding affinity isn’t even consistent. The best we can do is build mathematical models that sort of predict what happens next on average.
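For what it’s worth, the simplest of those “predict what happens next on average” models is something like a noisy leaky integrate-and-fire unit; a rough sketch with made-up constants, nothing fitted to a real cell:

```python
import numpy as np

def lif_spike_times(i_input=1.2, noise=0.3, threshold=1.0, tau=10.0,
                    dt=0.1, steps=5000, rng=np.random.default_rng(0)):
    """Leaky integrate-and-fire: the potential decays toward rest, is driven by
    input current plus noise, and emits a spike (then resets) at threshold."""
    v, spikes = 0.0, []
    for step in range(steps):
        v += (-v + i_input + noise * rng.standard_normal()) * (dt / tau)
        if v >= threshold:
            spikes.append(step * dt)
            v = 0.0  # reset after the spike
    return spikes

print(len(lif_spike_times()), "spikes in 500 ms of simulated time")
```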
These crude neural maps are not themselves engaged in brain activity — the map is not the territory.
Idk where you got the idea that neurons can be digitized, but someone lied to you.
I'm not trying to be cheeky or dismissive, but: https://en.wikipedia.org/wiki/Analog_signal
It's not about irreducibility - that's not a feature any part of physics has. Even quantum states can be fully simulated by a digital computer, just with prohibitive (i.e., exponential in qubits) overhead. It's about continuous vs. discrete, and a very large number of discrete states can become indistinguishable from continuousness. Sometimes provably.
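(The "exponential in qubits" bit is literal: a brute-force statevector simulation stores 2^n complex amplitudes. Back-of-envelope, assuming 16-byte complex128 amplitudes:)

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    # 2**n complex amplitudes, 16 bytes each for complex128
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    print(n, "qubits ->", statevector_bytes(n) / 1e9, "GB")
```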
It's true that the internal functions that determine whether neurons fire are poorly understood. Once we have that data it will absolutely be possible to simulate, though. It's long been done for individual organoids, and at this point the hardware has scaled enough to look at doing an entire bacterium and its nearby environment. If the interactions of a random patch of water molecules can be neglected - and usually biochemists do so - that software could be made much, much lighter yet.
I'd like to point out Earth's weather systems are continuous, bigger and far more chaotic. If biology was irreducible, meteorology would be as well.
I explicitly explained that you can model an analog machine using a digital computer. When you make a topological map of a weather system (or a brain) or take a digital picture of a flower, you are generating a model. This is the subject of the articles you linked me.
No matter how accurate your digital model of a weather system, however, it will never produce rain. The byproduct of Turing machines (digital models) is strictly discrete.
You can model digital computers using analog computers. And the reverse is also possible. But digital systems are substrate-independent, whereas analog systems are substrate-dependent. They’re fundamentally inextricable from the stuff of which they’re made.
On the other hand, digital models aren’t made of stuff. They’re abstract. You can certainly instantiate a digital model within a physical substrate (silicon chips), the way you can print a picture of an engine on a piece of paper, but it won’t produce torque like an actual engine let alone rain like an actual weather system.
On a separate note, you reallllly need to acquaint yourself with Complexity Theory, if you actually believe our models will ever be anything other than decent estimates.
To learn more, please take a Theoretical Computer Science course.
Correct. It’s theoretical computer science. Again, analog systems are irreducible to digital ones by definition. They can only be modeled (functionally and crudely).
If how exactly it's implemented matters, regardless of similarity in internal dynamics and states, and there's an imminent tangibility to it like rain or torque, I think you're actually talking about a soul.
Behaviorally, analog systems are not substrate dependent. The same second-order differential equation describes RLC circuits, audio resonators, and a ball on a spring, for example.
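For reference, the shared equation in its standard textbook form (symbols as usually defined):

```latex
m\,\ddot{x} + c\,\dot{x} + k\,x = 0 \qquad \text{(mass on a spring, with damping)}
L\,\ddot{q} + R\,\dot{q} + \tfrac{1}{C}\,q = 0 \qquad \text{(series RLC circuit)}
% identical form under x \leftrightarrow q,\ m \leftrightarrow L,\ c \leftrightarrow R,\ k \leftrightarrow 1/C
```

Same dynamics, different substrate.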
Analog AI chips exist, FWIW.
If you're looking at complexity theory, I'm pretty sure all physics is in EXPTIME. That's a strong class, which is why we haven't solved every problem, but it's still digital, and there are stronger ones that can come up, like with Presburger arithmetic. Weird fundamentally-continuous problems exist, and there was a pretty significant result in theoretical quantum computer science about it this decade, but actual known physics is very "nice" in a lot of ways. And yes, that includes having numerical approximations to an arbitrary degree of precision.
To be clear, there are still a lot of problems with the technology, even if it can replace a graphic designer. Your screenshot is a great example of hallucination (particularly the bit about practical situations), or just echoing back a sentiment that was given.
This is partly true, as I already explained at length, since the behavior of any system can be crudely modeled. It’s how LLMs work! But it’s also a non-sequitur.
Modeling what a system can do and doing what a system can do are not the same.
What's the difference?
A map isn't the territory, but there's no such thing as a tangible mind you can hold (or there is, and we're arguing about mysticism, which isn't really a good use of my time). As far as I can see, it's all maps.