this post was submitted on 12 May 2026
394 points (98.0% liked)

https://www.theguardian.com/technology/2026/may/05/richard-dawkins-ai-consciousness-anthropic-claude-openai-chatgpt

Video discussion of this event by Steve Shives (known for his Star Trek videos, but he also covers politics): https://m.youtube.com/watch?v=6aMQAv-JYpk

[–] TargaryenTKE@lemmy.world 10 points 21 hours ago* (last edited 21 hours ago) (2 children)

Anyone who's even slightly interested in the idea of a Chinese Room (or just good sci-fi), PLEEEASE go out and read Blindsight by Peter Watts. Not only is it a phenomenal deep-dive into what consciousness even is, but it's got dozens of fantastic ideas in it that could make for compelling stories on their own. Also, scientifically-plausible vampires in space! That is all

[–] daannii@lemmy.world 8 points 20 hours ago* (last edited 20 hours ago) (3 children)

One of my top 5 books. It's also free to read online. https://www.rifters.com/real/Blindsight.htm

It in no way supports the idea that LLMs can be sentient. And despite the book's argument that consciousness and awareness can be missing in an advanced species capable of space travel, I don't actually believe that's true. But I enjoy the argument and the speculation.

The book is highly researched and even contains a reference list of legitimate research articles. However, it is a work of fiction, and the writer took artistic liberties where needed, favoring an interesting story over the facts.

For instance: a brain cannot contain two or more personalities, because a personality is a whole-brain affair.

But it's an interesting argument about cultural designations of what counts as mental illness.

It's also why I do not think a space-traveling species could exist without consciousness.

Because: motivation.

It's that simple.

An organism can be shaped behaviorally by the environment. That's part of evolution. And this shaping can be unconscious.

But at some point, creative construction and the ambition to trade one's given, optimal environment for a less optimal one (space) must be an intentional effort.

The scientific research and experimentation required to build complex machines require a thinking, understanding mind, because they demand critical thinking.

Critical thinking and creativity are characteristics that require a sense of self.

Even in our own history we see that it takes a specific type of person to pursue scholarly work. People who are less conformist are generally more capable of new inventions, research, and challenging the accepted beliefs of the masses. We never see the most rule-following conformists being these people.

If everyone were like that, we wouldn't survive. So a diversity of mental proclivities within a species is necessary for advancement; otherwise, the species would settle into merely optimal survival and stagnate.

Think of the horseshoe crab as an example.

Furthermore, I am a researcher in perception, and the field of perception is often drawn on in explorations of what consciousness is.

There are many definitions, but the sense of self is one, and a popular one.

Higher, more complex perception creates a sense of self.

It's a product of the system.

The book does discuss this a bit.

I need to know that my body and my actions are not the same as yours; that you stand there and I stand over here.

I can perform an action and you can perform a different one that is unknown to me and not within my control.

This understanding of separateness, of "this is what I'm experiencing and this is where I am (spatially)," is something that would always emerge from higher perception, such as that of most animals.

Maybe not in plants, fungi, bacteria, single cell microbes, etc.

But there are arguments and evidence for some of those examples as well.

As a final point (I doubt anyone read all this):

Most people who think a probability model (current AI) is capable of consciousness usually have an incredibly simplified view of how the brain processes information.

They follow old-school "behaviorist" perspectives, or the "black box" view of brain functioning.

But a neuroscientist will tell you it's not simple at all. It's not info in, info out.

The system is changed, biologically, by the input.

The same input given twice will result in a different output the 2nd time.

And the 3rd. And how frequently the input is given, or its temporal relation to other stimuli, will also change the output.

This is because the organic brain learns. And this learning is a biological change in the actual neural structures (connections) and in neurons' firing potentials. Every single moment, the brain is physically, biologically changing.

Computations in the brain don't use exact math; it's all estimates (heuristics). How these computations are made is not well understood, and they don't work as predicted.

There are always too many factors.

Individual motivations, including personality traits, are also a factor in how information is processed.

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

https://en.wikipedia.org/wiki/Need_for_cognition

https://en.wikipedia.org/wiki/Gray%27s_biopsychological_theory_of_personality

https://en.wikipedia.org/wiki/Binding_problem

https://en.wikipedia.org/wiki/Neural_coding

https://en.wikipedia.org/wiki/Hebbian_theory
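To put the "same input twice, different output" point in concrete terms, here is a toy sketch (mine, not from the book or the linked articles) of the Hebbian idea: each presentation of an identical stimulus strengthens the connection, so the response to that same stimulus drifts across presentations. Real plasticity is vastly messier; this only illustrates that the system itself changes with every input.

```python
# Toy Hebbian plasticity: repeating the exact same input changes the "synapse",
# so the output to that input differs on the 2nd and 3rd presentation.
# Illustrative sketch only; real neural learning involves far more factors.

def present(stimulus: float, weight: float, learning_rate: float = 0.1):
    response = weight * stimulus                   # postsynaptic activity
    weight += learning_rate * stimulus * response  # "fire together, wire together"
    return response, weight

w = 0.5
for trial in range(1, 4):
    out, w = present(1.0, w)
    print(f"trial {trial}: response={out:.3f}, weight now {w:.3f}")

# Responses to the identical stimulus: 0.500, then 0.550, then 0.605 --
# the input itself has reshaped the system.
```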

[–] bbb@sh.itjust.works 1 points 4 hours ago (1 children)

It's interesting that you point to https://en.wikipedia.org/wiki/Hard_problem_of_consciousness when the term was coined by David Chalmers, who published "Could a Large Language Model be Conscious?". From the abstract:

I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that successors to large language models may be conscious in the not-too-distant future.

So are we all just arguing about how likely it is, or are you arguing that current AI systems are definitely not conscious? If the latter, what do you think about the not-too-distant future ones?

But a neuroscientist will tell you it’s not simple at all. It’s not info in, info out.

The system is changed, biologically, by the input.

The same input given twice will result in a different output the 2nd time.

And the 3rd. And how frequently the input is given, or its temporal relation to other stimuli, will also change the output.

I thought online learning was possible with current LLMs, just not worth the cost. I mean, you can at least fine-tune offline based on previous outputs and feedback, e.g. RLHF. I feel like maybe neither should count, but I can't say why exactly. Not many end users bother with fine-tuning anymore because there are usually more effective alternatives like RAG.

What do you think about agentic systems, i.e. running an LLM in a loop with a scratchpad and tools? They just write their "memories" into text files, but if you consider those text files part of the system, then the input does technically change the system. Of course, you could argue that doesn't count because it's no different to changing the input. So to count, it would have to store neuralese or a LoRA or something?
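For what it's worth, here's a bare-bones sketch of that kind of loop (hypothetical names throughout: `call_llm` stands in for whatever model API you'd use, and `memory.txt` is the scratchpad, not any particular framework). The weights stay frozen, but because the scratchpad is folded back into the prompt, sending the same user input twice never actually gives the model the same input twice.

```python
import os

MEMORY_FILE = "memory.txt"  # hypothetical scratchpad the agent appends to


def call_llm(prompt: str) -> str:
    """Stand-in for a real model API call; not any specific library."""
    return f"(model reply based on {len(prompt)} characters of context)"


def read_memory() -> str:
    if os.path.exists(MEMORY_FILE):
        with open(MEMORY_FILE) as f:
            return f.read()
    return ""


def agent_step(user_input: str) -> str:
    # The frozen weights only ever see the prompt, but the prompt includes the
    # scratchpad, so the effective input differs even when user_input repeats.
    prompt = (
        "Notes so far:\n" + read_memory()
        + "\n\nUser: " + user_input
        + "\nReply, then suggest one line to add to the notes."
    )
    reply = call_llm(prompt)
    with open(MEMORY_FILE, "a") as f:
        f.write(f"- handled: {user_input!r}\n")
    return reply


print(agent_step("recommend a book"))
print(agent_step("recommend a book"))  # same user input, different effective input
```

Whether that counts as the system changing, or just as a longer input, is exactly the question; storing the update in a LoRA instead of a text file wouldn't obviously change the logic, only the substrate.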

[–] daannii@lemmy.world 1 points 3 hours ago* (last edited 3 hours ago)

Agentic systems are definitely more sophisticated, but still just directed programming.

Humans do not learn like machines learn.

I've already explained that the exact same input, put in twice into a human, will not result in the same exact output.

But it would for a model where nothing has changed.

I also gave links to the binding problem and biopsychology of personality and how traits change how information is processed in humans.

I didn't even go into neural noise or brain oscillations, but that's a whole other factor in how information is processed.

Computers don't have any of that. They don't actually perceive or understand anything.

This is why a human can produce novel problem-solving strategies.

And apply unrelated knowledge to new problems.

We can think outside the box without producing more nonsense than useful outputs.

Machines produce mostly nonsense when parameters are relaxed.

Also, Chalmers is saying he thinks that, potentially, in the future someone could create an artificial intelligence, and it may in part use LLMs.

That's just him having an open mind about it.

I don't share his sentiments. But I admit I'm open to changing my mind if I see some very convincing evidence that works with current knowledge and theories of neuroscience.

Because I'm not convinced that something is sentient just because "it looks real" or "sounds like a person".

It has to function in ways that would lead to evolution outside of human intervention and control, with systems that would create a sense of self and understanding.

Mathematical formulas cannot do either of those things.

A program directed by code a human put in cannot do those things.

It's like CGI. It can look very realistic, but it's not actually a real person.

Even when motion capture is used, it's still just a program mimicking human movements because someone (a human) told it to.

[–] HeyThisIsntTheYMCA@lemmy.world 4 points 17 hours ago

eeeee! thank you for the link! i have too much good stuff to read now, in part thanks to you and @TargaryenTKE@lemmy.world (thank you both so much! i might disappear for a week into books but i promise to pop in for air). If i didn't have a good choosing algorithm by now i'd be in analysis paralysis (for relatively trivial decisions: if you have multiple equally good options, flip a coin. use chwazi. roll a die. whatever works for that number. if, while doing the random number generator you find yourself hoping for a specific option, you know what you really want. if not, go with the random choice. you're equally happy with all of them so what do you care if you randomly go with number eight? go with number eight.) One of the best problems to have (too many good choices).
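That coin-flip trick translates almost directly into code; here's a playful sketch of the heuristic as described (nothing more than that): pick at random, and if you catch yourself wishing for a different option, the wish is the real answer.

```python
import random

def choose(options):
    """Random pick with the 'were you secretly hoping?' escape hatch."""
    pick = random.choice(options)
    print(f"Random pick: {pick}")
    wish = input("Secretly hoping for something else? (type it, or press Enter): ").strip()
    return wish if wish else pick

# e.g. choose(["Blindsight", "Echopraxia", "something else from the pile"])
```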

[–] TargaryenTKE@lemmy.world 2 points 17 hours ago (1 children)

Now what did you think of Echopraxia?

[–] daannii@lemmy.world 1 points 14 hours ago

I'll be honest, I've read Blindsight a few times but I'm pretty sure I only read Echopraxia once, like 10 years ago.

But I re-read the synopsis to refresh my memory.

I remember liking Blindsight more. But not why.

I'm also not sure which story elements I'm remembering came from which book.

Was the whole vampire arc and twist from book 1 or 2?

Can you remind me of a few specific points? Maybe that will jog my memory. Or maybe I just need to re-read it.

[–] baller_w@lemmy.zip 4 points 20 hours ago (1 children)

Literally reading it now. I hit that section last night. I put the book down immediately and started reading about the Chinese Room.

[–] TargaryenTKE@lemmy.world 1 points 17 hours ago

I won't spoil shit, but be sure to have fun with the rest of the book! It's, uh... well, it stuck with me for a while. Also give his other book in the series, Echopraxia, a look as well. In my opinion it wasn't quite as good, but that's like comparing a 9 to an 8.9; they're both incredible.