this post was submitted on 28 Jun 2025
961 points (94.9% liked)

Technology


We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which token (a word or word fragment) will come next in a sequence, based on the data it’s been trained on.
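The "statistical parrot" the article describes can be illustrated with a deliberately crude sketch: a bigram model that guesses the next word purely from frequency counts in its training text. Real LLMs use neural networks over subword tokens rather than word-count tables, but the underlying principle of picking a likely continuation from patterns in the data is the same. The corpus and function names here are invented for the illustration.

```python
# Toy illustration of next-token prediction: guess the next word
# from frequency counts alone, with no understanding of meaning.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def guess_next(following, word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(guess_next(model, "sat"))  # prints "on": the only word ever seen after "sat"
```

The model produces plausible continuations without any notion of cats or mats; scale the counts up to a neural network trained on trillions of tokens and you get fluent text, but the same absence of understanding the article argues for.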

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

47 comments
[–] ShotDonkey@lemmy.world 0 points 1 week ago (4 children)

I disagree with this notion. I think it's dangerously irresponsible to just assume AI is stupid. Everyone should also assume that, with a certain probability, AI can become dangerously self-aware. I recommend everyone read what Daniel Kokotajlo, a former OpenAI employee, predicts: https://ai-2027.com/

[–] A_norny_mousse@feddit.org -1 points 1 week ago* (last edited 1 week ago) (5 children)

Thank You! Yes!

So ... A-not-I? AD? What do we call it? LLM seems too specialised?

[–] JollyG@lemmy.world 1 points 1 week ago

Word guessing machine.

[–] lena@gregtech.eu 1 points 1 week ago* (last edited 1 week ago) (1 children)

AS - artificial stupidity

ASS - artificial super stupidity

[–] A_norny_mousse@feddit.org 1 points 1 week ago

Both are good 👍

[–] hera@feddit.uk -2 points 1 week ago (49 children)

Philosophers are so desperate for humans to be special. How is outputting things based on things it has learned any different to what humans do?

We observe things, we learn things and when required we do or say things based on the things we observed and learned. That's exactly what the AI is doing.

I don't think we have achieved "AGI" but I do think this argument is stupid.

[–] NotASharkInAManSuit@lemmy.world -1 points 1 week ago* (last edited 1 week ago)

Pointing out that humans are not the same as a computer or piece of software, on a fundamental level of form and function, is hardly philosophical. It's just basic awareness of what a person is and what a computer is. We can't say for sure how things work in our brains, yet you evangelize that computers are capable of exactly the same thing, but better, and then accuse others of not understanding what they're talking about?

[–] postman@literature.cafe -3 points 1 week ago (3 children)

So many confident takes on AI by people who've never opened a book on the nature of sentience, free will, intelligence, philosophy of mind, brain vs mind, etc.

There are hundreds of serious volumes on these, not to mention the plethora of casual pop science books with some of these basic thought experiments and hypotheses.

Seems like more and more incredibly shallow articles on AI are appearing every day, which is to be expected with the rapid decline of professional journalism.

It's a bit jarring and frankly offensive to be lectured 'at' by people who are obviously on the first step of their journey into this space.

[–] Buffalox@lemmy.world -3 points 1 week ago (6 children)

That headline is a straw man; the article actually argues about general AI, which would also have consciousness.
The current state of AI is definitely intelligent, but it's not GAI.
Bullshit headline.
