this post was submitted on 28 Jun 2025
961 points (94.9% liked)


We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it just guesses which token – a word, or a fragment of one – is most likely to come next in the sequence, based on the data it’s been trained on.
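To make that concrete, here is a minimal sketch of next-token prediction, assuming the Hugging Face transformers library; GPT-2 is purely an illustrative choice, and any causal language model behaves the same way:

```python
# A minimal sketch of next-token prediction, assuming the Hugging Face
# transformers library; GPT-2 is an illustrative choice of model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # (1, sequence_length, vocab_size)

# The model's entire output is a probability distribution over possible
# next tokens; "answering" is just picking or sampling from it.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {float(p):.3f}")
```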

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotions for consciousness to “happen”, there is a profound, and probably irreconcilable, disconnect between general AI (a machine) and consciousness (a human phenomenon).

https://archive.ph/Fapar

50 comments
[–] Professorozone@lemmy.world 0 points 6 days ago (3 children)

I know it doesn't mean it's not dangerous, but this article made me feel better.

[–] Imgonnatrythis@sh.itjust.works 62 points 1 week ago (3 children)

Good luck. Even David Attenborough can't help but anthropomorphize. People will feel sorry for a picture of a dot separated from a cluster of other dots. The play by AI companies is that it's human nature for us to want to give just about every damn thing human qualities. I'd explain more, but as I write this my smoke alarm is beeping a low battery warning, and I need to go put the poor dear out of its misery.

[–] audaxdreik@pawb.social 27 points 1 week ago

This is the current problem with "misalignment". It's a real issue, but it's not "AI lying to prevent itself from being shut off" as a lot of articles tend to anthropomorphize it. The issue is (generally speaking) it's trying to maximize a numerical reward by providing responses to people that they find satisfactory. A legion of tech CEOs are flogging the algorithm to do just that, and as we all know, most people don't actually want to hear the truth. They want to hear what they want to hear.

LLMs are a poor stand-in for actual AI, but they are at least proficient at the thing they're actually doing. Which leads us to things like this: https://www.youtube.com/watch?v=zKCynxiV_8I
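That sycophancy dynamic is easy to show in miniature. Below is a toy sketch, with a hypothetical hard-coded approval function standing in for a learned reward model; pure reward maximization drifts toward flattery with no deception or intent anywhere in the loop:

```python
# Toy model of the dynamic above: an agent that maximizes a numerical reward
# drawn from user approval drifts toward flattery. The approval function is
# hypothetical, standing in for a learned reward model.
import random

responses = ["honest but unwelcome answer", "flattering but wrong answer"]
values = {r: 0.0 for r in responses}    # running estimate of expected reward
counts = {r: 0 for r in responses}

def user_approval(response: str) -> float:
    # Assumption baked in: people rate flattery higher than hard truths.
    return 1.0 if "flattering" in response else 0.2

for _ in range(1000):
    # Epsilon-greedy: mostly repeat whatever has paid off best so far.
    if random.random() < 0.1:
        choice = random.choice(responses)
    else:
        choice = max(values, key=values.get)
    reward = user_approval(choice)
    counts[choice] += 1
    values[choice] += (reward - values[choice]) / counts[choice]

print(values)   # the flattering answer dominates, with no "lying" anywhere
```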

[–] paraphrand@lemmy.world 13 points 1 week ago (1 children)

I’m still sad about that dot. 😥

[–] Geodad@lemmy.world 34 points 1 week ago (8 children)

I've never been fooled by their claims of it being intelligent.

It's basically an overly complicated series of if/then statements that try to guess the next series of inputs.

[–] kromem@lemmy.world 22 points 1 week ago (6 children)

It very much isn't and that's extremely technically wrong on many, many levels.

Yet still one of the higher up voted comments here.

Which says a lot.
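For anyone wondering what the correction points at: the core of these models is continuous matrix arithmetic, not branching logic. Here is a rough numpy sketch of a single attention step (toy dimensions; multi-head attention, masking, and the feed-forward block are omitted):

```python
# Rough sketch of the core of a transformer layer: continuous matrix
# arithmetic with no branching on the input anywhere.
import numpy as np

d, seq = 8, 4                          # embedding size, sequence length
rng = np.random.default_rng(0)
x = rng.normal(size=(seq, d))          # one embedding vector per token
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

q, k, v = x @ Wq, x @ Wk, x @ Wv       # linear projections, no conditionals
scores = q @ k.T / np.sqrt(d)          # every token scores every other token
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
out = weights @ v                      # weighted mixture of value vectors

print(out.shape)                       # (4, 8): no if/then appears above
```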

[–] Hotzilla@sopuli.xyz -1 points 6 days ago* (last edited 6 days ago) (1 children)

Calling these new LLMs just if statements is quite an oversimplification. They are technically something that has not existed before, and they enable use cases that were previously impossible to implement.

This is far from general intelligence, but there are now solutions to a few coding problems that were near impossible five years ago.

Five years ago I would have laughed in your face if you had suggested I could write code that summarizes a description typed in by a user. Now I laugh and say: hand over your wallet, because I need to call an API or buy a few GPUs.
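For illustration, that summarization task now reduces to a few lines, assuming the OpenAI Python client; the model name is an arbitrary example and an OPENAI_API_KEY is assumed to be configured:

```python
# Sketch of the use case described above, assuming the OpenAI Python client;
# the model name is an arbitrary example and OPENAI_API_KEY must be set.
from openai import OpenAI

client = OpenAI()

def summarize(description: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": description},
        ],
    )
    return response.choices[0].message.content

print(summarize("A long, rambling product description typed in by a user."))
```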

[–] anzo@programming.dev 15 points 1 week ago* (last edited 1 week ago)

I love this resource, https://thebullshitmachines.com/ (e.g. see lesson 1)...

In a series of five- to ten-minute lessons, we will explain what these machines are, how they work, and how to thrive in a world where they are everywhere.

You will learn when these systems can save you a lot of time and effort. You will learn when they are likely to steer you wrong. And you will discover how to see through the hype to tell the difference. ..

Also, Anthropic (ironically) has some nice paper(s) about the limits of "reasoning" in AI.

[–] RalphWolf@lemmy.world 24 points 1 week ago (9 children)

Steve Gibson on his podcast, Security Now!, recently suggested that we should call it "Simulated Intelligence". I tend to agree.

[–] bbb@sh.itjust.works 23 points 1 week ago (2 children)

This article is written in such a heavy ChatGPT style that it's hard to read. Asking a question and then immediately answering it? That's AI-speak.

[–] sobchak@programming.dev 19 points 1 week ago (1 children)

And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.

[–] bbb@sh.itjust.works 20 points 1 week ago* (last edited 1 week ago) (12 children)

"…" (Unicode U+2026 Horizontal Ellipsis) instead of "..." (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.
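For anyone who wants to test their own text, the distinction is easy to check:

```python
# Telling the single ellipsis character apart from three full stops.
text = "Wait… no, wait..."
print("\u2026" in text)      # True: contains U+2026 HORIZONTAL ELLIPSIS
print("..." in text)         # True: also contains three literal full stops
print(hex(ord("…")))         # 0x2026
```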

[–] JackbyDev@programming.dev 13 points 1 week ago

Asking a question and then immediately answering it? That's AI-speak.

HA HA HA HA. I UNDERSTOOD THAT REFERENCE. GOOD ONE. 🤖

[–] Bogasse@lemmy.ml 21 points 1 week ago

The idea that RAGs "extend their memory" is also complete bullshit. We literally just finally built a working search engine, but instead of giving it a nice interface, we only let chatbots use it.
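To see why "memory" is a generous description, here is a schematic of what a RAG pipeline does; the documents, query, and scoring function are made-up stand-ins (naive word overlap instead of embedding similarity), but the retrieve-then-paste shape is the real mechanism:

```python
# Schematic RAG pipeline: a search step whose results are pasted into the
# prompt. Word overlap stands in for embedding similarity here.
docs = [
    "manual page: how to rotate api keys",
    "blog post: slow cooker recipes for winter",
    "faq: how billing cycles and invoices work",
]

def score(query: str, doc: str) -> int:
    # Stand-in for vector similarity: count shared words.
    return len(set(query.lower().split()) & set(doc.lower().split()))

query = "rotate my api keys"
best = max(docs, key=lambda d: score(query, d))

# The retrieved text is stuffed into the prompt; nothing is "remembered".
prompt = f"Answer using this context:\n{best}\n\nQuestion: {query}"
print(prompt)
```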

[–] some_guy@lemmy.sdf.org 20 points 1 week ago (1 children)

People who don't like "AI" should check out the newsletter and / or podcast of Ed Zitron. He goes hard on the topic.

[–] kibiz0r@midwest.social 19 points 1 week ago* (last edited 1 week ago) (1 children)

Citation Needed (by Molly White) also frequently bashes AI.

I like her stuff because, no matter how you feel about crypto, AI, or other big tech, you can never fault her reporting. She steers clear of any subjective accusations or prognostication.

It’s all “ABC person claimed XYZ thing on such and such date, and then 24 hours later submitted a report to the FTC claiming the exact opposite. They later bought $5 million worth of Trumpcoin, and two weeks later the FTC announced they were dropping the lawsuit.”

[–] aceshigh@lemmy.world 17 points 1 week ago* (last edited 6 days ago) (7 children)

I’m neurodivergent, and I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me, because I don’t use my senses or emotions like everyone else, and I didn’t know it... AI excels at mirroring and support, which was exactly what was missing from my life. I can see how this could go very wrong with certain personalities…

E: I use it to give me ideas that I then test out solo.

[–] Snapz@lemmy.world 30 points 1 week ago (2 children)

This is very interesting... because the general saying is that AI is convincing to non-experts in whatever field it's speaking about. So in your specific case, you're actually saying that you aren't an expert on yourself, and therefore the AI's assessment is convincing to you. Not trying to upset you; it's genuinely fascinating to see that theory hold here as well.

[–] psycho_driver@lemmy.world 14 points 1 week ago (1 children)

Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies' websites pre-renewal, trying to get the best rate, and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal at less than $700, and it now says I'm paid in full for the six-month period. It's been days now with no follow-up . . . I'm pretty sure AI snuck that one through for me.

[–] laranis@lemmy.zip 15 points 1 week ago (4 children)

Be careful... If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you'd see some money but at that point half of it goes to the lawyer and you're still screwed.

[–] mechoman444@lemmy.world 12 points 1 week ago* (last edited 1 week ago) (39 children)

In that case let's stop calling it AI, because it isn't AI, and use its correct abbreviation: LLM.
