this post was submitted on 26 Oct 2025
377 points (90.9% liked)

Technology


I came across this article in another Lemmy community that dislikes AI. I'm reposting instead of cross-posting so that we can have a conversation about how "work" might be changing with advancements in technology.

The headline is clickbaity: Altman was referring to how farmers who lived decades ago might perceive the work "you and I do today" (Altman included) as not looking like work.

The fact is that most of us work many levels of abstraction away from basic human survival. Very few of us are farming, building shelters, protecting our families from wildlife, or doing the back-breaking labor jobs that humans were forced to do generations ago.

In my first job, IT support, it was not lost on me that all day long I pushed buttons to make computers beep in friendlier ways. There was no physical result to see, no produce to harvest, no pile of wood transformed from standing timber to a chopped stack, nothing tangible to step back and enjoy at the end of the day.

Bankers, fashion designers, artists, video game testers, software developers, and people in countless other professions experience something quite similar. Yet all of these jobs do, in some way, add value to the human experience.

As humanity's core needs have been met by technology requiring fewer human inputs, our focus has been able to shift to creating value in less tangible, but perhaps no less meaningful, ways. This has created a richer and more dynamic life experience than any of those earlier farming generations could have imagined. So while it doesn't look like the work those farmers were accustomed to, humanity has been able to turn its attention to other types of work for the benefit of many.

I postulate that AI - as we know it now - is merely another technological tool that will allow new layers of abstraction. At one time bookkeepers had to write entries in physical books; now software automatically encodes accounting transactions as they're made. At one time software developers might spend days setting up the framework of a new project; now an LLM can do the bulk of that work in minutes.

These days we have fewer bookkeepers - most companies don't need armies of clerks anymore. But now we have more data analysts who work to understand the information and make important decisions. In the future we may need fewer software coders, and in turn, there will be many more software projects that seek to solve new problems in new ways.

How do I know this? I think history shows us that innovations in technology always bring new problems to be solved. There is an endless reservoir of challenges to be worked on that previous generations didn't have time to think about. We are going to free minds from tasks that can be automated, and many of those minds will move on to the next level of abstraction.

At the end of the day, I suspect we humans are biologically wired with a deep desire to produce rewarding and meaningful work, and much of the output of our abstracted work is hard to see and touch. Perhaps this is why I enjoy mowing my lawn so much, no matter how advanced robotic lawn mowers become.

you are viewing a single comment's thread
[–] MonkderVierte@lemmy.zip -2 points 4 days ago (1 children)

You missed the psychology part?

[–] jungle@lemmy.world 6 points 4 days ago (1 children)

No, I saw it, but I was replying to the "please stop calling it AI" part. This is a computer science term, not a psychology term. Psychologists have no business discussing what computer scientists call these systems.

[–] MonkderVierte@lemmy.zip 0 points 4 days ago* (last edited 4 days ago) (2 children)

What do I even answer here...

Who is even talking about computer scientists? It's the public, and especially company bosses, who form the wrong expectations about "intelligence". It's about psychology, not about scientifically correct names.

[–] jungle@lemmy.world 7 points 4 days ago* (last edited 4 days ago) (1 children)

Ah, I see. We in the software industry are no longer allowed to use our own terms because outsiders co-opted them.

Noted.

[–] sugar_in_your_tea@sh.itjust.works 4 points 4 days ago* (last edited 4 days ago) (1 children)

The solution to the public misusing technical terms isn't to change the technical terms, but to educate the public. All of the following fall under AI:

  • pathing algorithms of computer opponents, but probably not the decisions that computer opponents make (i.e. who to attack; that's usually based on manually specified logic)
  • the speech-to-text your phone used before Gemini or whatever it's called now on Android (Gemini is also AI, just a different type of AI)
  • home camera systems that can detect people vs animals, and sometimes classify those animals by species
  • DDoS protection systems and load balancers for websites probably use some type of AI

AI is a broad field, and you probably interact with non-LLM variants every day, whether you notice or not. Here's a Wikipedia article that goes through a lot of it. LLMs/GPT are merely one small subfield in the larger field of AI.
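To make the first bullet concrete, here's a minimal sketch of a game opponent where the pathing is a tiny breadth-first search (classic AI) while the "who to attack" decision is plain hand-written logic. All the names and the map are made up for illustration:

```python
from collections import deque

# Minimal sketch of a game opponent: find_path() is a tiny breadth-first
# search (classic "AI"), while choose_target() is manually specified logic.
# All names, stats, and the map are hypothetical.

def find_path(grid, start, goal):
    """Return a list of (row, col) steps from start to goal, avoiding walls (1s)."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        current = queue.popleft()
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = current
                queue.append((nr, nc))
    return None  # no path exists

def choose_target(enemies):
    """Hand-written rule, not learned: always attack the weakest enemy."""
    return min(enemies, key=lambda e: e["hp"])

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
target = choose_target([{"name": "knight", "hp": 80}, {"name": "peon", "hp": 20}])
print(target["name"], find_path(grid, (0, 0), (2, 0)))
```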

I don't understand how people went from calling the computer player in their game "AI" (or, even older, "CPU"), which nobody mistook for actual intelligence, to now believing that AI means something is sentient. Maybe it's because LLMs are more convincing since they do a much better job with language, idk, but it's the same category of thing under the hood.

ChatGPT isn't "thinking," and when it claims to "think," it's basically turning a prompt into a set of things to "think" about (it generates and answers related prompts) and then using that set of things in its context to provide an answer. It's not actually "thinking" as people do; it's following a set of statistically-motivated steps based on your prompt to generate a relevant answer. It's a lot more complex than that Warcraft 2 bot you played against as a kid, but it's still following steps a human designed, along with some statistical methods to adapt to things the developer didn't encounter.
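A rough sketch of that loop (generate related sub-prompts, answer them, then answer the original question with those answers in context) might look like the following. `call_model` is a hypothetical stand-in for whatever completion API is being used, and the flow is a deliberate simplification, not any vendor's actual implementation:

```python
# Hypothetical sketch of the "thinking" loop described above: turn a prompt
# into related sub-prompts, answer those, then answer the original prompt
# with the intermediate answers included in the context.

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM completion call (e.g., an HTTP request)."""
    return f"<model output for: {prompt[:40]}...>"

def answer_with_thinking(user_prompt: str) -> str:
    # Step 1: ask the model what it should figure out first.
    plan = call_model(f"List the sub-questions needed to answer: {user_prompt}")

    # Step 2: "think" by answering those sub-questions.
    notes = call_model(f"Answer these sub-questions:\n{plan}")

    # Step 3: answer the original prompt with the notes placed in context.
    return call_model(
        f"Using these notes:\n{notes}\n\nAnswer the question: {user_prompt}"
    )

print(answer_with_thinking("Why does my pathfinding code slow down on large maps?"))
```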

[–] MangoCats@feddit.it 1 points 3 days ago (1 children)

The problem with AI in a "popular context" is that it has always been a moving target. Old mechanical adding machines were better at correctly summing columns of numbers than humans; at the time, they were considered a limited sort of artificial intelligence. It continues all along the spectrum. Five years ago, image classifiers that could watch video feeds 24/7 and identify what happens in the feed with better-than-human accuracy (accounting for human lapses in attention, coffee breaks, distracting phone calls, etc.) were considered amazing feats of AI. Now they're "just image classifiers," much as AlphaZero "just plays games."

[–] sugar_in_your_tea@sh.itjust.works 1 points 3 days ago (1 children)

The first was never "AI" in a CS context, and the second has always been, and will always be, "AI" in a CS context. The definition has been pretty consistent since at least Alan Turing, if not earlier.

I don't know how to square that circle. To me it's pretty simple: a solution or approach is AI if it simulates (or creates) intelligence, and an intelligent system is one that uses data from (i.e., learns from) its environment to achieve its goals. Everything from an A* pathfinding algorithm to actual general AI is "AI," yet people assume the most sophisticated end of the spectrum.
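By that definition, even a very small program qualifies. Here's a toy sketch of the "learns from its environment to achieve its goals" part: an epsilon-greedy agent that figures out which of two levers pays off better purely from the rewards it observes. The payout numbers are invented for illustration:

```python
import random

# Toy illustration of "uses data from its environment to achieve its goals":
# an epsilon-greedy agent learns which lever pays off better purely from
# the rewards it observes. The payout probabilities are made up.

PAYOUT_PROBS = [0.3, 0.6]          # hidden from the agent
estimates = [0.0, 0.0]             # agent's learned value of each lever
pulls = [0, 0]

def pull(lever: int) -> float:
    """The 'environment': returns a reward of 1 or 0."""
    return 1.0 if random.random() < PAYOUT_PROBS[lever] else 0.0

for step in range(1000):
    # Mostly exploit the best-known lever, occasionally explore.
    if random.random() < 0.1:
        lever = random.randrange(2)
    else:
        lever = max(range(2), key=lambda i: estimates[i])

    reward = pull(lever)
    pulls[lever] += 1
    # Incremental average: update the estimate from observed data.
    estimates[lever] += (reward - estimates[lever]) / pulls[lever]

print("learned estimates:", [round(e, 2) for e in estimates])
print("preferred lever:", estimates.index(max(estimates)))
```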

[–] MangoCats@feddit.it 1 points 2 days ago

The first was never “AI” in a CS context

Mostly because CS didn't start talking about AI until after popular perception had pushed calculators into the "dumb automatons" category.

Image classifiers came after CS drew the "magic" line for what qualifies as AI, so CS has piles of academic literature talking about artificially intelligent image classification, but public perception moves on.

The definition has been pretty consistent since at least Alan Turing, if not earlier.

I think Turing already had adding machines before he developed his "test."

The current round of LLMs seems more than capable of passing the Turing test if configured to try. Even back in the 1980s, the ELIZA chat program could pass the Turing test for three or four exchanges with most people. These past weeks, I have had extended technical conversations with LLMs, and they exhibit sustained "average" knowledge of the topics we discuss. Not the brightest bulb on the tree, but they're widely read and can pretty much keep up with the average bear on the internet in terms of repeating what others have written.
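For a sense of how low that early bar was, ELIZA-style chat boils down to keyword patterns mapped to canned reflections, roughly like this toy sketch (the rules here are invented, not Weizenbaum's actual DOCTOR script):

```python
import re

# Toy ELIZA-style responder: keyword patterns mapped to canned reflections.
# These rules are invented for illustration; the real ELIZA script was a
# larger set of the same kind of pattern/response pairs.

RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default when nothing matches

print(respond("I am frustrated with my compiler"))
print(respond("It rained today"))
```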

Meanwhile, there's a virulent public-perception backlash calling LLMs "dumb automatons." Personally, I don't care what the classification is. "AI" has been "5 years away from realization" my whole life, and I've worked with "near AI" tech all that time. The current round of tools has made an impressive leap in usefulness. Bob Cratchit would have said the same about an adding machine if Scrooge had given him one.