this post was submitted on 04 Feb 2026
567 points (96.9% liked)

Technology


The article title is clickbait; here is the full article:

Wondering what your career looks like in our increasingly uncertain, AI-powered future? According to Palantir CEO Alex Karp, it's going to involve less of the comfortable office work to which most people aspire, and more old-fashioned grunt work with your hands.

Speaking at the World Economic Forum yesterday, Karp insisted that the future of work is vocational — not just for those already in manufacturing and the skilled trades, but for the majority of humanity.

In the age of AI, Karp told attendees, a strong formal education in any of the humanities will soon spell certain doom.

“You went to an elite school, and you studied philosophy; hopefully you have some other skill,” he warned, adding that AI “will destroy humanities jobs.”

Karp, who himself holds humanities degrees from the elite institutions of Haverford College and Stanford Law, will presumably be all right. With a net worth of $15.5 billion — well within the top 0.1 percent of global wealth owners — the Palantir CEO has enough money and power to live like a feudal lord (and that's before AI even takes over).

The rest of us, he indicates, will be stuck on the assembly line, building whatever the tech companies require.

“If you’re a vocational technician, or like, we’re building batteries for a battery company… now you’re very valuable, if not irreplaceable,” Karp insisted. “I mean, y’know, not to divert to my usual political screeds, but there will be more than enough jobs for the citizens of your nation, especially those with vocational training.”

Now, there’s nothing wrong with vocational work or manufacturing. The global economy runs on these jobs. But in a theoretical world so fundamentally transformed by AI that intellectual labor essentially ceases to exist, it’s telling that tech billionaires like Karp see the rest of humanity as their worker bees.

Somehow, the AI revolution never seems to threaten those who stand to profit the most from it — just the 99.9 percent of us building their batteries.

[–] OR3X@lemmy.world 150 points 20 hours ago* (last edited 20 hours ago) (30 children)

These morons really think AI is going to allow them to replace the technical folks. The same technical folks they loathe, because they're the ones with the skills to build the bullshit they dream up, and as such demand a higher salary. They're so fucking greedy that they are just DYING to cut these people out in order to make more profits. They have such inflated egos and so little understanding of the actual technology that they really think they're just going to be able to use AI to replace technical minds going forward. We're on the precipice of a very funny "find out" moment for some of these morons.

[–] Pika@sh.itjust.works 8 points 18 hours ago* (last edited 18 hours ago) (5 children)

The scary part is how it already somewhat is.

My friend is currently job hunting (or at least considering it) because their company added AI to their workflow, and it does everything past the initial issue report.

the flow is now: issue logged -> AI formats and tags the issue -> AI makes the patch -> AI tests the patch and throws it back if it doesn't work -> AI lints the final product once working -> AI submits the patch as pull.

Their job has been downscaled from being the one to organize, assign, and work on code to being an over-glorified code auditor who looks at pull requests and says "yes, this is good" or "no, send this back in."

[–] PrejudicedKettle@lemmy.world 20 points 18 hours ago* (last edited 18 hours ago) (1 children)

I feel like so much LLM-generated code is bound to degrade code quality and blow past the context window to the point that the LLM eventually becomes paralyzed

[–] Pika@sh.itjust.works 9 points 18 hours ago (1 children)

I do agree; LLM-generated code is inaccurate, which is why they have the "throw it back in" stage and a human eye looking at it.

They told me their main concern is that they aren't sure they'll understand the code the AI is spitting out well enough to properly audit it (which is fair). And of course any issue with the code will fall on them, since it's their job to give the final "yes, this is good."

[–] WanderingThoughts@europe.pub 9 points 17 hours ago* (last edited 16 hours ago)

At that point they're just the responsibility circuit breaker, put there to get the blame if things go wrong.
