this post was submitted on 13 Apr 2025
264 points (99.6% liked)

Technology

all 44 comments
[–] FauxPseudo@lemmy.world 74 points 1 week ago (4 children)

But the only way to learn debugging is to have experience coding. So if we let AI do the coding, all the entry-level coding jobs go away and no one learns to debug.

This isn't just a coding thing; it applies to all kinds of professions. AI will kill the entry level, which will prevent new people from getting experience, which will have downstream effects throughout entire industries.

[–] MigratingApe@lemmy.dbzer0.com 31 points 1 week ago (1 children)

It already started happening before LLM AI. Have you heard the joke that we used to teach our parents how to use printers and PCs with a mouse and keyboard, and now we have to do the same with our children? It's really not a joke. We are the last generation that has seen it all evolve before our eyes; we know the fundamentals of each layer of abstraction the current technology is built upon. Learning all of this was a natural process for us, and now we suddenly expect "fresh people" to grasp 50 years or so of progress in 5 or so years?

Interesting times ahead of us.

[–] Evotech@lemmy.world 2 points 1 week ago
[–] AdamEatsAss@lemmy.world 14 points 1 week ago (1 children)

Have you used any AI for programming? There is zero chance entry-level jobs will be replaced. AI only works well if what it needs to do is well defined, and as a dev that is almost never the case. Also, companies understand that to create a senior dev they need a junior dev they can train. And corporations do not trust Google, OpenAI, Meta, etc. with their intellectual property. My company made it a fireable offense if they catch you uploading IP to an AI.

[–] FauxPseudo@lemmy.world 17 points 1 week ago

Also companies understand that to create a senior dev they need a junior dev they can train.

We live in a world where every company wants people who can hit the ground running and requires five years of experience for an entry-level job in a language that's only been out for three. On-the-job training died long ago.

[–] metaldream@sopuli.xyz 7 points 1 week ago

The junior devs at my job are way better at debugging than AI, lol. Granted, they are top-talent hires, because no one else can break in these days.

[–] zenpocalypse@lemm.ee 1 points 1 week ago

In my experience, LLMs are good for code snippets and input on best practices.

I use it as a tool to speed up my work, but I don't see it replacing even entry-level jobs any time soon.

[–] thefluffiest@feddit.nl 37 points 1 week ago (1 children)

So, AI gets to create problems, and actually capable people get to deal with the consequences. Yeah, that sounds about right.

[–] WanderingThoughts@europe.pub 27 points 1 week ago (1 children)

And it'll be used to suppress wages, because "you're not making new stuff, just fixing some problems in existing code." The fact that you have to rewrite most of it is conveniently not counted.

That's at least what was tried with movie writers.

[–] sach@lemmy.world 17 points 1 week ago (1 children)

Most programmers agree debugging can be harder than writing code, so basically the easy part is automated, but the more challenging and interesting parts, architecture and debugging, remain for programmers. Still, it's possible they'll try to sell it to programmers as less work.

[–] brsrklf@jlai.lu 13 points 1 week ago* (last edited 1 week ago) (1 children)

but the more challenging and interesting parts, architecture and the debugging remain for programmers

And it's made harder for them, because it turns out the "easy" part is not that easy to do correctly, and when it isn't done correctly it makes maintaining the thing miserable.

[–] atrielienz@lemmy.world 8 points 1 week ago

Additionally, as others have said in the thread, programmers learn the skills required for debugging at least partially from writing code. So there goes a big part of the learning curve, turning into a bell curve.

[–] NigelFrobisher@aussie.zone 35 points 1 week ago* (last edited 1 week ago)

I’m actually quite enjoying watching the LLM evangelists fall into the trough of despair after their initial inflated expectations of what they thought stochastic text generation would achieve for the business. After a while you get used to the waves of magic bullet solutions that promise to revolutionise the industry but introduce as many new problems as they solve.

[–] resipsaloquitur@lemm.ee 24 points 1 week ago (1 children)

So we “fixed” the easiest part of software development (writing code) and now humans have to clean up the AI slop.

I’ll bet this lovely new career field comes with a pay cut.

[–] IllNess@infosec.pub 11 points 1 week ago

I would charge more. Fixing my own code is easier than fixing someone else's code.

I think I might go insane if that was my career.

[–] cyrano@lemmy.dbzer0.com 23 points 1 week ago (1 children)

But trust me Bro, AGI is around the corner. In the meantime have this new groundbreaking feature https://decrypt.co/314380/chatgpt-total-recall-openai-memory-upgrade /s

[–] bappity@lemmy.world 15 points 1 week ago

LLMs are so fundamentally different from AGI that it's a wonder people believe that balderdash.

[–] hera@feddit.uk 23 points 1 week ago (2 children)

As a very experienced Python developer, I have tried using ChatGPT for debugging and vibe coding multiple times, and you just end up going in circles and never get to a working solution. It ends up being a lot faster just to do it yourself.

[–] gigachad@sh.itjust.works 12 points 1 week ago* (last edited 1 week ago) (1 children)

Absolutely agree. I just use it for some simple stuff like "for every nth row in a pandas dataframe, slice a string from x to y if column z is True" or something like that. That kind of logic takes time to write, and GPT usually comes up with a correct solution, or one that doesn't need a lot of modification.
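
For illustration, a minimal sketch of the kind of one-off pandas logic being described; the column names, the slice bounds, and the value of n here are made up for the example:

```python
import pandas as pd

# Toy dataframe standing in for whatever data is actually involved.
df = pd.DataFrame({
    "text": ["alphabet", "binomial", "calendar", "dinosaur", "elephant", "firmware"],
    "z":    [True,       False,      True,       True,       False,      True],
})

n, x, y = 2, 1, 4  # every 2nd row, slice characters 1..3 (hypothetical values)

# Every nth row where column z is True (assumes the default RangeIndex,
# so index position and row position coincide).
mask = (df.index % n == 0) & df["z"]

# Slice the string column only on the selected rows.
df.loc[mask, "text"] = df.loc[mask, "text"].str.slice(x, y)

print(df)
```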

But debugging or analyzing an error? No thanks

[–] AdamEatsAss@lemmy.world 9 points 1 week ago (2 children)

I have on multiple occasions told it exactly what the error is and how to fix it. The AI agrees, apologizes, and gives me the same broken code again. It takes the same amount of time to describe the error as it would have taken me to fix it myself.

[–] BreadstickNinja@lemmy.world 8 points 1 week ago

This is my experience as well. Best-case scenario, it gives me a rough idea of what functions to use or how to set up the logic, but then it always screws up the actual implementation. I've never asked ChatGPT for coding help and gotten something I can use off the bat. I always have to rewrite it before it's functional.

[–] spirinolas@lemmy.world 3 points 1 week ago

My rule of thumb is, if it doesn't give you the solution right off the bat, it won't give you one. If that happens, either fix it yourself or start a new chat and reformulate the question completely.

[–] j0ester@lemmy.world 2 points 1 week ago

Thank you! So many morons say you can just use generative AI to build whatever you need. That's a no.

[–] aaron@lemm.ee 18 points 1 week ago (1 children)

I'm full Luddite on this. And fuck all of us.

[–] Serinus@lemmy.world 1 points 1 week ago

"Give me some good warning message css" was a pretty nice use case. It's a nice tool that's near the importance of Google search.

But you have to know when its answers are good and when they're useless or harmful. That requires a developer.

[–] kyub@discuss.tchncs.de 14 points 1 week ago* (last edited 1 week ago)

"AI" is good for pattern matching, generating boiler plate / template code and text, and generating images. Maybe also translation. That's about it. And it's of course often flawed/inaccurate so it needs human oversight. Everything else is like a sales scam. A very profitable one.

[–] SharkAttak@kbin.melroy.org 11 points 1 week ago (1 children)
[–] LinyosT@sopuli.xyz 2 points 1 week ago (1 children)

It's always the people who don't have a clue.

It's also always the people who think they'll get some benefit out of AI taking over, when they're absolutely part of the group that'll be replaced by it.

It's a cargo cult. They don't understand, but they like what it promises, so they blindly worship. Sceptics become unbelievers, visionaries become prophets, and collateral damage becomes sacrifice.

They may use different terms, but if some job became obsolete, that's just the price of a better future to them. And when the day of Revelation comes, they'll surely be among the faithful delivered from the shackles of human labour to enjoy the paradise built on this technology. Any day now...

[–] Simulation6@sopuli.xyz 11 points 1 week ago (1 children)

Can AI fix itself so that it gets better at a task? I don't see how that could be possible; it would just fall into a feedback loop where it gets stranger and stranger.
Personally, I will always lie to AI when asked for feedback.

[–] taladar@sh.itjust.works 11 points 1 week ago (1 children)

It is worse. People can't even fix AI so it gets better at a task.

[–] jj4211@lemmy.world 4 points 1 week ago

That's been one of the things that has really stumped a team that wanted to go all in on some AI offering. They go to customer evaluations, and there's really just nothing they can do about the problems reported. They can try to train and hope for the best, but that likely won't work and could also make other things worse.

[–] nick@midwest.social 11 points 1 week ago
[–] BangelaQuirkel@lemmy.world 6 points 1 week ago

Are those researchers human, or is this just an AI that's too lazy to do the work?

[–] bappity@lemmy.world 4 points 1 week ago

the tool can't replace the person or whatever

[–] vegetvs@kbin.earth 4 points 1 week ago

Color me shocked.

Ars Technica would die of an aneurysm if it stopped posting about generative AI for even 30 seconds

Since they're the authority on tech and all they write about is shitty generative AI from 2017, that means shitty generative AI from 2017 is the only tech worth writing about.

[–] latenightnoir@lemmy.blahaj.zone 3 points 1 week ago* (last edited 1 week ago)

Well, now they're just subverting expectations left and right, aren't they!

[–] Ceruleum@lemmy.wtf 1 points 1 week ago

Laughs in COBOL.