this post was submitted on 10 Mar 2026
713 points (99.3% liked)

Technology


Amazon’s ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools.

The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT.

Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established.”

(page 2) 50 comments
[–] otter@lemmy.ca 81 points 1 day ago (3 children)

...

Do the senior engineers NOT sign off on changes to systems that can take down the production servers? Even if we take out the LLM created code, this sounds like a bigger problem

[–] pageflight@piefed.social 41 points 1 day ago (2 children)

We may start to see people realize that "have the AI generate slop, humans will catch the mistakes" actually is different from "have humans generate robust code."

[–] daychilde@lemmy.world 29 points 1 day ago (3 children)

Not only that, but writing code is so much easier than understanding code you didn't write. Seems like either you need to be able to trust the AI code, or you're probably better off writing it yourself. Maybe there's some simple yet tedious stuff, but it has to be simple enough to understand and verify faster than you could write it. Or maybe run code through AI to check for bugs and check out any bugs it finds…

I definitely have trusted AI to write miniature pointless little projects - like a little PHP page that loaded music for the current directory and showed a simple JS player in a webpage so I could share Christmas music with my family and friends. No database, no file uploading or anything. It worked decently, although not perfectly, and that's all it needed to do.
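That kind of throwaway project really is the sweet spot. The original was PHP, but the same idea fits in a few lines of Python (the function name and extensions here are made up for illustration, not the commenter's actual code):

```python
import random
from pathlib import Path

def build_playlist(directory: str, extensions=(".mp3", ".ogg")) -> list[str]:
    """Collect audio files from a directory and shuffle them into a playlist."""
    tracks = sorted(
        p.name for p in Path(directory).iterdir()
        if p.suffix.lower() in extensions
    )
    random.shuffle(tracks)
    return tracks
```

No database, no uploads: the hard-to-get-wrong surface area is exactly why AI output is tolerable here.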

[–] slaacaa@lemmy.world 13 points 1 day ago* (last edited 21 hours ago)

This is true not just with code, but with many types of complex outputs. Going through and fixing somebody’s horrible excel model is much worse than building a good one yourself. And if the quality is really bad, it’s also just faster to do it yourself from scratch.

[–] MirrorGiraffe@piefed.social 7 points 1 day ago (1 children)

I've been writing a slightly larger project with frontend, bff and backend and I need to take it in small batches so that I can catch when it misunderstands or outright does a piss job of implementing something. I've been focusing a lot on getting all the unit tests I need in place which makes me feel a bunch better.

The bigger and more complex the projects get, the harder it is for the LLM to keep stuff in context, which means I'll have to get better at chunking out smaller scoped implementations, or start writing code myself, I think.

All in all I feel pretty safe with my project and pleased with the agent's work, but I need to increase testing further before bringing anything live.
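The unit-test-first approach above is the one guardrail that scales with agent output: pin the behavior down before (or right after) the agent implements it. A minimal sketch, with a hypothetical `apply_discount` standing in for whatever the agent is asked to write:

```python
# Hypothetical spec-style test: the assertions encode the intended
# behavior, including the edge cases an agent tends to skip.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; clamp percent to [0, 100]."""
    percent = max(0.0, min(100.0, percent))
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(100.0, 150) == 0.0   # clamped, never negative
    assert apply_discount(100.0, -5) == 100.0  # clamped, no accidental markup

test_apply_discount()
```

If the agent "misunderstands or does a piss job," the clamping assertions fail immediately instead of surfacing in production.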

[–] daychilde@lemmy.world 5 points 23 hours ago (3 children)

Security testing will be the most important.

I've done a couple of tiny projects that I didn't feel like coding. So far, I have not been terribly impressed. Well, it is impressive that it can make something functional at all, and in one case what it made was fine enough to use for the temporary project it was intended to be (sharing Christmas music with friends/family: reading files from a directory and writing a JavaScript player to play them in a shuffled order).

In the other case, replicating a simple text-based old DOS game with simple rules (think a space-based game around the complexity of checkers or so), it failed to think of so many things that, while it did what I told it for the most part, the result wasn't a playable game. It was close, and fun enough for a nostalgic moment, but I had to spell out logic like: "If two fleets of ships arrive at the same planet in the same turn, you have to see how the first battle goes. If the first battle captures the planet, the second fleet is not attacking the first fleet's ships; we won the planet at that point." Very simple concepts that, sure, you'd have to think of as a programmer, but they're also things I felt another person would naturally raise if you just described how the game should work.
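That arrival-ordering rule is genuinely the kind of state dependency LLMs fumble. A sketch of what the commenter had to spell out, with made-up names and deliberately simplified combat (bigger force wins, defender wins ties):

```python
# Fleets arriving at the same planet in the same turn resolve in order,
# and ownership changes BEFORE the next arrival is processed.

def resolve_arrivals(planet_owner, arrivals):
    """arrivals: list of (player, ships) in arrival order.
    Returns (final_owner, surviving_garrison)."""
    garrison = 0  # ships defending the planet at the start of the turn
    for player, ships in arrivals:
        if player == planet_owner:
            garrison += ships              # reinforcement, no battle
        elif ships > garrison:
            garrison = ships - garrison    # attacker's survivors dig in...
            planet_owner = player          # ...so the NEXT fleet fights them
        else:
            garrison -= ships              # attack repelled
    return planet_owner, garrison
```

The whole bug class lives in that loop: if you resolve all arrivals against the turn-start owner instead of updating state per battle, you get exactly the "second fleet attacks the wrong side" behavior described above.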

I hope AI works well for you. Anywhere security is needed, like input sanitization or user credentials… I hope you test thoroughly, and I hope you can tell it enough to remind it to implement things like sanitization and other safety measures. An app can certainly appear to be working yet present many, many fronts for attack. That's my main worry with AI code. I worry enough about whether I'm being secure enough on the little projects I do myself.

[–] Hupf@feddit.org 6 points 1 day ago

Yeah, initially writing the code never was the time sink.

[–] PattyMcB@lemmy.world 29 points 1 day ago (2 children)

I guarantee there's so much pressure on those engineers to deliver code that they rubber stamp a ton of it with the intention of "fixing it later"

Source: I've worked in software for 20+ years and know a lot of folks working for and who have worked for Amazon

[–] PabloSexcrowbar@piefed.social 10 points 1 day ago (2 children)

That's basically the story at all the big tech companies, from what I've heard. In my time at Facebook, I felt like the only person who actually read the merge requests that people sent me before hitting it with "LGTM"

[–] mrgoosmoos@lemmy.ca 9 points 1 day ago

the way private companies work is that they require their employees to produce more than is reasonable given the work quality that is expected.

when this discrepancy is pointed out, it's handwaved away. when the discrepancy results in problems, as it most obviously will, somebody is found to place the blame on.

it's not the developers' fault. it's a management decision.

source: I'm talking out of my ass I'm just a salty employee who is seeing this happen at their own workplace when it didn't used to, at least not to this level

[–] pirate2377@lemmy.zip 15 points 1 day ago

Keep taking Ls Amazon!

[–] FosterMolasses@leminal.space 7 points 1 day ago
[–] Airfried@piefed.social 23 points 1 day ago (1 children)

The way AI is being pushed onto workers on a global scale has to be the dumbest thing to ever happen in the workplace. Executives are getting hysterical over something they don't even try to understand, and even governments shower companies in subsidies if they do anything with AI. Of course the only results so far are mass layoffs and exploding costs for energy and hardware. All the while economies are crumbling everywhere, because of course they do when mass unemployment sweeps around the globe. And again, governments everywhere are subsidizing this crap with taxpayer money. What's even worse than all of that is the insane environmental damage all of this causes. But I'll have to cut myself short here because I'm just getting increasingly upset.

I guess what I'm trying to say is: we're funding our own decline at rapid speed. Human stupidity has found a new peak in 2026 and it's not even close. I knew the way AI was advertised was completely overblown years ago, but I never anticipated it would get this bad this quickly.

[–] merdaverse@lemmy.zip 5 points 23 hours ago* (last edited 23 hours ago)

Unsurprisingly, there's a disconnect between executives/middle managers and people actually doing the job. The first group has fallen for the 10x productivity boost ads that the AI companies were selling them, while the actual boost for developers has been minimal, if any. That's why it's being pushed hard from the top.

[–] IchNichtenLichten@lemmy.wtf 13 points 1 day ago (1 children)

They want to move fast and break things, but they still want a few meat bags around to blame when things inevitably blow up in their faces.

[–] badbytes@lemmy.world 14 points 1 day ago (1 children)

LOL, so they can blame and fire SOMEONE.

[–] merc@sh.itjust.works 1 points 20 hours ago

Amazon saves the wages of a senior dev by doing that, but then they get outages that cost them decades' worth of that senior dev's wages. I doubt the goal is to blame senior devs. If they wanted them gone they could easily fire them.

[–] ParlimentOfDoom@piefed.zip 14 points 1 day ago (1 children)

Aren't their names already on the commits? Or is the AI given write access to their code repository?

[–] JcbAzPx@lemmy.world 16 points 1 day ago

I think you already know the answer to that.

[–] hperrin@lemmy.ca 10 points 1 day ago

xD

Guess that all-in-on-AI attitude was not such a bold and brilliant idea after all.
