zbyte64

joined 2 years ago
[–] zbyte64@awful.systems 1 points 10 months ago (1 children)

When requirements are "Whatever" then by all means use the "Whatever" machine: https://eev.ee/blog/2025/07/03/the-rise-of-whatever/

And then look for a better gig because such an environment is going to be toxic to your skill set. The more exacting the shop, the better they pay.

[–] zbyte64@awful.systems 1 points 10 months ago* (last edited 10 months ago) (3 children)

Literally the opposite of my experience when I helped materials scientists with their R&D. A break in production would mean people who get paid 2x what I do suddenly couldn't do their jobs. But then again, our requirements made sense because we would literally sit down with the engineers and walk through the manual process we were automating. What you describe sounds like hell to me. There are greener pastures.

[–] zbyte64@awful.systems 3 points 10 months ago

> The stock market makes no sense to me.

That's because "The market can remain irrational longer than you can remain solvent."

[–] zbyte64@awful.systems 2 points 10 months ago* (last edited 10 months ago) (5 children)

Maybe it is because I started out in QA, but I have to strongly disagree. You should assume the code doesn't work until proven otherwise, AI or not. Then when it doesn't work, I find it's easier to debug your own code than someone else's, and that includes AI.

[–] zbyte64@awful.systems 4 points 10 months ago

Why would you ever yell at an employee unless you're bad at managing people? And you think you can manage an LLM better because it doesn't complain when you're obviously wrong?

[–] zbyte64@awful.systems 4 points 10 months ago* (last edited 10 months ago) (2 children)

A junior developer actually learns from doing the job; an LLM only learns when its makers update the training corpus and train a new model.

[–] zbyte64@awful.systems 3 points 10 months ago (8 children)

> It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

I usually write 3x the code to test the code itself. Verification is often harder than implementation.
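
To put a toy example behind that ratio (entirely made up, not from any real project): the function below is three lines, and the tests that would make me trust it run about three times that, without even being exhaustive.

```python
import unittest

# Hypothetical example: the implementation is three lines...
def parse_version(s: str) -> tuple[int, int, int]:
    major, minor, patch = s.strip().split(".")
    return (int(major), int(minor), int(patch))

# ...but the tests that make me trust it are roughly 3x that,
# and they still aren't exhaustive.
class TestParseVersion(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_surrounding_whitespace(self):
        self.assertEqual(parse_version(" 1.2.3\n"), (1, 2, 3))

    def test_rejects_too_few_parts(self):
        with self.assertRaises(ValueError):
            parse_version("1.2")

    def test_rejects_non_numeric(self):
        with self.assertRaises(ValueError):
            parse_version("1.2.x")

if __name__ == "__main__":
    unittest.main()
```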

[–] zbyte64@awful.systems 8 points 10 months ago

DOGE has entered the chat

[–] zbyte64@awful.systems 3 points 10 months ago (2 children)

When LLMs get it right, it's because they're summarizing a Stack Overflow or GitHub snippet they were trained on. But you lose all the benefit of other humans commenting on the context, the pitfalls, and the alternatives.

[–] zbyte64@awful.systems 3 points 10 months ago

Pepperidge Farm remembers when you could just do a web search and get your answer in the first couple of results. Then the SEO wars happened....

[–] zbyte64@awful.systems 5 points 10 months ago

The new talking point is that man-made climate change is real, but burning oil isn't what's warming the planet. Somehow that still means we can geoengineer our climate to be cooler. 🙃

[–] zbyte64@awful.systems 1 points 10 months ago* (last edited 10 months ago)

> If you think critics of wokeness are wrong, then show why. Don’t just insult them and pretend that counts as insight.

Why would someone take the time to explain something to someone arguing in bad faith? Sounds like a foolish endeavor.

I'll leave you with OP's words from elsewhere in this thread, because they apply equally to you:

> Thanks, but I didn’t ask that and your assertion is based on your own bias/opinion
