zbyte64

joined 1 year ago
[–] zbyte64@awful.systems 2 points 2 months ago* (last edited 2 months ago) (5 children)

Maybe it is because I started out in QA, but I have to strongly disagree. You should assume the code doesn't work until proven otherwise, AI or not. Then when it doesn't work, I find it is easier to debug your own code than someone else's, and that includes AI.

[–] zbyte64@awful.systems 4 points 2 months ago

Why would you ever yell at an employee unless you're bad at managing people? And you think you can manage an LLM better because it doesn't complain when you're obviously wrong?

[–] zbyte64@awful.systems 4 points 2 months ago* (last edited 2 months ago) (2 children)

A junior developer actually learns from doing the job; an LLM only "learns" when its makers update the training corpus and train an updated model.

[–] zbyte64@awful.systems 3 points 2 months ago (8 children)

It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

I usually write 3x the code to test the code itself. Verification is often harder than implementation.
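As a hypothetical sketch of that ratio (the clamp function and its tests below are invented for illustration, not taken from any real project): a few lines of implementation can easily need roughly three times as much code to verify the edge cases.

```python
import unittest


def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(high, value))


# The verification ends up roughly 3x the implementation.
class TestClamp(unittest.TestCase):
    def test_within_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_below_range(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_above_range(self):
        self.assertEqual(clamp(42, 0, 10), 10)

    def test_boundaries(self):
        self.assertEqual(clamp(0, 0, 10), 0)
        self.assertEqual(clamp(10, 0, 10), 10)

    def test_invalid_range(self):
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)


if __name__ == "__main__":
    unittest.main()
```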

[–] zbyte64@awful.systems 8 points 2 months ago

DOGE has entered the chat

[–] zbyte64@awful.systems 3 points 2 months ago (2 children)

When LLMs get it right, it's because they're summarizing a Stack Overflow or GitHub snippet they were trained on. But you lose all the benefit of other humans commenting on the context, the pitfalls, and the alternatives.

[–] zbyte64@awful.systems 3 points 2 months ago

Pepperidge Farm remembers when you could just do a web search and get your answer in the first couple of results. Then the SEO wars happened...

[–] zbyte64@awful.systems 5 points 2 months ago

The new talking point is that man-made climate change is real, but burning oil isn't causing the world to warm. But that does mean we can geoengineer our climate to be cooler. 🙃

[–] zbyte64@awful.systems 1 points 2 months ago* (last edited 2 months ago)

If you think critics of wokeness are wrong, then show why. Don’t just insult them and pretend that counts as insight.

Why would someone take the time to explain something to someone arguing in bad faith? Sounds like a foolish endeavor.

I'll leave you with words from OP elsewhere in this thread, because they apply equally to you:

Thanks, but I didn’t ask that and your assertion is based on your own bias/opinion

[–] zbyte64@awful.systems 2 points 2 months ago* (last edited 2 months ago) (2 children)

Yes, I had an inflammatory response. I honestly don't perceive OP as making a good-faith argument when they say "negative effects of wokeness". It's a thought-terminating cliché.

[–] zbyte64@awful.systems 2 points 2 months ago (4 children)

Okay then, swap out AI for wokeness; it still doesn't rise to the level of a "worldview". It is still an observation.

[–] zbyte64@awful.systems 2 points 2 months ago (6 children)

everyone who disagrees with my worldview is a bot

I hardly consider my opinion on AI a "worldview". It is an observation that using generative AI for decision-making and creative work reduces cognitive activity. Yes, I did ask OP to disprove me in an "ad hominem" manner, though. I guess we violently agree on that?
