Why would you ever yell at an employee unless you're bad at managing people? And you think you can manage an LLM better because it doesn't complain when you're obviously wrong?
zbyte64
A junior developer actually learns from doing the job; an LLM only learns when the training corpus is updated and a new model is trained.
It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.
I usually write 3x the code to test the code itself. Verification is often harder than implementation.
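A minimal sketch of the asymmetry the two comments above are arguing about (hypothetical example, not from the thread): checking a claimed answer can take a few lines, while producing a correct one is the harder half. Here, verifying that a list is a correctly sorted version of another is cheap, even though writing an efficient sort is real work.

```python
from collections import Counter

def is_valid_sort(original, result):
    """Verify a claimed sort result: same multiset of elements,
    arranged in nondecreasing order."""
    same_elements = Counter(original) == Counter(result)
    is_ordered = all(a <= b for a, b in zip(result, result[1:]))
    return same_elements and is_ordered

# The checker is a few lines; the sort it checks is the part
# that takes effort (and often far more test code) to get right.
print(is_valid_sort([3, 1, 2], [1, 2, 3]))  # True
print(is_valid_sort([3, 1, 2], [1, 2, 2]))  # False: elements differ
```

Whether verification is easier in general depends on the problem: checking a sort is trivial, but checking, say, a concurrency fix can be harder than writing it.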
DOGE has entered the chat
When LLMs get it right, it's because they're summarizing a Stack Overflow or GitHub snippet they were trained on. But you lose all the benefits of other humans commenting on the context, pitfalls, and alternatives.
Pepperidge Farm remembers when you could just do a web search and get it answered in the first couple of results. Then the SEO wars happened....
The new talking point is that man-made climate change is real but burning oil isn't what's warming the world. But that does mean we can geoengineer our climate to be cooler. 🙃
If you think critics of wokeness are wrong, then show why. Don’t just insult them and pretend that counts as insight.
Why would someone take the time to explain something to someone arguing in bad faith? Sounds like a foolish endeavor.
I'll leave you with the words from OP elsewhere in this thread because it equally applies to you:
Thanks, but I didn’t ask that and your assertion is based on your own bias/opinion
Yes, I had an inflammatory response. I honestly don't perceive OP as making a good-faith argument when they say "negative effects of wokeness". It's a thought-terminating cliché.
Okay then, swap out AI with wokeness, it still doesn't come to the level of a "worldview". It is still an observation.
everyone who disagrees with my worldview is a bot
I hardly consider my opinion on AI a "worldview". It is an observation that generative AI use in decision making and creativity reduces cognitive activity. Yes, I did ask OP to disprove me in an "ad hominem" manner, though. I guess we violently agree on that?
Maybe it is because I started out in QA, but I have to strongly disagree. You should assume the code doesn't work until proven otherwise, AI or not. And when it doesn't work, I find it is easier to debug your own code than someone else's, and that includes AI's.