keegomatic

joined 2 years ago
[–] keegomatic@lemmy.world 0 points 4 days ago (1 children)

violates licenses

Not a problem if you believe all code should be free. I'm being cheeky, but this has nothing to do with code quality, even if it's true.

do the thinking

This argument can be used equally well in favor of AI assistance, and it’s already covered by my previous reply

non-deterministic

It’s deterministic

brainstorming

This is not what a “good developer” uses it for

[–] keegomatic@lemmy.world 1 point 6 days ago (3 children)

We have substantially similar opinions, actually. I agree on your points of good developers having a clear grasp over all of their code, ethical issues around AI (not least of which are licensing issues), skill loss, hardware prices, etc.

However, what I have observed in practice is different from the way you describe LLM use. I have seen irresponsible use, and I have seen what I personally consider to be responsible use. Responsible use involves taking a measured and intentional approach to incorporating LLMs into your workflow. It’s a complex topic with a lot of nuance, like all engineering, but I would be happy to share some details.

Critical review is the key sticking point. Junior developers also write crappy code that requires intense scrutiny. It’s not impossible (or irresponsible) to use code written by a junior in production, for the same reason. For a “good developer,” many of the quality problems are mitigated by putting roadblocks in place to…

  1. force close attention to edits as they are being written,
  2. facilitate handholding and constant instruction while the model is making decisions, and
  3. ensure thorough review at the time of design/writing/conclusion of the change.

When it comes to making safe and correct changes via LLM, specifically, I have seen plenty of “good developers” in real life, now, who have engineered their workflows to use AI cautiously like this.

Again, though, I share many of your concerns. I just think there’s nuance here and it’s not black and white/all or nothing.

[–] keegomatic@lemmy.world 7 points 6 days ago (6 children)

You’re wrong, whether you figure that out now or later. Using an LLM where you gatekeep every write is something that good developers have started doing. The most senior engineers I work with are the ones who have adopted the most AI into their workflow, and with the most care. There’s a difference between vibe coding and responsible use.

[–] keegomatic@lemmy.world 5 points 1 month ago

Ah, okay, I understand now. Rocks are nutritious, and whisker pants.

[–] keegomatic@lemmy.world 4 points 1 month ago (3 children)

Out of curiosity, would you explain your reply and your immediate parent's comment for me? "Sez" seemed a bit old-fashioned but not too weird, but then "date of poisoning": are you implying an LLM wrote that, and that "sez" has something to do with pinpointing some poisoning of the model?

[–] keegomatic@lemmy.world 2 points 2 months ago (2 children)

I think we mostly agree. And I do agree that "flawed security can be worse than no security at all." In this case, though, I think the flawed filter doesn't make security worse; it just doesn't make it that much better.

But even simple filters can make a significant difference: maybe you remember the early-ish Lemmy debacle of turning off captchas for signups by default, ostensibly because captchas are now completely defeated… which led to thousands and thousands of bot accounts being created pretty much immediately across a bunch of instances, and the feature being turned back on by default.

[–] keegomatic@lemmy.world 3 points 2 months ago* (last edited 2 months ago) (5 children)

Both things can be true. It definitely is better for security. It’s pretty much indisputably better for security.

But you know what would be even better for security? Not allowing any third-party code at all (i.e., no apps).

Obviously that’s too shitty and everyone would move off of that platform. There’s a balance that must be struck between user freedom and the general security of a worldwide network of sensitive devices.

Users should be allowed to do insecure things with their devices as long as they are (1) informed of the risks, (2) prevented from doing those things by accident if they are not informed, and (3) not threatening the rest of the network with their actions.

Side-loading is perfectly reasonable under those conditions.

[–] keegomatic@lemmy.world 2 points 4 months ago (1 children)

What is this, 2007?

[–] keegomatic@lemmy.world 1 points 4 months ago* (last edited 4 months ago)

I think it’s pretty obvious what the difference is between literal toddlers getting to vote (read: parents getting multiple votes) and black people/women getting to vote. I’m not arguing for meritocracy or even what the voting age should be. I’m arguing against the idea that there should be no minimum voting age.

[–] keegomatic@lemmy.world 0 points 5 months ago (2 children)

That is just a very stupid idea. The best thing for all of us everywhere is for the most rational and well-informed people to vote. The fact that everyone gets a vote is unfortunate for all of us because that includes voters who vote against the public interest, but it is necessary for a free democracy. Children and even teenagers have simply not had enough time on this earth to make an informed decision. Even if you want to make the argument that some are informed enough, they are far, FAR fewer than in the adult populace. You do not want to broaden that window.

[–] keegomatic@lemmy.world 0 points 5 months ago (1 children)

There is legitimate research on the effects of ingesting methylene blue. Don’t confuse that with pseudoscience. There’s probably plenty of pseudoscience around it, but it’s not (at its core) naturopathy/homeopathy/voodoo.
