jj4211

joined 2 years ago
[–] jj4211@lemmy.world 6 points 1 day ago

In my experience, the bigger the codebase gets, the more confounded the LLM gets when trying to make coherent changes. So LLM projects start on shaky ground and only get worse, because the models can't maintain the code they themselves generated.

I've seen what LLMs can do, and it's certainly interesting and capable of some things, but the vast majority of my experience is with people who had never coded before "vibing" themselves into a corner and then demanding help to dig themselves out. A bit irritating, because while before we could reasonably prioritize such requests, since management understood that making something from nothing was real work, now management says "they aren't asking you to make something, just to help them fix something that already exists, should be easy!"

On the ELOC metric, for a long time I pointed out how disastrous I must be because my contribution to a project I was on was about -10,000 lines of code by the time I went to something else.

[–] jj4211@lemmy.world 5 points 1 day ago

While I despise the captchas from a human perspective, the fact that an LLM can solve the challenge isn't a deal breaker. It doesn't need to be impossible for a non-human to solve, it just has to be too expensive.

It certainly does shift the equation toward stuff like proof of work: since a computer can solve the challenge anyway, you might as well not annoy the human.
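The proof-of-work idea above can be sketched in a few lines. This is a minimal illustration, not any particular anti-bot product's scheme: the client brute-forces a nonce until the hash of challenge-plus-nonce falls below a target, and the server verifies with a single hash.

```python
import hashlib
import itertools


def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Client side: brute-force a nonce so that sha256(challenge || nonce)
    has at least `difficulty_bits` leading zero bits. Expected cost grows
    as 2**difficulty_bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce


def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Server side: verification is a single hash, so it's nearly free."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < 1 << (256 - difficulty_bits)
```

The asymmetry is the whole point: the server spends one hash to check what cost the client, on average, 2^difficulty_bits hashes to find.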

[–] jj4211@lemmy.world 3 points 1 day ago

Seems utterly pointless though...

With the proof-of-work approach, at least it demands that the client consume some resources, though the "right" amount is a tricky question: either it's so trivial it hardly matters to the scrapers, or it's hard enough to put a dent in the scrapers' budget, in which case humans on low-end devices are royally screwed.

Here the crawler simply schedules a resumption and moves on to other work. The crawler doesn't need the page right now, and it costs it nothing to wait.

[–] jj4211@lemmy.world 1 points 1 day ago

Yeah, seems like the problem is that fundamentally it could work by upping the difficulty a smidge to make it meaningfully expensive, but the spread between the slowest edge device and high-end hardware means it's impossible to chase that difficulty without screwing over low-end device users.
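The spread problem is easy to put numbers on. Here's a back-of-the-envelope sketch; the hash rates are made-up assumptions for a low-end phone versus one core of a scraper's server, not measurements.

```python
def expected_seconds(difficulty_bits: int, hashes_per_second: float) -> float:
    """Average time to solve a proof-of-work challenge: finding a hash
    with `difficulty_bits` leading zero bits takes ~2**difficulty_bits
    attempts on average."""
    return (2 ** difficulty_bits) / hashes_per_second


# Hypothetical hash rates (assumptions, not benchmarks):
phone_time = expected_seconds(20, 50_000)       # low-end phone: ~21 s
server_time = expected_seconds(20, 5_000_000)   # scraper server core: ~0.2 s
```

Any difficulty that meaningfully costs the scraper makes the phone user stare at a spinner, which is exactly the bind described above.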

[–] jj4211@lemmy.world 14 points 1 day ago (2 children)

They are no longer with us.

Hey, I'm annoyed by slop coding work as much as the next guy, but murder seems a bit much as a reaction...

[–] jj4211@lemmy.world 55 points 1 day ago (5 children)

Fun story from this week: we had a chore for the frontend to refresh to a new version of the UI framework. Fairly simple task, so off it went to a junior developer. Within a couple of hours there was a merge request ready to go. OK, a fairly normal amount of time to change a version, do at least a sniff test, and find nothing changed, so I go in assuming I'll look at a few version bumps, maybe one or two tweaks... and I see the junior dev was proposing over 1,000 lines of added code... WTF...

I crack it open and there was just a firehose of CSS rules, all marked '!important'. Looking at one example, it repeated the same selector with the exact same bunch of rules 5 times in a row. It was as if it had found every possible derived CSS class-and-tag combination and defined !important rules for nearly everything about it.

So I find out that the junior dev had asked it to rebase, and it did what he expected: it just changed the version and went. He tried it, and due to a framework change one element was misaligned by a little bit. So he gave that feedback to the LLM and tried again... and it failed, and he tried again and it failed, and after 5 rounds it finally got the element aligned, so he hit 'merge request'. For fun I opened up his proposed change, and so much of it was just dodgy CSS-wise, because it screwed with so much other stuff, but the junior dev had only concerned himself with the page as it first opened.

So I said screw it, I'll do it myself, and added the single rule that was needed to adapt to the framework change, making it about a 5-line change overall, including versioning and such.
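To make the contrast concrete, the difference looked roughly like this (the selectors and values here are hypothetical stand-ins, not the actual project's code):

```css
/* The LLM's approach: the same over-specified rule, repeated, all forced. */
.navbar .menu li a.item.active { margin-left: 2px !important; }
.navbar .menu li a.item.active { margin-left: 2px !important; } /* duplicate */
.navbar .menu li a.item        { margin-left: 2px !important; }

/* The actual fix: one plain rule adapting to the framework change. */
.menu a.item { margin-left: 2px; }
```

Blanket `!important` declarations are especially nasty in a shared stylesheet, because they override the framework's own cascade everywhere, which is exactly why so much of the rest of the app's CSS went dodgy.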

Depressingly, I suspect an executive would consider me far less productive because I only did 5 lines of change and the junior dev would have done thousands...

[–] jj4211@lemmy.world 1 points 2 days ago (1 children)

My electric bill last month was $15. It eventually is financially worth it.

Don't know how hard you went on your home lab; I use office rather than datacenter equipment, and it's quiet and plenty for my needs. For my professional test/dev needs I have that kind of equipment at work, so I don't need to train at home on gear for the sake of competence in the field.

[–] jj4211@lemmy.world 7 points 3 days ago

Yes, ask it why it deleted data when it did nothing of the sort, and it will still output similar text. You asked it to confess and explain, so it will do just that, regardless of whether it fits reality.

[–] jj4211@lemmy.world 1 points 3 days ago

I suppose they might have used a common MySQL instance for containerized infrastructure, or a crufty base image for their container(s)... but you do raise a pretty good indicator that at least one key thing is not running in a container.

But I'm not going to judge too hard on container vs. no container. The vintage of the platform is broadly problematic either way. I've seen, particularly in enterprise IT, some shockingly old container base images, with teams unwilling to refresh them because "they work".

In fact, teams that once would have been forced to rebase their crufty dependencies every so often, because they were bundled with an OS that became unacceptable, now gleefully push their ancient 12-year-old stack, because containers let it keep running no matter what kernel is underneath.

[–] jj4211@lemmy.world 1 points 3 days ago (3 children)

My solar covers more than my entire electric bills in the mild weather months.

I was thinking of moving to more modern systems with modest power consumption too. One of the systems I picked up is basically a case-as-a-heatsink with no fans.

[–] jj4211@lemmy.world 3 points 3 days ago

Sure, though there is a worrying lack of backup and resilience in the described scenario. It has the smell of someone who hasn't been bitten yet and isn't paying attention to industry best practices.

I'll give a pass on some of those things as not necessarily being a "must", but being hard-bound to a single server strikes me as a disaster waiting to happen.
