Unionize
next question
AI is a tech debt generator.
Any programmer who has worked with legacy code knows the situation where something was written by a former employee or a contractor without many comments or much documentation, making it difficult to modify (because of complexity or readability) or replace (because of non-existent business documentation and/or peculiar bugs and features).
AI accelerates these situations, but the person does not even exist. Which, IMO, is the main thing that needs to be called out.
Yeah, I've been trying to call this out at my company. Junior programmers, especially, don't seem to know how to turn AI responses into maintainable code.
I find it ironic, since I've mostly been on the QA side of dev. I've spent decades pointing out the stats showing that code is much more expensive to maintain than it is to write the first time, so now AI puts us in a position of writing something the first time a little faster, but that's even more expensive to maintain. Does not compute.
Not if you use it correctly. You don't write code with AI, you get inspiration to get over sticking points. You pick out the relevant bits, make certain you understand how they work, save hours of banging your head.
Ah yes, "just use it correctly". All these programmers convinced that they are one of the chosen few that "get it" and can somehow magically make it not a damaging, colossal waste of time.
"Inspiration", yeah, in the same way we can draw "inspiration" from a monkey throwing shit at a wall.
Not if you use it correctly.
Ah! "Git gud" elitism to paper over the risk.
The issue still stands: the few seniors you still have at the shop who can tell people WHY something is a bad idea are now distracted with juniors submitting absolute shit code for review and needing to be taught why that structure is a bad idea.
"Well, everyone else is doing it" was a bad rebuttal when you wanted to go to Chuck's party and Mom said no. Laundering "this is what everyone else writes" through an AI concentrator, when two generations of coders are self-taught and unmentored after the great post-Y2K purge of mentors and writers, isn't a better situation.
Well, yeah. Saying you should be competent at your job isn’t “elitism”. It’s like how someone can write some code by copy pasting from stack exchange, but not know how or why it works and it might not be well written. That’s no different to using AI and not knowing how or why it works and it might not be well written.
AI is a tool. The AI haters like yourself need to understand this, instead of thinking it’s a person.
AI for the win in figuring out how to use code libraries with minimal to non-existent documentation scattered across the entire web.
Moving away from GitHub to other git hosting sites.
Abandoning forges would make it harder for humans while bots could still download any publicly available repo.
Edit: Looks like I misread "to" as "and."
No. You archive your GH code with the README.md saying all new stuff is at GitLab, Codeberg, Bitbucket, etc., and a link to it.
There’s a lot more you can dig into, and that is by no means an exhaustive list. The more you learn about the nuance of how this shit works, the more you’ll be able to poke huge fucking holes in pretty much any argument anyone makes.
I mean, agentic AIs are getting good at outputting working code. Thousands of lines per minute; talking trash about it won't work.
However, I agree that losing the human element of writing code is losing a very important element of programming. So, I believe there should exist a strong resistance against this. Don't feel pressured to answer if you think your plans shouldn't be revealed, but it would be nice to know if someone is preparing a great resistance out there.
They are not good at consistently following best practices or architectural instructions. So you have to have some kind of hierarchical goal/context scope framework. But then the high-level goals actually need to be reasoned about, which LLMs don't do, so efforts to make the framework analyze/plan/reflect in order to select and subdivide those top goals fail.
I have to fight with Claude to get it to just do three or four back-and-forth questions with me to establish the actual requirement, instead of dumping 1000 lines of irrelevant code (and an MD document, and a usage guide, and a test suite) that ignores guidelines I had already given it.
This is honestly a lot of the problem: code generation tools can output thousands of lines of code per minute. Great, committable, defendable code.
There is basically no circumstance in which a project's codebase growing at a rate of thousands of lines per minute is a good thing. Code is a necessary evil of programming: you can't always avoid having it, but you should sure as hell try, because every line of code is capable of being wrong and will need to be read and understood later. Probably repeatedly.
Taking the approach to solving a problem that involves writing a lot of code, rather than putting in the time to find the setup that lets you express your solution in a little code, or reworking the design so code isn't needed there at all, is a mistake. It relinquishes the leverage that is the very point of software engineering.
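To illustrate that point with a hedged sketch (a hypothetical example, not from the thread): the same lookup logic written as a chain of branches versus as a small data-driven table. The second version has fewer lines that can be wrong, and extending it is one line of data rather than another branch.

```python
import os

# Hypothetical example: mapping file extensions to MIME types.

# Verbose approach: every case is another branch to read, test, and maintain.
def content_type_verbose(filename: str) -> str:
    if filename.endswith(".html"):
        return "text/html"
    elif filename.endswith(".css"):
        return "text/css"
    elif filename.endswith(".js"):
        return "application/javascript"
    elif filename.endswith(".png"):
        return "image/png"
    else:
        return "application/octet-stream"

# Data-driven approach: the design work moves the logic into a table,
# so supporting a new type means adding data, not adding code.
CONTENT_TYPES = {
    ".html": "text/html",
    ".css": "text/css",
    ".js": "application/javascript",
    ".png": "image/png",
}

def content_type(filename: str) -> str:
    _, ext = os.path.splitext(filename)
    return CONTENT_TYPES.get(ext, "application/octet-stream")
```

Both behave identically; the difference is how much of it a reviewer has to read and how much of it can silently be wrong.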
A tool that reduces the effort needed to write large amounts of human-facing, gets-committed-to-the-source-tree code, so that it's much easier and faster than finding the actual right way to parse your problem, is a tool that makes your project worse and that makes you a worse programmer when you hold it.
Maybe eventually someone will create a thinking machine that itself understands this, but it probably won't be someone who charges by the token.
This is why Pull Requests and approvals exist though. If I am reviewing a PR and it takes 400 lines of code to do something that should be 25 lines, I’ll pick that up in my review, leave feedback, and send it back.
It's just a greater level of abstraction. First we talked to the computers on their own terms with punch cards.
Then Assembly came along to simplify the process, allowing humans to write readable code while compiling into Machine Code so the computers can run it.
Then we used higher-level languages like C to create the Assembly Code required.
Then we created languages like Python, that were even more human-readable, doing a lot more of the heavy lifting than C.
I understand the concern, but it's just the latest step in a process that has been playing out since programming became a thing. At every step we give up some control, for the benefit of making our jobs easier.
I disagree. Even high-level languages will consistently produce the same results. There may be low-level differences depending on the compiler and the system's architecture, but if those are consistent, you will get the same results.
AI coding isn't an extremely human-readable, higher-level programming language. Using an LLM to generate code adds a literal black box and the interpretation of the user's and LLM's human language (which humans can't even do consistently) to the equation.
That's fair, but I'm not arguing that it's a higher-level language. I was trying to illustrate that it's just to help people code more easily - as all of the other steps were.
If you asked ten programmers to turn a given set of instructions into code, you'd end up with ten different blocks of code. That's the nature of turning English into code.
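As a hedged sketch of that point (a made-up spec, not from the thread): two programmers given the instruction "return the distinct lowercase words of a sentence, sorted alphabetically" can produce visibly different code with identical behavior.

```python
# Hypothetical spec: return the distinct lowercase words of a sentence, sorted.

def distinct_words_a(sentence: str) -> list:
    # Programmer A: imperative style with an explicit set.
    seen = set()
    for word in sentence.lower().split():
        seen.add(word)
    return sorted(seen)

def distinct_words_b(sentence: str) -> list:
    # Programmer B: a single expression, same behavior.
    return sorted(set(sentence.lower().split()))
```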
The difference is that this is a tool that does it, not a person. You write things in English, it produces code.
FWIW, I enjoy using a hex-editor to tinker around with Super Famicom ROMs in my free time - I'm certainly not anti-coding. As OP said, though, AI is now pretty good at generating working code - it's daft not to use it as a tool.
I don't think it's at the point where it helps people code more easily, but maybe I'm just exclusively experiencing edge cases and turning to it for the wrong uses. I've only had failures. Hallucinations that waste my time, and flawed algorithms.
My favorite was a few weeks ago, when I was having a rough day and needed a complicated algorithm to make a decision based on an inputted date. I told it that if I plug in value A to its algorithm, the answer is wrong. It went step by step explaining its "reasoning": at the pivotal step it plugged in a different year than was in A, for just that step, and then proceeded to confirm to itself that if you plug in A, you get the right answer.
Maybe someday it will help, or maybe some problems it is useful for, I've just never had that experience.
I still look for answers on Stack Overflow, instead of waiting for an AI summary of the same answer.
Brazil must invent Lua 2.
If you're asking about how they're directly combating AI, there are efforts to poison AI, basically injecting it with malware.
Most programmers are embracing AI, as it's the use case where it acts as the biggest force multiplier.
Shhhh don't tell them. We're trying to leave these guys in the dust.
They will adapt or die. If they haven't adapted already telling them isn't gonna change their minds.