this post was submitted on 12 Oct 2025
64 points (92.1% liked)

No Stupid Questions

44043 readers
833 users here now

No such thing. Ask away!

!nostupidquestions is a community dedicated to being helpful and answering each others' questions on various topics.

The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:

Rules (interactive)


Rule 1- All posts must be legitimate questions. All post titles must include a question.

All posts must be legitimate questions, and all post titles must include a question. Questions that are joke or trolling questions, memes, song lyrics as title, etc. are not allowed here. See Rule 6 for all exceptions.



Rule 2- Your question subject cannot be illegal or NSFW material.

Your question subject cannot be illegal or NSFW material. You will be warned first, banned second.



Rule 3- Do not seek mental, medical and professional help here.

Do not seek mental, medical and professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.



Rule 4- No self promotion or upvote-farming of any kind.

That's it.



Rule 5- No baiting or sealioning or promoting an agenda.

Questions which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.



Rule 6- Regarding META posts and joke questions.

Provided it is about the community itself, you may post non-question posts using the [META] tag on your post title.

On fridays, you are allowed to post meme and troll questions, on the condition that it's in text format only, and conforms with our other rules. These posts MUST include the [NSQ Friday] tag in their title.

If you post a serious question on friday and are looking only for legitimate answers, then please include the [Serious] tag on your post. Irrelevant replies will then be removed by moderators.



Rule 7- You can't intentionally annoy, mock, or harass other members.

If you intentionally annoy, mock, harass, or discriminate against any individual member, you will be removed.

Likewise, if you are a member, sympathiser or a resemblant of a movement that is known to largely hate, mock, discriminate against, and/or want to take lives of a group of people, and you were provably vocal about your hate, then you will be banned on sight.



Rule 8- All comments should try to stay relevant to their parent content.



Rule 9- Reposts from other platforms are not allowed.

Let everyone have their own content.



Rule 10- Majority of bots aren't allowed to participate here. This includes using AI responses and summaries.



Credits

Our breathtaking icon was bestowed upon us by @Cevilia!

The greatest banner of all time: by @TheOneWithTheHair!

founded 2 years ago
MODERATORS
 

Do you have any ideas or thoughts about this?

you are viewing a single comment's thread
view the rest of the comments
[–] obbeel@lemmy.eco.br 8 points 1 week ago (3 children)

I mean, agentic AIs are getting good at outputting working code. Thousands of lines per minute; talking trash of it won't work.

However, I agree that losing the human element of writing code is losing a very important element of programming. So, I believe there should exist a strong resistance against this. Don't feel pressured to answer if you think your plans shouldn't be revealed, but it would be nice to know if someone is preparing a great resistance out there.

[–] aubeynarf@lemmynsfw.com 13 points 1 week ago

they are not good at consistently following best practices or architectural instructions. So you have to have some kind of hierarchical goal/context scope framework - But then the high-level goals actually need to be reasoned about, which LLMs don’t do, so efforts to make the framework analyze/plan/reflect In order to select and sub divide those top goals fail.

I have to fight with Claude to get it to just do three or four back-and-forth questions with me to establish the actual requirement instead of dumping 1000 lines of irrelevant code (And an MD document, and a usage guide, and an test suite) that ignores guidelines I had already given it.

[–] planish@sh.itjust.works 7 points 1 week ago (1 children)

This is honestly a lot of the problem: code generation tools can output thousands of lines of code per minute. Great, committable, defendable code.

There is basically no circumstance in which a project's codebase growing at a rate of thousands of lines per minute is a good thing. Code is a necessary evil of programming: you can't always avoid having it, but you should sure as hell try, because every line of code is capable of being wrong and will need to be read and understood later. Probably repeatedly.

Taking the approach to solving a problem that involves writing a lot of code, rather than putting in the time to find the setup that lets you express your solution in a little code, or reworking the design so code isn't needed there at all, is a mistake. It relinquishes the leverage that is very point of software engineering.

A tool that reduces the effort needed to write large amounts of human-facing, gets-committed-to-the-source-tree code, so that it's much easier and faster than finding the actual right way to parse your problem, is a tool that makes your project worse and that makes you a worse programmer when you hold it.

Maybe eventually someone will create a thinking machine that itself understands this, but it probably won't be someone who charges by the token.

[–] FreedomAdvocate 1 points 1 week ago

This is why Pull Requests and approvals exist though. If I am reviewing a PR and it takes 400 lines of code to do something that should be 25 lines, I’ll pick that up in my review, leave feedback, and send it back.

[–] Apepollo11@lemmy.world 6 points 1 week ago (1 children)

It's just a greater level of abstraction. First we talked to the computers on their own terms with punch cards.

Then Assembly came along to simplify the process, allowing humans to write readable code while compiling into Machine Code so the computers can run it.

Then we used higher-level languages like C to create the Assembly Code required.

Then we created languages like Python, that were even more human-readable, doing a lot more of the heavy lifting than C.

I understand the concern, but it's just the latest step in a process that has been playing out since programming became a thing. At every step we give up some control, for the benefit of making our jobs easier.

[–] theparadox@lemmy.world 5 points 1 week ago (1 children)

I disagree. Even high level languages will consistently produce the same results. There may be low level differences depending on the compiler and the system's architecture but if those are consistent you will get the same results.

AI coding isn't an extremely human readable higher level programming language. Using an LLM to generate code adds a literal black box and the interpretation of the user and LLM's human language (which humans can't even do consistently) to the equation.

[–] Apepollo11@lemmy.world 1 points 1 week ago* (last edited 1 week ago) (1 children)

That's fair, but I'm not arguing that it's a higher-level language. I was trying to illustrate that it's just to help people code more easily - as all of the other steps were.

If you asked ten programmers to turn a given set of instructions into code, you'd end up with ten different blocks of code. That's the nature of turning English into code.

The difference is that this is a tool that does it, not a person. You write things in English, it produces code.

FWIW, I enjoy using a hex-editor to tinker around with Super Famicom ROMs in my free time - I'm certainly not anti-coding. As OP said, though, AI is now pretty good at generating working code - it's daft not to use it as a tool.

[–] theparadox@lemmy.world 1 points 1 week ago

I don't think it's at the point where it helps people code more easily, but maybe I'm just exclusively experiencing edge cases and turning to it for the wrong uses. I've only had failures. Hallucinations that waste my time, and flawed algorithms.

My favorite was a few weeks ago when I was having a rough day and needed a complicated algorithm to make a decision based on an inputted date. I told it that if I plug in value A to its algorithm, the answer is wrong. It went step by step explaining its "reasoning"and it returned the correct answer and then at the pivotal step it plugged in a different year than was in A, for just that step, and then proceeded to confirm to itself that if you plug in A, you get the right answer.

Maybe someday it will help, or maybe some problems it is useful for, I've just never had that experience.