I'd never ask a friggin machine to do coding for me, that's MY blast.
That said, I've had good luck asking GPT specific questions about obscure features of JavaScript and of various browsers. It'll often feed me a sample script using the feature it explains ... a lot more helpful than many of the wordy websites like MDN ... saving me shit-tons of time that I'd spend bouncing around a half-dozen 'help' pages.
I've been using it to code a microservice as a PoC for semantic search. Since I've basically never coded Python (mainly PHP, but I can do many langs), I've had to rely on AI (Kimi K2, or agentic Claude, I think 4.5 or 4, can't remember) because I don't know the syntax, features, best practices, or the tools to use for formatting, static analysis, and type checks.
Mind you, I'd basically never coded in Python besides some shit in uni, which was 5-10 years ago. AI was a big help - although it didn't spit out fully working code, I have enough knowledge in this field to fix the issues. And since I learn mainly by practice and not theory, AI is great precisely because - same as many YouTubers and free tutorials - it spits out unoptimized and broken code, and fixing it is the practice.
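Not my actual code, but a minimal sketch of the kind of endpoint I mean, assuming FastAPI and sentence-transformers; the model name and the toy corpus are placeholders, not my real setup:

```python
# Minimal semantic-search microservice sketch (illustrative, not the real PoC).
from fastapi import FastAPI
from pydantic import BaseModel
from sentence_transformers import SentenceTransformer, util

app = FastAPI()
model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice

# Toy in-memory corpus; a real PoC would load documents from storage.
DOCS = [
    "How to reset your password",
    "Invoice and billing questions",
    "Troubleshooting connection errors",
]
DOC_EMBEDDINGS = model.encode(DOCS, convert_to_tensor=True)

class Query(BaseModel):
    text: str
    top_k: int = 3

@app.post("/search")
def search(query: Query):
    # Embed the query and rank documents by cosine similarity.
    q_emb = model.encode(query.text, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, DOC_EMBEDDINGS)[0]
    ranked = sorted(zip(DOCS, scores.tolist()), key=lambda p: p[1], reverse=True)
    return [{"doc": d, "score": s} for d, s in ranked[: query.top_k]]
```

(For the formatting/static-analysis/type-check side, think tools like ruff and mypy - the kind of ecosystem knowledge I was missing.)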
I usually don't use it for my main line of work (PHP), besides some boilerplate (take this class, make a test, make it look the same as this other test = 300 lines I don't have to write myself).
I find that if I ask it about procedures with any vague steps, AI will stumble and sometimes put me into loops: it tells me to do A, A fails, so do B, B fails, so it tells me to do A...
Almost as if it was made to simulate human output but without the ability to scrutinize itself.
To be fair most humans don't scrutinize themselves either.
(Fuck AI though. Planet burning trash)
This is news?
It's like having a lightning-fast junior developer at your disposal. If you're vague, he'll go on shitty side-quests. If you overspecify, he'll get overwhelmed. You need to break down tasks into manageable chunks. You'll need to ask follow-up questions about every corner case.
A real junior developer will have improved a lot in a year. Your AI agent won't have improved.
This is the real thing. You can absolutely get good code out of AI, but it requires a lot of hand-holding. It helps me speed up some tasks, especially boring ones, but I don't see it ever replacing me. It makes far too many errors, and requires me to point them out and to point it in the direction of the solution.
They are great at churning out massive amounts of code. They're also great at completely missing the point. And the massive amount of code needs to be checked and reviewed. Personally I'd rather write the code and have the AI review it. That's a much more pleasant way to work, and that way it actually enhances quality.

Anyone blindly having AI write their code is an absolute moron.
Anyone with decent experience (5-10 years, maybe 10+?) can absolutely fucking skyrocket their output if they properly set up their environments and treat their agents as junior devs instead of competent programmers. You shouldn't trust generated code any more than you trust someone fresh out of college, but they produce code in seconds instead of weeks.
I have tripled my output while producing more secure code (based on my security audits), safer code (based on code coverage and security audits), and less error-prone code (based on production logs and our unchanged QA process).
Now, the ethical issues and environmental issues, those I can 100% get behind. And I have no idea what companies are going to do in 10 years when they have to replace people like me and haven't been hiring or training replacements. But the productivity and quality debates are absolutely ridiculous, as long as a strong dev is behind the wheel and has been trained to use the tools.
Consider the facts:
People are very bad at judging their own productivity, and AI consistently makes devs feel like they are working faster, while in fact slowing them down.
I've experienced it myself - it feels fucking great to prompt a skeleton and have something brand new up and running in under an hour. The good chemicals come flooding in because I'm doing something new and interesting.
Then I need to take a scalpel to a hundred scattered lines to get CI to pass. Then I need to write tests that actually test functionality. Then I start extending things and realize the implementation is too rigid and I need to change the architecture.
It is at this point that I admit to myself that going in intentionally with a plan and building it myself the slow way would have saved all that pain and probably gotten the final product shipped sooner, even if the prototype shipped later.
No shit.
I actually believed somebody when they told me it was great at writing code, and asked it to write me the code for a very simple Lua mod. It made several errors and ended up wasting my time because I had to rewrite it.
In a postgraduate class, everyone was praising AI, calling it nicknames and even their friend (yes, friend). One day, the professor and a colleague were discussing some code when I approached, and they started their routine bullying of me for being dumb and not using AI. Then I looked at his code and asked to test his core algorithm, which he had converted from Fortran code and "enhanced". I ran it with some test data, compared it to the original code, and the result was different! They blindly trusted AI code that deviated from their theoretical methodology, and they are publishing papers with those results!
Even after I showed them the different result, they weren't convinced of anything and still bully me for not using AI. Seriously, this shit has become some sort of cult at this point. People are becoming irrational. If people at other universities are behaving the same way and publishing like this, I'm seriously concerned for the future of science and humanity itself. Maybe we should archive everything published up to 2022, to leave as a base for the survivors of our downfall.
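The check itself was nothing fancy. A minimal sketch of that kind of differential test, with illustrative stand-in functions (not their actual algorithm):

```python
# Differential test: feed the ported code and the trusted original the same
# random inputs and compare outputs. The two lambdas at the bottom are
# illustrative stand-ins, not the actual algorithm from the class.
from typing import Callable

import numpy as np

def differential_test(
    candidate: Callable[[np.ndarray], np.ndarray],
    reference: Callable[[np.ndarray], np.ndarray],
    trials: int = 1000,
    n: int = 64,
    seed: int = 0,
) -> None:
    rng = np.random.default_rng(seed)
    for i in range(trials):
        x = rng.normal(size=n)
        got, want = candidate(x), reference(x)
        # Divergence beyond numerical tolerance means the port is wrong.
        if not np.allclose(got, want, rtol=1e-9, atol=1e-12):
            raise AssertionError(
                f"trial {i}: max abs diff {np.max(np.abs(got - want)):.3e}"
            )
    print(f"{trials} trials matched within tolerance")

if __name__ == "__main__":
    # Here the "enhanced" port quietly changed a normalization, so it fails fast.
    differential_test(
        candidate=lambda x: np.cumsum(x) / len(x),
        reference=lambda x: np.cumsum(x),
    )
```

Five minutes of this would have caught the deviation before anything got published.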

A computer is a machine that makes human errors at the speed of electricity.