this post was submitted on 16 May 2025
91 points (96.0% liked)

No Stupid Questions


For background, I am a programmer, but have largely ignored everything having to do with AI (re: LLMs) for the past few years.

I just got to wondering, though. Why are these LLMs generating high-level programming language code instead of skipping the middleman and spitting out raw 1s and 0s for x86 to execute?

Is it that they aren't trained on this sort of thing? Is it for the human code reviewers to be able to make their own edits on top of the AI-generated code? Are there AIs doing this that I'm just not aware of?

I just feel like there might be some level of optimization that could be made by something that understands the code and the machine at this level.

top 37 comments
[–] I_Dont_Believe_You@feddit.org 0 points 3 hours ago

The code still needs to be reviewable by humans and you can’t do that realistically with machine code. Somebody needs to be able to look at the code and see what’s going on without just taking the AI’s word for it.

[–] nek0d3r@lemmy.dbzer0.com -1 points 3 hours ago

Generative AI wasn't made to write good code, nor can it. It was trained to make lazy junior developers pay for a subscription to give code reviewers even bigger headaches.

[–] AA5B@lemmy.world 13 points 15 hours ago (1 children)

You’re assuming AI writes usable code. I haven’t seen it.

Think of the AI more as a writing assistant, like autocomplete or Stack Overflow but more so. The IDE I use can autocomplete variables or function calls, but the AI can autocomplete entire lines of code or entire unit tests. AI might try to fit an online answer and related docs to a problem I’m seeing. AI might even create a class around a public API that is a great starting point for my code. AI can be a useful tool, but it can’t write usable code.

[–] DeathsEmbrace@lemm.ee 4 points 4 hours ago

I think that's the misconception: people think AI is going to give you a whole program if you just tell it to do it.

[–] nandeEbisu@lemmy.world 11 points 20 hours ago
  1. Machine code is less portable. As new CPU optimizations and instructions are released, it's easier to update a compiler to use them in its optimizations than to regenerate and retest all of your code. Also, if you need to target different OSes, like Windows vs. macOS vs. Linux, it's easier to write portable code in something higher level like Python or Java (see the sketch after this list).

  2. Static analysis to check for things like memory leaks or security vulnerabilities such as SQL injection is likely easier to do on human-readable code than on assembly.

  3. It's easier for a human to go in and tweak code that is written in a human-readable language than in assembly.
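As a rough illustration of the portability point in item 1 (a hypothetical example, not from the comment above), the same high-level source can be re-targeted by a compiler, while raw machine code is tied to one instruction set:

```c
/* A minimal, hypothetical example: one portable C function. */
int add(int a, int b) {
    return a + b;
}

/* A typical optimizing compiler emits different machine code per target,
 * roughly:
 *   x86-64:  lea eax, [rdi+rsi]
 *            ret
 *   AArch64: add w0, w0, w1
 *            ret
 * If an LLM emitted raw machine code instead of the C source, every new
 * CPU target would mean regenerating and retesting that output. */
```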

[–] Blackmist@feddit.uk 31 points 1 day ago

Because they're trained on open source code and stack overflow answers.

Neither of which are commonly written in assembly.

[–] Shimitar@downonthestreet.eu 104 points 1 day ago (2 children)

They would not be able to.

AIs only mix and match what they have copied from human work, and most of the code out there is in high-level languages, not machine code.

In other words, AIs don't know what they are doing; they just maximize a probability to give you an answer, that's all.

But really, the objective is to provide a human with more or less correct boilerplate code, and humans would not read machine code.

[–] uranibaba@lemmy.world 33 points 1 day ago (1 children)

Even if LLMs could produce machine code, the compiler is likely better at producing it anyway.

[–] riskable@programming.dev 18 points 1 day ago (1 children)

To add to this: It's much more likely that AI will be used to improve compilers—not replace them.

Aside: AI is so damned slow already. Imagine AI compiler times... Yeesh!

[–] naught101@lemmy.world -1 points 1 day ago (2 children)

Strong doubt that AI would be useful for producing improved compilers. That's a task that would require an extremely detailed understanding of the logical edge cases of a given language-to-machine-code translation. By definition, no content exists that can be useful for training in that context. AIs will certainly try to help, because they are people-pleasing machines. But I can't see them being actually useful.

[–] riskable@programming.dev 7 points 1 day ago (2 children)

Umm... AI has been used to improve compilers dating all the way back to 2004:

https://github.com/shrutisaxena51/Artificial-Intelligence-in-Compiler-Optimization

Sorry that I had to prove you wrong so overwhelmingly, so quickly 🤷

[–] naught101@lemmy.world 2 points 14 hours ago

Yeah, as @uranibaba@lemmy.world says, I was using the narrow meaning of AI=ML (as the OP was). Certainly not surprised that other ML techniques have been used.

That Cummins paper looks pretty interesting. I only skimmed the first page, but it looks like they're using LLMs to estimate optimal compiler parameters? That's pretty cool. But they also say something about a 91% hit rate for compliant code; I wonder what's happening in the other 9%. Noncompliance seems like a big problem? But I only have surface-level compiler knowledge, probably not enough to follow the whole paper properly.

[–] uranibaba@lemmy.world 2 points 23 hours ago

Looking at the tags, I only found one with the LLM tag, which I assume naught101 meant. I think people here tend to forget that there is more than one type of AI, and that they have been around for longer than ChatGPT 3.5.

[–] JustJack23@slrpnk.net 3 points 1 day ago (1 children)

I agree, but I would clarify that this is true for the current gen of LLMs. AI is a much broader subject.

[–] naught101@lemmy.world 2 points 14 hours ago

Yeah, good catch. I know that, but was forgetting it in the moment.

[–] Thaurin@lemmy.world 13 points 1 day ago

This is not necessarily true. Many models have been trained on assembly code, and you can ask them to produce it. Some mad lad created some scripts a while ago to let AI “compile” to assembly and create an executable. It sometimes worked for simple “Hello, world” type stuff, which is hilarious.

But I guess it is easier for a large language model to produce working code in a higher-level programming language, where concepts and functions are better defined in the body of text it was trained on.

[–] Grimy@lemmy.world 21 points 1 day ago (1 children)

You're a programmer? Yes, integrating and debugging binary code would be absolutely ridiculous.

[–] TranquilTurbulence@lemmy.zip 19 points 1 day ago* (last edited 1 day ago)

Debugging AI-generated code is essential. Never run the code before reading it yourself and making a whole bunch of necessary adjustments and fixes.

If you jump straight to binary, you can’t fix anything. You can just tell the AI it screwed up, roll the dice and hope it figures out what went wrong. Maybe one day you can trust the AI to write functional code, but that day isn’t here yet.

Then there’s also security and privacy. What if the AI adds something you didn’t want it to add? How would you know, if it’s all in binary?

[–] some_guy@lemmy.sdf.org 13 points 1 day ago (3 children)

No one is training LLMs on machine code. This is sorta silly, really.

Decompiling an executable into human-readable code could be useful. But you would probably train on assembly mnemonics (the readable form of the opcodes), not the raw machine code.
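For a concrete sense of that distinction (an illustrative, hypothetical example), here is the same trivial function at three levels of representation; the raw bytes are what training on literal machine code would look like:

```c
/* One function, three representations (x86-64, illustrative):
 *
 * Raw machine code (the bytes the CPU executes):
 *     b8 2a 00 00 00   c3
 * Assembly mnemonics (what a disassembler shows):
 *     mov eax, 42
 *     ret
 * C source (the kind of text LLMs are mostly trained on): */
int forty_two(void) {
    return 42;
}
```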

[–] naught101@lemmy.world 5 points 1 day ago

I think on top of this, the question has an incorrect implicit assumption - that LLMs understand what they produce (this would be necessary for them to produce code in languages other than what they're trained on).

LLMs don't produce intelligent output. They produce plausible strings of symbols, based on what is common in a given context. That can look intelligent only insofar as the training dataset contains intelligently produced material.

[–] TauZero@mander.xyz 2 points 1 day ago (1 children)

Language is language. To an LLM, English is as good as Java is as good as machine code to train on. I like to imagine if we suddenly uncovered a library of books left over from ancient aliens, we could train an LLM on it (as long as the symbols themselves are legible), and it would generate stories in the alien language that would sound correct to the aliens, even though the alien world and alien life are completely unknown and incomprehensible to us.

[–] aubeynarf@lemmynsfw.com 3 points 23 hours ago (1 children)

Not necessarily. Just as interpreting assembly to understand intent is harder than interpreting “resultRows.map(r -> r.firstName)”, additional structure/grammar/semantics are footholds that allow the model to form patterns at a higher level of abstraction.

[–] TauZero@mander.xyz 1 points 16 hours ago

Only because it's English and the model is already trained on a large corpus of English text, so it has some idea of what a "table row" is, for example. It could learn the concept from reading assembly code from scratch; it would just take longer. Hell, even Lego bricks can be trained on! https://avalovelace1.github.io/LegoGPT/

Our system tokenizes a LEGO design into a sequence of text tokens, ordered in a raster-scan manner from bottom to top. ... At inference time, LegoGPT generates LEGO designs incrementally by predicting one brick at a time given a text prompt.

[–] four@lemmy.zip 19 points 1 day ago

Also, not everything runs on x86. For example, you couldn't write a website in raw binary, because the browser wouldn't run it. Or maybe you already have a Python library and you just need to interact with it. Or maybe you want code that can run on x86 and ARM, without having to generate it twice.
As long as the output code has to interact with other code, raw binary won't be useful.

I also expect that it might be easier for an LLM to generate typical code and have a solid, well-tested compiler turn it into binary.

[–] fakeplastic@lemmy.dbzer0.com 12 points 1 day ago* (last edited 1 day ago)

The code is usually crap, so yeah, like you said, it needs to be in a language a person can easily read and fix.

[–] ininewcrow@lemmy.ca 7 points 1 day ago* (last edited 1 day ago)

To me this is a fascinating analogy

This is like having a civilization of Original Creators who are only able to communicate with hand gestures. They have no ears and can't hear sound or produce any vocal noises. They discover a group of humans and raise them to only communicate with their hands because no one knows what full human potential is. The Original Creators don't know what humans are able to do or not do so they teach humans how to communicate with their hands instead because that is the only language that the Original Creators know or can understand.

So now the humans go about doing things communicating in complex ways with their hands and gestures to get things done like their Original Creators taught them.

At one point a group of humans start using vocal communications. The Original Creators can't understand what is being said because they can't hear. The humans start learning basic commands and their vocalizations become more and more complex as time goes on. At one point, their few basic vocal commands are working at the same speed as hand gestures. The humans are now working a lot faster with a lot more complex problems, a lot easier than their Original Creators. The Original Creators are happy.

Now the humans continue developing their language skills, and they are able to talk faster and with more content than the Original Creators could ever achieve. Their skills become so well tuned that they are able to share their knowledge a lot faster with every one of their human members. Their development now outpaces the Original Creators, who are not able to understand what the humans are doing, saying or creating.

The Original Creators become fearful and frightened as they watch the humans grow exponentially on their own without the Original Creators participation or inclusion.

[–] HubertManne@piefed.social 6 points 1 day ago (2 children)

I would not want to have any non-human-reviewed code going out from an AI system.

[–] Lumiluz@slrpnk.net 1 points 21 hours ago (1 children)

Could be useful for reverse engineering and decompilation though no?

[–] HubertManne@piefed.social 1 points 14 hours ago

I would think so, but I don't have much experience with that.

[–] f43r05@lemmy.ca 3 points 1 day ago (1 children)

This here. Black box machine code, created by a black box, sounds terrifying.

[–] HubertManne@piefed.social 2 points 1 day ago

I mean, we know the code does not always work, and it's often not the cleanest when it does. If code from AI were perfect in a six-sigma way, 99.999% of the time, then I could see the black-box thing and just sussing things out in the lower levels. Even then, any time it did not work you would need to have it give its output in human-readable form so we could find the bug, but if it was that good that should happen like once a year or something.

[–] ExtantHuman@lemm.ee 4 points 1 day ago

... Because they weren't asked to generate that?

[–] dual_sport_dork@lemmy.world 4 points 1 day ago

I imagine this is hypothetically possible given correct and sufficient training data, but that's beside the point I think needs to be made here.

Basically nothing anyone is programming in user space these days is written as machine code, and certainly none of it runs on the bare metal of the processor. Nothing outside of extremely low-power embedded microcontroller applications, or dweebs deliberately producing for old-school video game consoles, or similar, anyway.

Everything happens through multiple layers of abstractions, libraries, hardware privilege levels, and APIs provided by your operating system. At the bottom of all of those is the machine code resulting in instructions happening on the processor. You can't run plain machine code simultaneously with a modern OS, and even if you did it'd have to be in x86 protected mode so that you didn't trample the memory and registers in use by the OS or other applications running on it, and you'd have a tough-to-impossible time using any hardware or peripherals including networking, sound, storage access, or probably even putting output on the screen.
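To make that concrete, here is a minimal sketch (an illustrative example, not from the comment) of how even trivial user-space output goes through the operating system rather than straight to the hardware:

```c
/* Minimal sketch: even "low-level" user-space C never touches the
 * hardware directly. On Linux, write() wraps the write system call;
 * the kernel, not our program's machine code, drives the terminal. */
#include <unistd.h>

int main(void) {
    const char msg[] = "hello\n";
    write(1, msg, sizeof msg - 1);  /* fd 1 = standard output, provided by the OS */
    return 0;
}
```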

So what you're asking for is probably not an output anyone would actually want.

[–] Z3k3@lemmy.world 2 points 1 day ago (1 children)

I think I saw a video a few weeks ago where two AI assistants realised the other was also an AI, so they agreed to switch to another protocol (to me it sounded like 56k modem noises or old 8-bit cassette tapes played on a hi-fi) so they could communicate more efficiently.

I suspect something similar would happen with code.

[–] nagaram@startrek.website 6 points 1 day ago (1 children)

That was a tech demo, I'm pretty sure, and not just a thing they do, btw. A company was trying to make more efficient sound-based comms for AI(?)

[–] Z3k3@lemmy.world 1 points 1 day ago

Clearly, as both sides of the conversation were in the audio. I believe my point still stands in relation to the original question, i.e. talk at the level both sides agree on.