this post was submitted on 09 May 2026
842 points (99.6% liked)

Programmer Humor

[–] palordrolap@fedia.io 41 points 3 days ago (1 children)

Something somewhere was definitely doing the conversion for you, but it could have been your editor, the compiler, or something in between, like a C preprocessor directive getting loaded in by your configuration.

[–] 418_im_a_teapot@sh.itjust.works 6 points 3 days ago (1 children)

I'd be pissed if it was my editor. A compiler used on a global scale would make sense.

[–] raspberriesareyummy@lemmy.world 21 points 3 days ago (1 children)

Nah, I would absolutely want my compiler to error out hard on characters that are not allowed per the standard.

[–] mkwt@lemmy.world 3 points 2 days ago* (last edited 2 days ago) (1 children)

In C and C++, the source character set is implementation-defined. This means that each compiler sets its own rules about which characters are accepted. For example, compilers could choose to accept ASCII, EBCDIC, Unicode, or some combination.
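
To make that concrete, here's a minimal sketch (my example, not anything from the original post): spelling the non-ASCII character as a universal character name keeps the file pure ASCII, so it doesn't matter which source character set the compiler assumes. The -finput-charset flag in the comment is GCC-specific.

```c
/* Minimal sketch: \u00E9 is a universal character name for "é", so no
 * non-ASCII byte ever appears in the source file, whatever source
 * character set the compiler assumes.
 *
 * With GCC you can also declare the source encoding explicitly, e.g.:
 *   gcc -finput-charset=ISO-8859-1 demo.c
 *   gcc -finput-charset=UTF-8      demo.c
 */
#include <stdio.h>

int main(void)
{
    /* Prints "café", assuming a UTF-8 execution character set and terminal. */
    printf("caf\u00E9\n");
    return 0;
}
```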

So the ISO standard will say that the ; character is the end-of-statement punctuation, but it is up to the compiler to say which character(s) or code point(s) represent the ISO ;.

The ISO standards also require compilers to define a separate execution character set to specify values that can be stored in char and used with the string library functions. The execution character set doesn't have to be the same as the source character set.
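
A tiny sketch of that split (my example; the EBCDIC value assumes an IBM1047 execution character set): the number stored for 'A' comes from the execution character set, so the same line of source can produce different values depending on how the compiler is configured.

```c
/* Sketch of the execution character set: the value stored in a char comes
 * from whatever encoding the compiler targets, not from the source file.
 *
 * With the usual ASCII/UTF-8 based execution character set this prints 65.
 * A compiler targeting EBCDIC (e.g. GCC with -fexec-charset=IBM1047, if
 * your iconv knows that name) would store 193 for the same literal.
 */
#include <stdio.h>

int main(void)
{
    printf("'A' == %d\n", (int)'A');
    return 0;
}
```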

Edit: I should also mention that the rules for this stuff are changing a lot in ISO C23 and C++23 (standards I haven't personally adopted yet). Basically, the ISO 23 standards require compilers to support UTF-8 source files, and they map every source character in the ISO standard to its corresponding Unicode character.
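
For instance, something like this (a sketch assuming a C23 compiler such as a recent GCC with -std=c23, and the source file saved as UTF-8) is what the new rules are meant to make portable:

```c
/* Sketch assuming a C23 compiler (e.g. gcc -std=c23 utf8.c) and a source
 * file saved as UTF-8, which C23 requires compilers to accept.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* u8 literals are encoded as UTF-8 regardless of the execution
     * character set; in C23 their element type is char8_t. */
    const char *s = (const char *)u8"naïve";

    /* "naïve" is 5 characters but 6 bytes in UTF-8, because ï is 2 bytes. */
    printf("%zu bytes\n", strlen(s));
    return 0;
}
```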

[–] raspberriesareyummy@lemmy.world 2 points 2 days ago (1 children)

Mhh today I learned. That's wild. I would have thought that any sane person would allow only 7-bit ASCII for the source code, and forward-compatible character sets in strings (every standard iteration being allowed to add characters, but not remove them).
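
If you really wanted to enforce that, a check is easy to sketch; this little scanner (entirely hypothetical, nothing from the thread) just flags any byte outside 7-bit ASCII:

```c
/* Hypothetical sketch: report any byte outside 7-bit ASCII in a source file,
 * the kind of check you'd run if you wanted to enforce ASCII-only sources.
 * Usage: ./ascii_check file.c
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 2;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror(argv[1]);
        return 2;
    }

    int c, found = 0;
    long offset = 0;
    while ((c = fgetc(f)) != EOF) {
        if (c > 0x7F) {
            printf("non-ASCII byte 0x%02X at offset %ld\n", c, offset);
            found = 1;
        }
        offset++;
    }
    fclose(f);
    return found;  /* non-zero exit if anything was flagged */
}
```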

[–] mkwt@lemmy.world 3 points 2 days ago (1 children)

At the time that C was designed, ASCII was not a universal standard. It was one encoding competing with other encodings.

Ok that's a fair point I had overlooked. Thanks for explaining.