Technology
Which posts fit here?
Anything that is at least tangentially connected to technology, social media platforms, information technology, and tech policy.
Post guidelines
[Opinion] prefix
Opinion (op-ed) articles must use the [Opinion] prefix before the title.
Rules
1. English only
The title and associated content have to be in English.
2. Use original link
The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Personal attacks of any kind are expressly forbidden. If you can't argue your position without attacking a person's character, you have already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by the community rules but violates the lemmy.zip instance rules, those rules will be enforced.
Companion communities
!globalnews@lemmy.zip
!interestingshare@lemmy.zip
If someone is interested in moderating this community, message @brikox@lemmy.zip.
I fucking hate when people ask an LLM "what were you thinking", because the answer is meaningless, and it just showcases how little people understand about how these models actually work.
Any activity inside the model that could be considered even a remote approximation of "thought" is completely lost as soon as it outputs a token. The only memory it has is the context window: the past history of inputs and outputs.
All it's going to do when you ask it that is look back over the past output and attempt to rationalize what it previously output.
And actually, even that is excessively anthropomorphizing the model. In reality, it's just generating a plausible response to the question "what were you thinking", given the history of the conversation.
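You can actually see this in the shape of the API: every call to a chat model is stateless, and the only "memory" is the message history you send back each time. Here's a minimal Python sketch of that - `generate` is a made-up stand-in for any LLM call, not a real library function:

```python
# Minimal sketch of why "what were you thinking" gets a meaningless answer.
# `generate` is a hypothetical stand-in for any LLM API call: a pure function
# of the visible message history, with no hidden state carried between calls.

def generate(history: list[str]) -> str:
    # Stand-in for the model: its output depends ONLY on the text it can see.
    return f"(a plausible continuation of {len(history)} visible messages)"

history = ["user: why did you recommend X?"]
history.append("assistant: " + generate(history))

# Whatever internal activations produced that reply were discarded the moment
# the tokens came out. Asking about them just appends more text to predict from.
history.append("user: what were you thinking?")
history.append("assistant: " + generate(history))
print("\n".join(history))
```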
I fucking hate this version of "AI". I hate how it's advertised. I hate the managers and executives drinking the Kool-Aid. I hate that so much of the economy is tied up in it. I hate that it has the energy demand and carbon footprint of a small nation-state. It's absolute insanity.
We need to keep educating people about it, and I found a really good method - explain it to them using the Chinese Room thought experiment.
In short: imagine you're in a room with a manual and infinite writing utensils, but nothing else. Every now and again a piece of paper with some Chinese characters is slipped through a slit in the wall. Your task is to get that paper, and - using the provided manual - paint other Chinese characters on another piece of paper. Basically, "if you see X character, then paint Y character". Once you're done, you slip your paper through the slit to the other side. This goes on back and forth.
To the person on the other side of the wall, it seems like they're having a conversation with someone fluent in Chinese, whereas you're just painting shapes based on the provided instructions.
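If it helps, the room fits in a few lines of code. This is just a toy sketch - the dictionary stands in for the manual, and a real model's "manual" is billions of learned weights rather than a lookup table - but the operator's job is the same:

```python
# The Chinese Room as a toy program: the operator just follows the manual.
# The dict below is made up for illustration; it plays the role of the rulebook.

MANUAL = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你会说中文吗": "会一点。",  # "do you speak Chinese?" -> "a little."
}

def operator(slip: str) -> str:
    # Match the incoming characters, paint the prescribed reply, pass it back.
    # No understanding of Chinese is needed (or present) at any point.
    return MANUAL.get(slip, "？")

print(operator("你好"))          # looks fluent from the other side of the wall
print(operator("你会说中文吗"))
```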
I found that this kind of opens people's minds to what LLMs actually do - which is predicting the words with the highest probability of following the previous words, given the context of the prompt...
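Mechanically, that loop is tiny. Here's a toy version with made-up bigram probabilities - a real model conditions on the whole context window instead of just the last word, but the generation loop has the same shape:

```python
import random

# Made-up next-word probabilities, for illustration only.
NEXT = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
}

def generate(word: str, steps: int = 3) -> list[str]:
    out = [word]
    for _ in range(steps):
        dist = NEXT.get(out[-1])
        if not dist:
            break
        words, probs = zip(*dist.items())
        # Sample the next word in proportion to how likely it is to follow.
        out.append(random.choices(words, weights=probs)[0])
    return out

print(" ".join(generate("the")))  # e.g. "the cat sat down"
```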
Where are you guys seeing ads for AI? I've literally seen zero, ever, unless we start redefining news articles as ads, which still wouldn't make any sense, since 99% of the ones I come across are critical of AI rather than praising it.