Generally speaking, don't take it at its word; treat it like an assistant, or even just an ideas machine.
Example: I've used an LLM before when there's a term I can't think of the name for. I describe what I can remember about the term and see what it comes up with. Then, if it gives me something concrete to work with (i.e. doesn't just go "I don't know"), I put that into a web search and see what comes up. I cross-reference the information, in other words. Sometimes the AI is a little bit off, but still close enough that I can find the real term.
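To make that workflow concrete, here's a minimal sketch in Python. It assumes the official `openai` client and uses Wikipedia's public opensearch endpoint as the independent check; the model name, prompt wording, and function names are all illustrative, not anything from the comment above.

```python
# Sketch of the describe-then-verify loop: ask an LLM for a term,
# then cross-reference its guess against an independent source.
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def guess_term(description: str) -> str:
    """Ask the LLM for the term matching a fuzzy description."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model would do here
        messages=[{
            "role": "user",
            "content": f"What is the term for: {description}? "
                       "Reply with just the term.",
        }],
    )
    return resp.choices[0].message.content.strip()


def cross_reference(term: str) -> list[str]:
    """Check the guess against Wikipedia's search, an independent source."""
    r = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "opensearch", "search": term,
                "limit": 5, "format": "json"},
        timeout=10,
    )
    r.raise_for_status()
    return r.json()[1]  # opensearch returns [query, titles, ...]


description = "bias where people overestimate their skill at things they're bad at"
candidate = guess_term(description)
print("LLM suggests:", candidate)
print("Independent matches:", cross_reference(candidate))
# If the matches look unrelated, treat the guess as a dead end, not a fact:
# the answer only counts once a second source confirms it.
```

The point of the structure is that the LLM's output is never the last step; it's just a search query generator, and the independent lookup is what actually decides.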
Cross-referencing / sanity checks are important for LLM use because they can get deep into confidently wrong rabbit holes at times, or indulge whatever your train of thought is without the human capability to extricate themselves at some point. So whether it's a web search or running something it said past another real person, you can use that to ground how you're engaging with it. In that way it's not so different from talking to other real people (the main difference being that I'd recommend a much stronger baseline skepticism toward anything an LLM tells you than toward a person). Even with the people we trust the most in life, it's still healthy to get second opinions, get perspective beyond them, work through the reasoning of what they've said, etc. No one source, computer or human, knows it all.