Hello and welcome to our community tradition of a weekly discussion thread. Before we begin, we here at GZD would like to wish Comrade Joseph Vissarionovich Stalin a happy birthday. Now please join us in singing "For He's a Jolly Good Fellow" as we cut the cake!

• Matrix homeserver and space
• Theory discussion group now on Lemmygrad
• Find theory on ProleWiki, marxists.org, Anna's Archive, libgen

[–] amemorablename@lemmygrad.ml 2 points 3 days ago

Generally speaking, don't take it at its word; treat it like an assistant, or even just an ideas machine.

Example: I've used an LLM before when there's a term I can't think of the name for. I describe what I can remember about the term and see what it comes up with. Then, if it gives me something concrete to work with (i.e. it doesn't just say "I don't know"), I put that into a web search and see what comes up. I cross-reference the information, in other words. Sometimes the AI is a little off but still close enough that I'm able to find the real term.
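In code terms, that loop looks roughly like the sketch below. It's just an illustration of the cross-referencing habit, not any real tool: `ask_llm` and `web_search` are hypothetical placeholders for whatever model client and search API you actually use, and the keyword check is a deliberately crude stand-in for "does an outside source back this up?"

```python
# Minimal sketch of the "ask, then cross-reference" loop described above.
# ask_llm() and web_search() are hypothetical placeholders: wire them to
# whatever LLM client and search API you actually have.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def web_search(query: str) -> list[str]:
    raise NotImplementedError("plug in your search API here (returns result snippets)")

def recall_term(description: str, keywords: list[str]) -> str | None:
    """Ask for candidate terms, then only accept one that an
    independent web search appears to corroborate."""
    prompt = (
        "I can't remember a term. Here's what I recall about it:\n"
        f"{description}\n"
        "List up to 3 terms it might be, one per line, no commentary."
    )
    candidates = [c.strip("-* ").strip() for c in ask_llm(prompt).splitlines()]

    for term in filter(None, candidates):
        snippets = web_search(f'"{term}" definition')
        # Crude sanity check: do the search results mention the things
        # I actually remember about the term?
        hits = sum(any(k.lower() in s.lower() for s in snippets) for k in keywords)
        if hits >= max(1, len(keywords) // 2):
            return term   # independently corroborated, worth reading up on
    return None           # nothing verified; treat the LLM's answer as a guess
```

The exact check doesn't matter much; the point is that the model's suggestion only graduates from a guess to a lead once something outside the model agrees with it.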

Cross-referencing / sanity checks are important with LLMs because they can get deep into confidently wrong rabbit holes at times, or indulge whatever your train of thought is without the human capacity to extricate themselves at some point. So whether it's a web search or running something it said past another real person, you can use that to ground how you're engaging with it. In that way it's not so different from talking to other people (the main difference being that I'd recommend a much stronger baseline skepticism toward anything an LLM tells you than toward a person). Even with the people we trust most in life, it's still healthy to get second opinions, seek perspective beyond them, work through the reasoning of what they've said, and so on. No one source, computer or human, knows it all.