this post was submitted on 11 Mar 2026
22 points (89.3% liked)

GenZedong


(image from a netizen on b2 lmfao)

I personally use Kimi K2.5 the most as it's quite well-rounded and they have a good mobile app.

My use case is extremely boring: troubleshooting game mods, searching, summarising, brainstorming, etc. I have experimented with openclaw using K2.5, which is pretty dope but very unreliable, though it did save me a few hours of work by organizing my files.

At some point when I upgrade my computer I’m going to try to switch to local models exclusively.

top 16 comments
[–] LVL@lemmygrad.ml 2 points 13 hours ago

I've been using Kimi for a while, but recently they stopped letting free users use the thinking model, so I've been looking at alternatives. I did get one month of their membership for $2 using that deal in the app.

[–] big_spoon@lemmygrad.ml 1 point 13 hours ago (1 children)

i use only deepseek, but people in the comments say it's outdated... i use it to answer questions like "tell me about the most mentioned god in lovecraft", "transcribe this pdf", "tell me where the hell the labubus come from"... i think it's kinda useful. is there another one that works better than the western money siphons?

[–] LVL@lemmygrad.ml 1 points 3 hours ago

There are a couple of different ones: Kimi K2.5, Qwen (a bunch of models, but the website defaults to their most powerful), and GLM-5.

[–] PoY@lemmygrad.ml 7 points 21 hours ago* (last edited 20 hours ago)

Yes, I use both MiniMax M2.5 and GLM-5. GLM I use with openclaw; it does things like track news updates and whatever random stuff I want to play around with. I also used it to help make a podcast/media-playing app for my iPad, because all the decent ones on the App Store have data-tracking shit in them.

MiniMax I use for any random questions; it has also helped me fix up some open source apps I use, and it helped with the iPad app too.

Oh, also, someone mentioned Qwen. I use that on my phone (you can't download it from the Play Store because 'muh free trade'), and I also use it for any random webchat questions. It helped me find a good hotel for my upcoming trip with a laundry list of preferences.

[–] pcalau12i@lemmygrad.ml 9 points 23 hours ago* (last edited 23 hours ago)

I use Qwen. I have a local instance running on my own AI server that I use for proofreading, correcting typos, and language translation. It's also helpful with Linux and coding questions.

I also use the web version because it's pretty good with parsing documents so you can upload a PDF and have it either find something in it or break it down for you and help you understand it. It's also good with math so I have asked it to help solve certain equations for me or to derive certain equations/formulas I needed.
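For anyone curious what "a local instance" looks like in practice: both llama.cpp's llama-server and Ollama expose an OpenAI-compatible chat endpoint, so a proofreading call is just a short script. A minimal sketch with the Python standard library; the port (Ollama's default), the model tag, and the prompt are assumptions, so adjust them to your own setup:

```python
# Minimal sketch of hitting a local OpenAI-compatible endpoint (e.g. Ollama or
# llama.cpp's llama-server) for proofreading. URL, port, and model name are
# assumptions -- change them to match your server.
import json
import urllib.request

def build_proofread_request(text, model="qwen2.5",
                            url="http://localhost:11434/v1/chat/completions"):
    """Build the HTTP request carrying a chat-completions proofreading call."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Fix typos and grammar. Return only the corrected text."},
            {"role": "user", "content": text},
        ],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def proofread(text):
    """Send the request to the local server and return the model's reply."""
    req = build_proofread_request(text)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same script works against llama-server by swapping the URL for its port, since both speak the same chat-completions schema.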

[–] davel@lemmygrad.ml 5 points 21 hours ago (1 children)

I’m still an LLM luddite, but I hear DeepSeek & Qwen mentioned often.

[–] PoY@lemmygrad.ml 5 points 20 hours ago (1 children)

DeepSeek is kinda old now; they need to do some updating, which they're supposed to do any day now. Until then I'd probably steer clear of it, because it gives a lot of wrong answers currently.

Qwen is fabulous for me though.

[–] davel@lemmygrad.ml 4 points 20 hours ago (2 children)

Are LLM years even faster than dot-com years were, or am I, a dotard, slowing down?

[–] PoY@lemmygrad.ml 5 points 16 hours ago

Yeah, for sure... DeepSeek was released only a year ago and it's already way outdated.

[–] Loki@lemmygrad.ml 6 points 20 hours ago (1 children)

Oh, like 10x faster at least, and the pace is basically doubling every year; there's been more AI progress in the last two months than in the entire year of 2023.

[–] DonLongSchlong@lemmygrad.ml 1 point 1 hour ago

As someone who hasn't followed AI at all, besides reading about it while scrolling by, what does "AI progress" look like? More application methods? Or just "better"?

[–] PoY@lemmygrad.ml 4 points 20 hours ago* (last edited 20 hours ago) (2 children)

I dunno what you're planning to upgrade your computer to, but I have a 5090 and 96 GB of RAM and I refuse to use local models for most things, except TTS and image/video generation. They're just too damn limited and slow.

[–] LVL@lemmygrad.ml 2 points 3 hours ago (1 children)

What image/video models are you using? I've recently gotten into messing around with that and have mostly just been using Z-Image-Turbo, Flux Klein 9B, and Wan 2.2 I2V.

[–] PoY@lemmygrad.ml 1 point 22 minutes ago

Yep, pretty much the same. Those are kind of the hot models.

[–] Loki@lemmygrad.ml 5 points 20 hours ago (1 children)

My plan is an M5 Max MacBook Pro with 128 GB of RAM; reportedly it runs Qwen 3.5 122B at 60 tok/s.

It has essentially 56 tensor cores

[–] PoY@lemmygrad.ml 1 point 16 hours ago

Oh nice, you should update us on how well it works!