[–] tanka@lemmy.ml 2 points 2 hours ago (1 children)

I did just update my post with the specs. Maybe it takes a while to federate?

[–] tanka@lemmy.ml 2 points 2 hours ago

No problems per se. I just realized I hadn't checked for an update in a while.

submitted 5 hours ago* (last edited 3 hours ago) by tanka@lemmy.ml to c/selfhosted@lemmy.world
 

Hey :) For a while now I've been using gpt-oss-20b on my home lab for lightweight coding tasks and some automation. I'm not up to date with the current self-hosted LLMs, and since the model I'm using was released at the beginning of August 2025 (from an LLM development perspective, that feels like an eternity to me), I wanted to tap the collective wisdom of Lemmy and maybe replace my model with something better out there.

Edit:

Specs:

GPU: RTX 3060 (12 GB VRAM)

RAM: 64 GB

gpt-oss-20b does not fit into VRAM completely, but it is partially offloaded and runs reasonably fast (fast enough for me)
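For anyone with a similar card, the partial-offload setup boils down to a back-of-envelope calculation: how many layers fit in VRAM after reserving some headroom for the KV cache and CUDA runtime. A minimal sketch, where the model size, layer count, and overhead are illustrative assumptions rather than measured values for gpt-oss-20b:

```python
def layers_on_gpu(model_gb, n_layers, vram_gb, overhead_gb=1.5):
    """Estimate how many transformer layers fit in VRAM.

    Assumes layers are roughly equal in size and reserves overhead_gb
    for KV cache and runtime. All numbers are illustrative, not
    measured values for any specific model.
    """
    per_layer_gb = model_gb / n_layers
    usable_gb = max(vram_gb - overhead_gb, 0)
    return min(n_layers, int(usable_gb / per_layer_gb))

# Illustrative: a ~13 GB quantized model with 24 layers on a 12 GB card
print(layers_on_gpu(13, 24, 12))  # most layers fit; the rest stay in RAM
```

In llama.cpp-style runners, the resulting count is what you'd pass as the GPU-layers setting (e.g. `-ngl`), then nudge it up or down while watching VRAM usage.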