Technology
Which posts fit here?
Anything that is at least tangentially connected to technology, social media platforms, information technologies, and tech policy.
Post guidelines
[Opinion] prefix
Opinion (op-ed) articles must use the [Opinion] prefix before the title.
Rules
1. English only
The title and associated content have to be in English.
2. Use original link
The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Personal attacks of any kind are expressly forbidden. If you can't argue your position without attacking a person's character, you have already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by the community rules but is against the lemmy.zip instance rules, those rules will be enforced.
Companion communities
!globalnews@lemmy.zip
!interestingshare@lemmy.zip
Icon attribution | Banner attribution
If someone is interested in moderating this community, message @brikox@lemmy.zip.
Did you miss that OpenAI released the OSS models a few days prior to GPT-5?
Larger model: https://huggingface.co/openai/gpt-oss-120b
Smaller model: https://huggingface.co/openai/gpt-oss-20b
They seem to be quite good.
Not saying that OpenAI would be the good guys here, but I believe they are realizing that they are behind on local models.
I haven’t played with it too much yet, but Qwen 3 seems better than GPT-OSS.
In my limited testing it seemed relatively trigger-happy on refusals, and the results were not impressive either. Maybe on par with 3.5?
Although it is fast at least.
Nah, I tried 20B and a bit of 120B. For the size, they suck, mostly because there's a high chance they will randomly refuse anything you ask them unless it's STEM or code.
...And there are better models if all you need is STEM and code.
Look around localllama and the AI communities; they're kind of a laughing stock, even more so than Llama 4.
https://mk.absturztau.be/notes/ab3gv6iygjam02uj