TBH, shouldn't this just be folded into localllama? Or open source AI?
I have very mixed (mostly bad) feelings about ollama. In a nutshell, they're kinda Twitter attention-grabbers who give zero credit or contribution back to the underlying framework (llama.cpp). And that's just the tip of the iceberg: they've made plenty of controversial moves, and they seem headed for commercial enshittification.
They're... slimy.
They like to pretend they're the only way to run local LLMs and blot out any other discussion, which is why I feel kinda bad about a dedicated ollama community.
It's also a highly suboptimal way for most people to run LLMs, especially if you're willing to tweak.
I would always recommend Kobold.cpp, tabbyAPI, ik_llama.cpp, Aphrodite, LM Studio, the llama.cpp server, sglang, the AMD lemonade server, or any number of other backends over it. Literally anything but ollama.
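And the alternatives really aren't harder to use. As a minimal sketch (not a definitive setup): llama.cpp's llama-server speaks an OpenAI-compatible API, so you can hit it with a few lines of Python. I'm assuming the default port 8080 and a server that's already running; the model name is just a placeholder, since a single-model server typically ignores it.

```python
# Minimal sketch: querying a local llama.cpp server (llama-server)
# through its OpenAI-compatible chat endpoint.
# Assumptions: server already running on the default port 8080;
# "local" is a placeholder model name.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "local",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

The same snippet works against most of the backends above, since nearly all of them expose the same OpenAI-style API. That's kind of the point: there's no lock-in to ollama.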
...TL;DR I don't like the idea of focusing on ollama at the expense of other backends. The community should be about running LLMs locally, not about ollama specifically.