this post was submitted on 25 Apr 2026
44 points (84.4% liked)

Selfhosted


I'm looking to build a low-end Ollama LLM server to improve Home Assistant voice control, Immich image recognition, and a few other services. With the current cost of hardware components like memory, I'm looking to build something small but somewhat expandable.

I have an old micro-ATX form factor computer that I'm thinking will be a good candidate to upgrade. I'd love recommendations on motherboard, processor, and video card combos that would likely be compatible and sufficient to run a decent server while keeping costs low: basically, the best bang for the buck. I have a couple of M.2 SSDs I can repurpose. I'd prefer a motherboard with 2.5Gbit Ethernet, but otherwise I'm open.

Also, recommendations on sites that ship to the US where I can buy good-quality memory at reasonable prices. I'd be willing to look at lightly used components, too.

Any advice on any of these topics would be greatly appreciated. The advice I've found has all been out of date: with crypto fading, video cards aren't as expensive anymore, but LLM data centers are buying up and reserving memory before it's even manufactured.
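For sizing purposes, a rough back-of-envelope helps: a model's weights take roughly (parameters × bits-per-weight ÷ 8) bytes, plus some overhead for the KV cache and runtime buffers. The 1.2× overhead factor below is an illustrative assumption, not a measured figure:

```python
def model_memory_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Approximate RAM/VRAM needed to load a quantized model.

    params_billion:  model size in billions of parameters
    bits_per_weight: e.g. 4 for Q4 quantization, 16 for fp16
    overhead:        assumed fudge factor for KV cache and runtime buffers
    """
    return params_billion * bits_per_weight / 8 * overhead

# A 7B model at Q4 fits comfortably in 8 GB of VRAM:
print(f"{model_memory_gb(7, 4):.1f} GB")   # ~4.2 GB
print(f"{model_memory_gb(7, 16):.1f} GB")  # same model at fp16: ~16.8 GB
```

So for Home Assistant/Immich-scale models, a card in the 8-12 GB range goes a long way if you stick to quantized weights.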

[–] chrash0@lemmy.world 13 points 2 days ago (4 children)

honestly it’s hard to beat Macs these days in this space for two reasons:

  • unified memory means that you don’t have to load up on RAM just to load the model and then also shell out for a video card with barely enough VRAM to fit a basic language model
  • their supply chain is solid and has mostly avoided the constraints that other OEMs and parts manufacturers are struggling with

pricing is tough. sure, crypto is on its way out, but GPUs are still the platform of choice for most neural net workloads (outside of SoCs like Apple M-series). i built a PC in late 2024, and it’s easily worth twice what i paid for it.

[–] irotsoma@piefed.blahaj.zone 8 points 2 days ago (3 children)

Yeah, but I don't want to get locked into a proprietary OS or have to put a lot of effort into hacking it to run Linux.

[–] WASTECH@lemmy.world 1 points 1 day ago

I haven’t looked into Asahi Linux in probably 2-3 years now, but I figured the experience would be pretty good by this point. You don’t need to “hack” anything to get it to run; last I read, there were just a few driver issues.

[–] chrash0@lemmy.world 5 points 2 days ago

super fair. i am a Linux guy normally. i’m just being honest. i wish there were a better, more open alternative.

if you want to go with the Linux alternative it’s going to cost. get at least 32GB of RAM and at least a 4090 to run the kind of models you’re asking for. it’s the way she goes
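To put rough numbers behind the 4090 suggestion, here's a sketch using the common rule of thumb of ~1 GB per billion parameters at 8 bits; the card list and the 1.2× runtime-overhead factor are illustrative assumptions:

```python
# Hypothetical sizing check: largest Q4-quantized model that fits a memory budget.
CARDS_GB = {"RTX 3060 12GB": 12, "RTX 4090 24GB": 24, "Mac 32GB unified": 32}

def fits(params_billion: float, budget_gb: float, bits: float = 4.0, overhead: float = 1.2) -> bool:
    # 1B params at 8 bits ~ 1 GB; scale by bit width and add assumed runtime overhead
    need_gb = params_billion * bits / 8 * overhead
    return need_gb <= budget_gb

for name, gb in CARDS_GB.items():
    largest = max(b for b in (7, 8, 13, 14, 30, 34, 70) if fits(b, gb))
    print(f"{name}: up to ~{largest}B params at Q4")
# e.g. a 24 GB card tops out around a 34B model at Q4
```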

[–] ryokimball@infosec.pub 3 points 2 days ago

Apple silicon is more energy efficient, but the latest Intel and AMD CPUs deliver more processing power and can also share a significant amount of RAM with the GPU / AI components.

[–] Scipitie@lemmy.dbzer0.com 2 points 2 days ago (1 children)

Depends on what you want to do... For example, I couldn't get Python Whisper in a container to run on a Mac in any way that could be called "performant", and I don't want to optimize my dev workflow for an OS I despise :D

[–] chrash0@lemmy.world -1 points 2 days ago (1 children)

in a container

well there’s your issue. i get not liking the OS, but actively crippling your project will cripple your project.

containers on macOS do kinda suck

[–] Scipitie@lemmy.dbzer0.com 2 points 1 day ago (1 children)

That's such a Mac answer it's unbelievable.

Describing "A project aimed to be agnostic of it's environment" as a design mistake and not a inherent flaw of the OS is... Just wow.

Remember, this thread is about the pros and cons of macOS as inference hardware. This is a major flaw that comes baked into the hardware. I tested it and found it an unacceptable limitation. It's important for others to know.

To state "containerization is the issue" though... Just wow.

[–] JadedBlueEyes@programming.dev 2 points 1 day ago (1 children)

Unfortunately, containerisation on macOS usually means running virtualized Linux, which of course is going to add overhead and cut off access to Apple APIs and some hardware. So yep. There's plenty that runs natively.

[–] chrash0@lemmy.world 2 points 1 day ago

thanks for clarifying. it was hard for me to dignify such a comment with a response.

you’re also going to run into hardware acceleration issues trying to run Metal acceleration with a Linux kernel. i don’t really see a need to containerize these workloads these days anyway with tools like uv.

it’s a big pain in my ass at times trying to do web dev work with an aarch64-darwin dev env vs the target x86_64-linux. adding in hardware acceleration issues just sounds painful.

i also just personally don’t like containers. feels like a bludgeon of a solution.

[–] curbstickle@anarchist.nexus 6 points 2 days ago

Going to second this; it's all my M2 does right now. Putting together a solution for the office with some M4s.

It's a lot of bang for the buck specifically for LLM use, despite being horribly overpriced otherwise.

[–] irmadlad@lemmy.world 2 points 2 days ago* (last edited 2 days ago)

i built a PC in late 2024, and it’s easily worth twice what i paid for it.


I wrote the vendor and asked him if the decimal was in the right place, or if this was the model that was beta-testing alien technology. Got to be a misprint.