this post was submitted on 02 Feb 2026
42 points (100.0% liked)

PC Master Race


With AI seeming to consume all hardware resources, I'm wondering what parts of those current systems we could see trickle down into componentry for desktop PCs as they become outdated for AI tasks.

I know most of this hardware is pretty specific and integrated, but I do wonder if an eventual workaround to these hardware shortages is recycling and repurposing the very systems causing the shortage. We have seen things like DRAM, flash, and even motherboard chipsets get pulled from server equipment and find their way into suspiciously cheap hardware on eBay and AliExpress, so how much of the current crop of hardware will turn up there in the future?

How much of that hardware could even be useful to us? Will Nvidia repossess old systems and shoot them into the sun to keep them out of the hands of gamers? Perhaps only time will tell.

[–] empireOfLove2@lemmy.dbzer0.com 20 points 1 day ago (9 children)

Memory and CPUs are about it.

GPUs have all shifted to bespoke hardware that is physically impossible to run on consumer platforms. All the Blackwell-type chips are insanely dense, and most GPUs built for datacenter use don't even have video output hardware, so they're somewhat useless to gamers.

Memory (DIMMs) is somewhat standard. Most servers use registered ECC, which doesn't work on consumer platforms, but the memory chips themselves could be removed and resoldered onto normal consumer DIMMs, as they are basically universal.

x86 CPUs are still CPUs, at least. You might need weird motherboards, but those can still be run by us plebs.

[–] jj4211@lemmy.world 2 points 1 day ago (2 children)

The CPUs would pretty much have to come with the server boards. Also, if those CPUs draw as much as 500W each, a lot of homes might not have an outlet capable of powering them. Even without GPUs, you might need something like a dryer outlet to realistically power one.

[–] CameronDev@programming.dev 1 points 22 hours ago (1 children)

500W isn't that high; sockets in Aus can push out 2000W+ without any issues, and you're not going to spend 1500W on the rest of the system.

A quick Google suggests the USA can do 1800W on a 15A circuit, or 2400W on a 20A circuit, so plenty of headroom there as well.
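The arithmetic behind those figures can be sketched in a few lines; this is just a worked illustration assuming 120V US mains and the 80% continuous-load derating discussed further down the thread:

```python
# Circuit capacity at 120 V US mains: watts = volts * amps.
# The 80% factor is the NEC continuous-load derating (an assumption
# about the rule being referenced, not stated in this comment).
VOLTS = 120

def circuit_watts(amps: int, continuous: bool = False) -> float:
    """Peak wattage of a breaker, optionally derated for sustained loads."""
    watts = VOLTS * amps
    return watts * 0.8 if continuous else watts

print(circuit_watts(15))                   # 1800 W peak on a 15 A circuit
print(circuit_watts(20))                   # 2400 W peak on a 20 A circuit
print(circuit_watts(15, continuous=True))  # 1440 W sustained on 15 A
```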

Remember, people plug space heaters into sockets, and those easily out-draw even a high-end CPU.

[–] jj4211@lemmy.world 1 points 21 hours ago

Keep in mind these are dual-socket systems, and that's CPU power without any GPU yet. With both CPUs populated and a high-end consumer GPU added, those components alone are at 1500W, ignoring PSU inefficiencies and other components that draw non-trivial power.

In the USA, you almost never see a 20A circuit; most are 15A, and even then 1800W is considered short-term capacity. For sustained loads you're supposed to derate to 80%, so down to 1440W. Space heaters usually max out at 1400W in the USA when expected to plug into a standard outlet because of this. A die-hard enthusiast might figure out how to spread non-redundant multiple PSUs across circuits, or have a rare 20A circuit run, but that's going to be a very, very small niche.
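Putting the numbers from this exchange together, here is a rough, illustrative power-budget check for such a repurposed dual-socket server (the GPU wattage and PSU efficiency are assumptions for the sake of the sketch, not figures from the thread):

```python
# Hypothetical budget: two 500 W server CPUs plus a high-end consumer GPU,
# checked against a derated 15 A / 120 V household circuit.
CPU_W = 500            # per-socket draw cited in the thread
SOCKETS = 2            # dual-socket system
GPU_W = 500            # assumed high-end consumer GPU draw
PSU_EFFICIENCY = 0.9   # assumed; wall draw is higher than component draw

component_w = CPU_W * SOCKETS + GPU_W     # 1500 W at the components
wall_w = component_w / PSU_EFFICIENCY     # ~1667 W at the outlet
circuit_limit_w = 120 * 15 * 0.8          # 1440 W sustained on a 15 A circuit

print(f"wall draw ~{wall_w:.0f} W vs {circuit_limit_w:.0f} W limit")
print("over budget" if wall_w > circuit_limit_w else "fits")
```

Even before counting drives, fans, and motherboard overhead, the sketch lands well over the sustained limit, which is the commenter's point.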
