[–] panda_abyss@lemmy.ca 43 points 4 days ago* (last edited 4 days ago) (1 children)

This was only a matter of time.

Frankly, a few American companies buying up and locking down the entire global supply of memory and chips means Chinese fabs only have to compete on price and quantity, not quality, to dominate much of the world's chip sales.

A lot of the world will accept cheap Chinese hardware with large amounts of memory over the anemic, overpriced offerings Nvidia is putting to market at 10x the price.

ETA: A Fenghua 3 with 112GB of HBM is rumoured to be about $1-3k for H100-like performance, roughly a tenth of the price.

[–] KingRandomGuy@lemmy.world 6 points 4 days ago* (last edited 4 days ago)

What info have you heard about Fenghua 3? I'd last read that it's not strictly an AI accelerator but can actually do graphics tasks, which is neat. That would make it more of a competitor to a professional workstation card like the RTX PRO 6000.

I'm most curious about their CUDA compatibility claim. I would expect that to cause a pretty significant performance hit, since high-performance CUDA kernels generally need to be specialized to the individual GPU (an H100 kernel will look quite different from a 4090 kernel, for example). But if it can achieve H100 performance in spite of that, that'd be cool.
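
To illustrate what "specialize the kernel to the individual GPU" means in practice, here's a minimal sketch of my own (not from any Fenghua or Nvidia material): device code branches on `__CUDA_ARCH__` at compile time, and the host picks a launch configuration from the reported compute capability at runtime. The tile/unroll and block-size numbers are made-up placeholders, not real H100 or 4090 tuning values.

```cuda
// Hypothetical sketch of per-architecture kernel specialization.
// All tuning numbers below are placeholders, not real H100 / RTX 4090 values.
#include <cstdio>
#include <cuda_runtime.h>

// Device-side, compile-time specialization: each architecture's compilation
// pass sees a different per-thread work amount.
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 900
#define ITEMS_PER_THREAD 8   // placeholder for a Hopper-tuned value (sm_90, H100)
#else
#define ITEMS_PER_THREAD 4   // placeholder for a consumer Ada-tuned value (sm_89, 4090)
#endif

// Grid-stride SAXPY where each thread handles ITEMS_PER_THREAD elements per
// iteration. Real tuned kernels also diverge in shared-memory layout,
// tensor-core paths, async copies, etc.
__global__ void saxpy(const float* x, float* y, float a, int n) {
    int base   = (blockIdx.x * blockDim.x + threadIdx.x) * ITEMS_PER_THREAD;
    int stride = gridDim.x * blockDim.x * ITEMS_PER_THREAD;
    for (int i = base; i < n; i += stride) {
        #pragma unroll
        for (int k = 0; k < ITEMS_PER_THREAD; ++k) {
            int idx = i + k;
            if (idx < n) y[idx] = a * x[idx] + y[idx];
        }
    }
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Host-side, runtime specialization: pick a block size per architecture.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    int block = (prop.major >= 9) ? 256 : 128;  // placeholder choices
    int grid  = (n + block - 1) / block;

    saxpy<<<grid, block>>>(x, y, 2.0f, n);
    cudaDeviceSynchronize();

    printf("sm_%d%d, y[0] = %f (expect 4.0)\n", prop.major, prop.minor, y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Real tuned kernels diverge much further than this, which is why a "CUDA-compatible" chip hitting H100 numbers on kernels written for Nvidia hardware would be surprising.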