this post was submitted on 06 Jan 2026

This specific GPU is... kind of a mixed bag. It's supposed to be built on a 6nm process, and the G100 is, according to Lisuan, the first domestic chip to genuinely rival the NVIDIA RTX 4060 in raw performance, delivering 24 TFLOPS of FP32 compute. It even introduced support for Windows on ARM, a feature that even major Western competitors had not fully prioritized.

It appears to fall short of its marketing promises, though. An alleged Geekbench OpenCL listing revealed the G100 achieving a score of only 15,524, a performance tier that effectively ties it with the GeForce GTX 660 Ti, a card released in 2012. This places the "next-gen" Chinese GPU on par with 13-year-old hardware, making it one of the lowest-scoring entries in the modern database. The leaked specifications further muddied the waters, showing the device operating with only 32 Compute Units, a bafflingly low 300 MHz clock speed, and a virtually unusable 256 MB of video memory. We'll likely see more benchmarks as the GPU makes its way into the hands of customers.

These "anemic" figures might represent an engineering sample failing to report correctly due to immature drivers, a theory supported by the test bed's configuration of a Ryzen 7 8700G on Windows 10. If the numbers are accurate, though, the underlying silicon may be fundamentally incapable of reaching the promised RTX 4060 performance targets, regardless of what specifications end up being reported.
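For scale, here is a back-of-the-envelope sketch of what the leaked figures would imply, assuming a hypothetical 64 FP32 lanes per compute unit and 2 FLOPs per FMA (neither number has been published by Lisuan, so treat this purely as an illustration of why the leaked specs and the 24 TFLOPS claim can't both be right):

```python
# Back-of-the-envelope FP32 throughput from reported GPU specs.
# peak FLOPS = CUs * lanes_per_CU * 2 (FLOPs per FMA) * clock in Hz
def peak_fp32_tflops(compute_units, lanes_per_cu, clock_mhz):
    return compute_units * lanes_per_cu * 2 * clock_mhz * 1e6 / 1e12

# Leaked Geekbench figures: 32 CUs at 300 MHz.
# lanes_per_cu=64 is a guess, not a published Lisuan spec.
leaked = peak_fp32_tflops(32, 64, 300)

# Clock that the same 32 CUs / 64 lanes would need for the advertised 24 TFLOPS:
target_mhz = 24e12 / (32 * 64 * 2) / 1e6

print(f"implied peak at leaked specs: {leaked:.2f} TFLOPS")   # ~1.23 TFLOPS
print(f"clock needed for 24 TFLOPS:   {target_mhz:.0f} MHz")  # ~5859 MHz
```

Under these (assumed) shader counts, the leaked 300 MHz clock lands more than an order of magnitude below the marketing figure, which is consistent with either a misreporting engineering sample or a chip that simply can't hit its targets.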

[–] sirboozebum@lemmy.world 141 points 4 days ago (3 children)

More competition for AMD and NVIDIA, the better.

I wouldn't expect the first domestic Chinese GPU to be great but hopefully they keep iterating and get better and better.

[–] orclev@lemmy.world 34 points 4 days ago (3 children)

Sounds like it's about equivalent to Intel's latest GPU. Both are running about a little over a generation behind AMD and Nvidia. Meanwhile Nvidia is busy trying to kill their consumer GPU division to free up more fab space for data center GPUs chasing that AI bubble. AMD meanwhile has indicated they're not bothering to even try to compete with Nvidia on the high end but rather are trying to land solidly in the middle of Nvidia's lineup. More competition is good but it seems like the two big players currently are busy trying to not compete as best they can, with everyone else fighting for their scraps. The next year or two in the PC market are shaping up to be a real shit show.

[–] glimse@lemmy.world 37 points 4 days ago (3 children)

Sounds like it's about equivalent to Intel's latest GPU. Both are running about a little over a generation behind AMD and Nvidia.

Sounds like it's more than "a little over one generation behind" if it benchmarks near an Nvidia card released 14 years ago??

[–] AmidFuror@fedia.io 17 points 4 days ago

It's roughly a human generation behind.

[–] village604@adultswim.fan 6 points 4 days ago

That's likely a driver issue, per the article.

[–] orclev@lemmy.world -3 points 4 days ago (2 children)

I was basing that on the quote saying it rivals a 4060.

[–] glimse@lemmy.world 17 points 4 days ago (1 children)

According to the article, the actual performance is on par with a GTX 660 Ti

[–] orclev@lemmy.world 3 points 4 days ago

Eh, maybe. The actual performance seems to be unknown. They're assuming the geekbench score is legitimate, but there's no way to really know exactly how well it will do when it actually ships. It's probably safe to assume somewhere between the two, but either way it's not competing with current gen AMD or Nvidia cards, and might not even be competing with current Intel GPUs.

[–] Anivia@feddit.org 3 points 4 days ago (1 children)

Maybe you should read more than 1 paragraph before commenting. And in general.

[–] orclev@lemmy.world -1 points 4 days ago

Maybe you should stop assuming things before commenting. And in general. You might also want to reread the article you seem to have skipped some important details.

[–] turkalino@sh.itjust.works 11 points 4 days ago (1 children)

Nvidia is busy trying to kill their consumer GPU division to free up more fab space for data center GPUs chasing that AI bubble

Which seems wildly shortsighted, like surely the AI space is going to find some kind of more specialized hardware soon, sort of like how crypto moved to ASICs. But I guess bubbles are shortsighted…

[–] CheeseNoodle@lemmy.world 15 points 4 days ago* (last edited 4 days ago) (1 children)

The crazy part is that outside LLMs, the other (actually useful) AI doesn't need that much processing power. More than you or I use, sure, but nothing that would have justified gigantic data centers. The current hardware situation is like the automobile being invented and a group of companies immediately investing in huge Mortal Engines-style mega-vehicles.

[–] DacoTaco@lemmy.world 3 points 4 days ago* (last edited 4 days ago) (1 children)

~~Debatable. The basics of an LLM might not need much, but the actual models do need it to be anywhere near decent or useful. I'm talking minutes for a simple reply.
Source: ran a few <=5b models on my system with ollama yesterday and gave them access to an MCP server to do stuff with~~

Derp, misread. sorry!

[–] CheeseNoodle@lemmy.world 4 points 4 days ago (1 children)

Yes, my whole post was that non-LLMs take far less processing power.

[–] DacoTaco@lemmy.world 4 points 4 days ago (1 children)

Oh derp, misread, sorry! Now I'm curious though: what AI alternatives are there that are decent at processing/using a neural network?

[–] CheeseNoodle@lemmy.world 2 points 4 days ago* (last edited 4 days ago) (1 children)

So the two biggest examples I am currently aware of are Google's protein-folding AI and a startup using one to optimize rocket engine geometry, but AI models in general can be highly efficient when focused on niche tasks. As far as I understand it, they're still very similar in underlying function to LLMs, but the approach is far less scattershot, which makes them exponentially more efficient.

A good way to think of it is that even the earliest versions of ChatGPT or the simplest local models are all equally good at actually talking, but language has a ton of secondary requirements, like understanding context, remembering things, and the fact that not every grammatically valid banana is always a useful one. So an LLM has to actually be a TON of things at once, while an AI designed for a specific technical task only has to be good at that one thing.

Extension: The problem is our models are not good at talking to each other, because they don't 'think'; they just optimize an output from an input and a set of rules, so they don't have any common rules or internal framework. So we can't take an efficient rocket-engine-making AI, plug it into an efficient basic chatbot, and have that chatbot talk knowledgeably about rockets. Instead we have to try to make the chatbot memorise a ton about rockets (and everything else), which it was never initially designed to do, and that leads to immense bloat.

[–] DacoTaco@lemmy.world 3 points 4 days ago

This is why I played around with MCP over the holidays. The fact that it's a standard to let an AI talk to an API is kinda cool. And nothing is stopping you from making the API do some AI call in itself.
Personally, I find the tech behind AIs, and even LLMs, super interesting, but companies are just fucking it up and pushing it way too fucking hard, in ways it's not meant to be used -_-
Thanks for the info, and I'll have to look into those non-LLM AIs :)

[–] Appoxo@lemmy.dbzer0.com 5 points 3 days ago (1 children)
[–] goferking0@lemmy.sdf.org 2 points 3 days ago (1 children)

Didn't they drop their Arc cards?

[–] Appoxo@lemmy.dbzer0.com 3 points 3 days ago (1 children)
[–] goferking0@lemmy.sdf.org 3 points 3 days ago

I thought they were dropping it completely when they changed to just having it as part of the CPU instead of a discrete card

[–] SaveTheTuaHawk@lemmy.ca 10 points 4 days ago (1 children)

Name one industry the Chinese haven't beaten sooner or later. When they apply themselves to a problem, they typically lead the world.

[–] Hadriscus@jlai.lu 15 points 3 days ago

Good, more competition, better prices, fuck nvidia. My thoughts in a bag

[–] kadu@scribe.disroot.org 47 points 4 days ago (1 children)

Given how much Intel struggled even though they had been working with GPUs for decades, I'm actually impressed with how fast and competent China's attempts have been so far. Though I have to say, from the article:

The leaked specifications further muddied the waters, showing the device operating with only 32 Compute Units, a bafflingly low 300 MHz clock speed, and a virtually unusable 256 MB of video memory. We'll likely see more benchmarks as the GPU makes its way to the hands of customers.

If none of the specs match what they're supposed to be, and are weirdly out of date, are they sure this is the same GPU? It could very well be an early prototype being tested for stability. There are some engineering sample CPUs that run at 1/100th the intended speed of the final product, for instance.

[–] teft@piefed.social 14 points 4 days ago

Just goes to show the power of corporate espionage.

[–] systemglitch@lemmy.world 33 points 4 days ago

More competition fuck yeah. We should all be excited.

[–] darkevilmac@lemmy.zip 21 points 4 days ago (2 children)

I honestly feel like our best route to competition at this point is the big players being forced to license technology to each other and to smaller companies.

The reason CPUs don't suffer from these issues nearly as badly as graphics is that Intel and AMD are effectively stuck having to share technology with each other.

[–] Appoxo@lemmy.dbzer0.com 7 points 3 days ago

And China is unable to sell in the West due to patents on the x86 architecture.

It would be interesting to see how their CPUs fare against the Intel/AMD offerings.

[–] Earthman_Jim@lemmy.zip 3 points 4 days ago* (last edited 4 days ago)

How funny would it be if those dummies accidentally trained an LLM that can explain to everyone else how to do it. lol

[–] ISolox@lemmy.world 22 points 4 days ago (1 children)

I'm down for the competition. 660 Ti performance is ROUGH though.

[–] gezginorman@lemmy.ml 2 points 3 days ago

it's a beautiful piece of hardware. i still use it after ten odd years

[–] biotin7@sopuli.xyz 9 points 3 days ago

Ohhhh they'll get better. As long as they support open-source drivers.

[–] psx_crab@lemmy.zip 25 points 4 days ago (2 children)

While it was a historic milestone as the first domestic gaming card with PCIe 5.0, it struggled with immature drivers and inconsistent performance, and it failed to run modern titles smoothly.

An alleged Geekbench OpenCL listing revealed the G100 achieving a score of only 15,524, a performance tier that effectively ties it with the GeForce GTX 660 Ti, a card released in 2012.

This is the issue I have with new Chinese brands (and a lot of existing ones): they always have great specs on paper but fall short in real-world use. From phones to cars to bike parts to computer hardware, they love to hype up the specs for sales, but fumble the long-term user experience.

On the other hand, as long as you're expecting Mushu when they sell you a dragon, it's a good alternative to the expensive stuff; just know what you're getting into.

[–] Anivia@feddit.org 13 points 4 days ago (1 children)

You mean my $0.99 flashlight from Aliexpress doesn't really have 100000 lumens? 😱

[–] psx_crab@lemmy.zip 2 points 3 days ago

I'm just mad my vacuum cleaner doesn't have 1200 kPa suction power 😔

[–] SharkAttak@kbin.melroy.org 7 points 4 days ago

A Chinese product boasting top quality and instead delivering low/mid? What a surprise.

[–] eleijeep@piefed.social 23 points 4 days ago

This article is pretty light on details, and the only number they have comes from a "leaked GeekBench score" which could be literally anything. We'll have to wait for a real tech outlet to pick up a sample and benchmark it properly.

[–] ICastFist@programming.dev 4 points 3 days ago

device operating with only 32 Compute Units, a bafflingly low 300 MHz clock speed, and a virtually unusable 256 MB of video memory

Oh, so they're going to sell like all those 2TB pendrives and 32GB RAM octa-core tablets

[–] pyrinix@kbin.melroy.org 14 points 4 days ago

It appears to fall short of its marketing promises, though

Figures.

[–] BlackLaZoR@fedia.io 15 points 4 days ago

Can't wait for Gamers Nexus to get their hands on this one

[–] demonsword@lemmy.world 7 points 4 days ago

An alleged Geekbench OpenCL listing

this does not look very legitimate to me

[–] MonkderVierte@lemmy.zip 2 points 4 days ago (1 children)

Hardware needs support for the OS? Isn't that putting the bag in the cat?

[–] theneverfox@pawb.social 8 points 4 days ago (1 children)

Drivers? They literally drive the hardware your computer is using.

[–] MonkderVierte@lemmy.zip 1 points 4 days ago (1 children)

That's software supporting the hardware. Cat in the bag.

[–] theneverfox@pawb.social 3 points 4 days ago

Okay, let's say you have a fuel-injected car, but instead of using the O2 sensor to decide the fuel-air mixture, it just squirts the same amount of gas every time.

The hardware might be able to achieve 400 hp, but the software means it only ever achieves 50 hp

It's like that. The software drives the hardware. It doesn't matter how good the hardware is, the software is the brain of the operation - if the software doesn't know how to utilize the hardware properly, you're going to have piss poor performance
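The analogy above can be sketched as a toy model (every number here is invented purely for illustration): the same "hardware" ceiling, reached or wasted depending on the "driver" software.

```python
# Toy illustration of the driver analogy: identical hardware, two drivers.
HARDWARE_PEAK_HP = 400  # what the engine could deliver with ideal software

def naive_driver(o2_reading):
    # Ignores the O2 sensor: same fuel squirt every time, fixed output.
    return 50

def tuned_driver(o2_reading):
    # Uses the sensor to adjust the mixture, approaching the ceiling.
    return min(HARDWARE_PEAK_HP, int(HARDWARE_PEAK_HP * o2_reading))

print(naive_driver(0.95))  # 50: the hardware's potential is wasted
print(tuned_driver(0.95))  # 380: same hardware, far better utilization
```

Same silicon either way; only the control software differs, which is why an immature driver stack can make a capable GPU benchmark like decade-old hardware.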

[–] Alaknár@piefed.social 1 points 4 days ago (1 children)

Hey, OP, why did you write that this GPU is "the first (...) to rival the Nvidia RTX 4060 in raw performance"?

It appears to fall short of its marketing promises, though. An alleged Geekbench OpenCL listing revealed the G100 achieving a score of only 15,524, a performance tier that effectively ties it with the GeForce GTX 660 Ti, a card released in 2012

It's nowhere near that, as tests show.

[–] kadu@scribe.disroot.org 12 points 4 days ago (1 children)

OP didn't, it's an extract from the article.

[–] atrielienz@lemmy.world 4 points 4 days ago (1 children)

It's not clear that the excerpt is a quote. No quotation marks. No vertical bar denoting quotation. The ellipsis at the very start of the first sentence.

[–] Jrockwar@feddit.uk 6 points 4 days ago

I think it's ok, the comment literally says "according to Lisuan". Which I see as factually correct - that's the marketing claim, or the performance according to them, just like Teslas have been self-driving according to Tesla since 2012.
