Instead of doing this bullshit, can we just have regular DLSS be actually good? I can't stand turning it on for my handheld because it's a blurry, smeary mess as is.
So they have this nice 3D card, which they had a hand in inventing and "perfecting" to render the entire 3D scene in beautiful, stunning detail, and then another card with AI instructions that totally ignores all of that just happened, takes a screenshot and puts a filter on it in real time basically. What a massive waste of power and computation.
Further confirming this is not meant to ever be used by actual gamers, and instead exists only to advertise real time genAI modification to existing video media.
Best comment about this was from a video posted yesterday:
Nvidia keeps saying that this tech is still a work in progress, yet they made the decision to release a demo in its current state...
GenAI is the ultimate demoware. Bro, it’ll get better. Just look how good it is now.
Just one more data center, bro! Promise!
Can't tell if serious.
As microslop was constantly saying last year, LLMs and their ilk are a product in search of an application.
Every company is desperate to find anything these garbage machines can do well enough to validate the trillion or so dollars pumped into them.
Well, if it isn't little Lisa Slopson! The tech bros' answer to a QUESTION NO ONE ASKED!?
That's clearly the insane part. Like, okay, it can be a bit helpful in this or that scenario, but they spent as if every person on earth would want to pay 250 euros a month for it...
Nslopia
Demos are very often an example of in progress works or technology. That literally happens all the time.
Doesn't really matter IMO. If you have known bugs and flaws, you don't showcase those, or if they are present in the showcase you at least address them and show what is to be expected upon release. NVIDIA just flat out didn't care. As soon as motion increases, the artifacting is crazy. How do you even decide that this is remotely good enough for a demo?
Okay, but that is not what the person said or what the poster above quoted as being the best part. I'm not commenting on the overall performance; I'm just saying that demos very often are exactly what that sentence implies they shouldn't be.
Nvidia hears people like motion blur and AI slop so they put some AI slop in their motion blur.
Ugh. "Everyone is doing BLOOM, let's also do BLOOM but at +150% more!"
I remember that, motion blur came after and now I guess ai 😓
:3
PS: The hallucinations are artistic freedom 😂
"Hallucinations" are an inherent part of the programming.
It is literally impossible to prevent them. The systems work on building the fuzzy average response to a query via complex statistics. There is no thinking or creativity.
And yet, they chose to demo a broken technology with obvious bugs and flaws. The demos from tech companies are supposed to make people excited, not recoil in disgust.
This isn't some tiny company, either. It's fucking nVidia, who supposedly has the money to create a good demo.
Wasn't DLSS working fine before, wtf did they do to it?
The waste is the point.
It needs to be more expensive, because that can be leveraged for higher valuations.
Haven't you heard? Everything must contain generative AI now.
DLSS stands for "Deep Learning Super Sampling." It was always gen-AI. Those extra details weren't being revealed; they were being generated.
While true, the way DLSS 2/3/4 does it is to take a bunch of low res renders of the game over time while wiggling the camera very slightly, and stitch them all together to generate a new, higher res image that very closely matches what the original would have looked like. The GenAI part is essentially just a very advanced temporal blending function that's really good at detecting and smoothing out edges.
DLSS 5 then runs an AI Instagram filter on top of the frame for "enhanced visuals", because obviously we want our games to look like cheap AI slop.
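For anyone curious, the "wiggle the camera and stitch" part above is easy to sketch without any neural net at all. This is just a toy illustration of temporal super-sampling on a static scene; the function, resolutions, and 4-step jitter pattern are all made up for the demo (real TAA/DLSS uses a longer low-discrepancy jitter sequence plus motion vectors and a learned blending step):

```python
# Toy sketch of temporal super-sampling: render low-res with a sub-pixel
# camera jitter each frame, then stitch the samples into a hi-res buffer.
# Everything here (scene, sizes, jitter pattern) is invented for illustration.
import numpy as np

def scene(x, y):
    # Stand-in for the renderer: a continuous 2-D pattern.
    return 0.5 + 0.5 * np.sin(9.0 * x) * np.cos(7.0 * y)

LO, HI = 32, 64                      # render at 32x32, reconstruct at 64x64

acc = np.zeros((HI, HI))
# 4-frame jitter sequence, in low-res pixel units. With a static camera,
# these four offsets together cover every hi-res pixel exactly once.
for jx, jy in [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]:
    xs = (np.arange(LO) + jx) / LO   # jittered sample positions in [0, 1)
    ys = (np.arange(LO) + jy) / LO
    frame = scene(xs[None, :], ys[:, None])   # one low-res "render"
    hx = (xs * HI).astype(int)       # which hi-res pixel each sample hits
    hy = (ys * HI).astype(int)
    acc[np.ix_(hy, hx)] = frame      # stitch samples into the hi-res buffer

# Ground truth: render the scene directly at 64x64.
cx = (np.arange(HI) + 0.5) / HI
truth = scene(cx[None, :], cx[:, None])

print("max abs error vs. direct hi-res render:", np.abs(acc - truth).max())
```

On a perfectly static scene the stitched result matches the direct high-res render; the hard part (and where the neural network earns its keep in real DLSS) is doing this while everything on screen is moving, which is exactly where the artifacting people are complaining about shows up.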
Wtf did they do to it.
✨AI ✨
But it was working fine and probably cheaper, this makes it worse. Where the fuck is QA?
Where the fuck is QA?
✨ They replaced them with AI ✨
"Those responsible for sacking the people who have just been sacked have been sacked."
img2img slop filter for every frame in real time. Great job nvidia what a dumb waste of resources.