There were supposed to be cheap GPUs after the crypto bubble burst
Also, the AI GPUs probably won't be great for gaming. And "cheap" could mean anything when they go for $20k apiece.
There were cheap GPUs when the first Ethereum bubble burst. That one was easier for the average person to get into with gamer GPUs, and they flooded the market on eBay as soon as mining was no longer profitable.
Bitcoin won't do that, because it hasn't been based on GPUs for a long, long time. Ethereum doesn't even work like that anymore.
The AI bubble popping will only flood the market with GPUs that are useful for running AI models; the GPUs in AI datacenters often don't even have a display output connector. I think Corey is overstating his case on that one. Most likely, those GPUs are headed to the landfill.
The AI bubble doesn't mean AI/LLMs aren't useful. It means datacenter speculation can't make money.
those GPUs are headed to the landfill.
They'll just have a similar discount to the Ethereum switch.
You can still use such a GPU as an accelerator, either for running AI or for gaming. In either case, given that your workload is Vulkan-based on Linux, you can use vkdevicechooser.
Of course, you will need a second GPU (even the CPU's integrated one) to connect your display(s).
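For context, vkdevicechooser is a Vulkan layer that overrides which physical device an application sees. A minimal sketch of how it is typically used; the environment variable names are from the layer's README, and the device index and application name here are assumptions, not real values from this thread:

```shell
# List Vulkan-capable devices and their indices first (needs vulkan-tools):
vulkaninfo --summary

# Force a Vulkan workload onto the headless compute card.
# VULKAN_DEVICE_INDEX=1 is an assumed index; use whatever
# vulkaninfo reported for your headless GPU. ./my-vulkan-app
# is a hypothetical placeholder for your game or AI workload.
ENABLE_DEVICE_CHOOSER_LAYER=1 VULKAN_DEVICE_INDEX=1 ./my-vulkan-app
```

The display stays attached to the iGPU; only the Vulkan rendering/compute work is redirected to the headless card.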
That has never worked well. It might give high average framerates on paper, but it introduces jitter that produces a worse overall experience. In fact, Gamers Nexus just came out with a video on a better way to measure this, and it touches on showing the problem with multi-GPU setups:
I think that you misunderstood my comment.
The video shows how SLI makes the frame pacing more inconsistent, which is a known issue when multiple GPUs work together to solve the same problem.
What I am talking about is more like Nvidia Optimus. This is a common technology on laptops, where the display is connected to the low power iGPU, while games can use the dedicated Nvidia chipset.
I don't know about potential frame pacing issues with these technologies, and it doesn't seem to have been addressed in the video either. However, I know that newer laptops have a switching chip that connects the display directly to the dedicated GPU, which, I think, aims at lowering the latency.
I can strongly recommend the article from the OP blog post about market dynamics and the use of what is essentially accounting fraud by major companies involved in AI:
Lifespan of AI Chips: The $300 Billion Question
I am looking forward to reading the research paper they are working on.
While the author takes a relatively neutral tone, the analysis is brutal in its portrayal of major market players (Nvidia, Microsoft, Amazon, Google); they come off more as oligopolists who are happy to engage in what is, de facto, an attempt to undermine true market competition.
OP's post is largely right, but it doesn't require that link to be true. Also, whether these $3M+ systems are under warranty is a relevant question. It's hard to know the exact lifespan from one person saying their GPU failed quickly. The paper still stands well.
Because of power constraints, I'd expect them to replace GPUs every 2 years with new generations, and so there will be big write-offs.
A 6 year depreciation schedule seems unrealistically long for a GPU.
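The write-off argument above can be made concrete with straight-line depreciation. A minimal sketch with an assumed round-number price of $30,000 per accelerator (an illustration, not a quoted figure):

```python
def book_value(price: float, schedule_years: int, age_years: int) -> float:
    """Remaining book value under straight-line depreciation,
    floored at zero once the schedule has run out."""
    return max(price - price * age_years / schedule_years, 0.0)

# If a card is actually retired after 2 years but was put on a
# 6-year schedule, the remaining book value lands as a loss:
loss = book_value(30_000.0, 6, 2)
print(loss)  # -> 20000.0 per card written off at replacement time
```

Two-thirds of the purchase price hitting the books as a loss on every 2-year refresh is exactly the kind of big write-off the comment predicts.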
Even in gaming terms (I know this is a completely different use case), a 2080S from 2019, a high-end SKU, would struggle with many modern games at 1440p and higher. A professional streamer would be unlikely to use a 2080S.
Then there is the question of incentives. An objective look at American technology and VC suggests they are far closer to criminal organizations than their treatment by media and US institutions would imply. They very much can be expected to engage in what is essentially accounting fraud.
a 2080S from 2019, a high-end SKU, would struggle with many modern games at 1440p and higher. A professional streamer would be unlikely to use a 2080S.
On one hand, a 2080S would still be good at doing what it was doing 6 years ago. If there are new needs, and unlimited power availability, then keeping the 6-year-old GPU on whatever AI workload it can handle alongside a new card makes sense... if that card still works. Selling your 2080S or whatever old card does mean a fairly steep loss compared to the original price, but a 6-year depreciation schedule is OK... IF the cards are still working 6 years later.
$3M NVL72 systems are a bit different: one out of 72 cards burning out can screw up the whole system, and given datacenter power-delivery and expertise requirements, they would have low resale value, though I assume the cards can be ripped out and sold individually.
They very much can be expected to engage in what is essentially accounting fraud.
Oracle this week "proudly boasted" that they get 30% margins on their datacenters, and the stock went up. That is not enough, as it is just a 30% margin over electricity costs. Maintenance/supervision and GPU costs/rentals aren't counted, and it is unlikely that they are profitable, though it's not so much accounting fraud as it is accounting PR.
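A back-of-envelope illustration of why a "30% margin over electricity" can still be a loss. Every number here is an assumed placeholder for one GPU-month, not an actual Oracle figure:

```python
# Hypothetical monthly costs per GPU (assumed round numbers):
electricity = 100.0                   # the only cost in the quoted margin
gpu_depreciation = 60.0               # share of GPU purchase/rental cost
staffing = 15.0                       # maintenance and supervision

# "30% margin over electricity" prices revenue off electricity alone:
revenue = electricity + 30.0          # 30% of the assumed $100 power bill

true_profit = revenue - (electricity + gpu_depreciation + staffing)
print(true_profit)  # -> -45.0: a loss once all costs are counted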
One thing I worry about is that there's going to be a fire sale on the polluting crap that's powering these GPU farms. It'll likely end up in poorer countries because it'll be cheaper than new renewables.
The Internet has already been mostly destroyed, drowned in AI slop. Is all that shit gonna be taken down? Are search engines going to go back to working again?
They will ruin it, and then move onto the next thing to subsume and destroy forever.
Dude, this. Looking up how to pull off PCI passthrough on an SBC I have, as well as answers to a few lingering filesystem questions, I get nothing but slop. The useful shit isn't even visible anymore. And if I ask ChatGPT to sift through it all, it can't do it either, instead regurgitating all the slop it can't make sense of either.
We are looking at the destruction of the greatest library in mankind’s history. Because NVIDIA’s line must go up.
I was going to reply "at least the burning of Alexandria was an accident," and then I thought to look that up. Seems egotists destroying public collections of knowledge is just baked into humanity. We'll never be free of its scourge.
Local archives of Wikipedia and Project Gutenberg have never been a better idea.
Nope, it's still useful for propaganda, especially on dimwits like conservatives who can't tell the difference.
Please let the pop take the tech bros with it.
They'll move on to the next big thing, just like they did after bitcoin.
And after NFTs, blockchain, the metaverse and so on
The metaverse was/is the stupidest thing ever. They couldn't even sell it to the average person. I'm into tech, but even I can barely describe what it's supposed to be. Is it just VR? VR office work? 🤷
I wish. I suspect the mega-billionaires will be absolutely fine.
Someone think of the "prompt engineers"!
Hmm and what about the everyday user who needs to ask AI how long to cook potatoes? What will they do after the pop?
Completely uncharted territory. No one tried to cook a potato until Sam Altman graced us plebs with ChatGPT.
Local models are actually pretty great! They are not great at everything... but for what most people are using LLMs for they do a fine job.
That's from llama3.1:8b, and the answer is decent; it took about 20 seconds to generate and used no more power than if I'd played a video game for the same amount of time.
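For anyone curious how a local model like this is typically queried: one common way to run llama3.1:8b is Ollama, which serves an HTTP API on localhost. A minimal sketch assuming a default Ollama install; the endpoint and field names are from Ollama's API documentation, and the prompt is just an example:

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/generate endpoint.
    "stream": False asks for one complete JSON response instead
    of a stream of partial tokens."""
    return {"model": model, "prompt": prompt, "stream": False}

body = build_generate_request("llama3.1:8b",
                              "How long should I boil potato pieces?")

# Uncomment when an Ollama server is running locally:
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps(body).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```

Everything runs on your own hardware; nothing leaves the machine.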
Boiling time isn't related to the original potato size; it's related to the size of the pieces you cut. So the first half is irrelevant and the second half is overly verbose.
What does it take to pop this bubble? So many people are calling it a bubble but what actually makes it pop?
As stupid as it is: Faith is what keeps bubbles afloat. Faith can go a long way towards forcing reality to what you want it to be, and if you have the wealth, you can play nearly endless money-games to make it seem like you're ahead when you're actually losing your shorts.
The reason there is so much faith is that this is a make-or-break moment for late-stage capitalism. Businesses (including non-AI businesses) viscerally need it to work so they can get rid of human workers. If they can't make humans slaves, they will make digital slaves. This may be a last gasp for the old order if it fails: so many entrenched companies, from automobile makers like GM and Ford to airframe makers like Boeing to general electronics firms like General Electric, have finances that are literally upside down because they have been using stock buybacks to fake growth for the better part of two decades, and they absolutely need this to happen to stay afloat.
It is difficult to get a man to understand something, when his salary depends on his not understanding it.
This quote by Upton Sinclair is usually used to describe lower-level employees who don't understand how unionization could be good for them, but it applies here as well. The faith persists because their "salaries" depend on this working so the bottom doesn't fall out from under them. They have to believe it will work, and as such will keep dumping money into it as long as humanly possible.
AI is like Theranos but bigger and affecting numerous industries who are all betting the future of their companies on this all working out. For their livelihoods and their plan to continue ignoring all the little people in the world, there is no losing state they can or will accept until they are on the edge and about to leap from the top of their buildings to avoid the consequences.
Once the faith breaks, it will be like a dam breaking and flooding out too fast to escape.
Venture capital drying up.
Here's the thing... No LLM provider's business is making a profit. None of them. Not OpenAI. Not Anthropic. Not even Google (they're profitable in other areas, obviously). OpenAI optimistically believes it might start being profitable in 2029.
What's keeping them afloat? Venture capital. And what happens when those investors decide to stop throwing good money after bad?
BOOM.
OpenAI optimistically believes it might start being profitable in 2029.
Which is absolutely buck wild when you consider they've already signed contracts to spend another trillion dollars over the next five years.
How the fuck is a company that has $5 billion in revenue today going to grow that revenue by at minimum $995 billion by 2029? There's just no fucking way, man...
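To put those numbers in perspective, a quick back-of-envelope using only the two figures cited in the thread, and assuming (my assumption, purely for illustration) that the trillion is spread evenly over the five years:

```python
current_revenue = 5.0    # $B/yr, figure cited in the thread
commitments = 1000.0     # $B over five years, cited in the thread
years = 5

avg_annual_spend = commitments / years               # 200.0 $B/yr
required_growth = avg_annual_spend / current_revenue # 40x today's revenue
annual_multiplier = required_growth ** (1 / years)   # ~2.09x, every year

# Just to cover the average annual spend, revenue would have to more
# than double every single year for five straight years.
print(avg_annual_spend, required_growth, round(annual_multiplier, 2))
```

And that only breaks even on the spending; it leaves nothing for the rest of the business.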
On top of that, there's so much AI slop all over the internet now that the training for their models is going to get worse, not better.
To an extent, I think that's already happening. ChatGPT5 released with a huge amount of hype, but when users started playing with it, it was incredibly underwhelming, and flat-out worse than 4 in many cases... all while burning through even more tokens than ever. It definitely seems like the capabilities of this technology have hit a plateau that won't be solved with more training.
I'm a software developer and my company is piloting the use of LLMs via Copilot right now. All of them suck to varying degrees, but everyone's consensus is that GPT5 is the worst of them. (To be fair, no one has tested Grok, but that's because no one in the company wants to.)
I really have no idea how anyone is supposed to take this world seriously anymore.