tal

joined 2 years ago
[–] tal@lemmy.today 1 points 1 month ago* (last edited 1 month ago)

So, I agree that it's not the best presentation, but they're trying to put the summary of findings up top. The actual "title" of the chart is the subtitle beneath.

[–] tal@lemmy.today 151 points 1 month ago* (last edited 1 month ago) (17 children)

Sixteen percent of GDP...The United States has tethered 16% of its entire economic output to the fortunes of a single company

That's not really how that works. Those two numbers aren't comparable to each other. Nvidia's market capitalization, what investors are willing to pay for ownership of the company, is equal to sixteen percent of US GDP, the total annual economic activity in the US.

They're both dollar values, but it's like comparing the value of my car to my annual income.

You could say that the value of a company is somewhat-linked to the expected value of its future annual profit, which is loosely linked to its future annual revenue, which is at least more connected to GDP, but that's not going to be anything like a 1:1 ratio, either.
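To put very rough numbers on it (ballpark, illustrative figures only; assume something like a ~$4.5 trillion market cap, ~$130 billion in annual revenue, and ~$28 trillion in annual GDP, none of which are exact):

```python
# Ballpark, illustrative figures only -- these move constantly and are not exact.
gdp = 28e12          # approximate annual US GDP (a flow: dollars per year)
market_cap = 4.5e12  # approximate Nvidia market capitalization (a stock: a price, not a flow)
revenue = 130e9      # approximate Nvidia annual revenue (a flow, so comparable to GDP)

print(f"market cap / GDP: {market_cap / gdp:.0%}")  # ~16%, the headline comparison
print(f"revenue / GDP:    {revenue / gdp:.2%}")     # well under 1%, the flow-to-flow comparison
```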

[–] tal@lemmy.today 46 points 1 month ago* (last edited 1 month ago) (15 children)

https://www.pewresearch.org/short-reads/2025/09/16/how-religious-is-your-state/

Religious profile of Mississippi

1st most religious state overall

61% (1st) say religion is very important in their lives

54% (1st) say they attend religious services at least monthly

62% (1st) say they pray daily

74% (1st) say they believe in God or a universal spirit with absolute certainty

50% in Mississippi are highly religious, based on an overall scale of religiousness

There was a much easier and better choice than Russia if they wanted to up their "more Christian environment" game.

[–] tal@lemmy.today 70 points 1 month ago* (last edited 1 month ago) (12 children)

The couple yearned to live in a place that shared their “Christian values” and where they “weren’t going to be discriminated against” as white, politically-conservative Christians.

https://www.pewresearch.org/religion/2018/06/13/how-religious-commitment-varies-by-country-among-people-of-all-ages/

https://lemmy.today/pictrs/image/e6f791bf-9311-421e-b2c4-6856c7922b77.webp

Religion very important to them

US: 53%

Russia: 16%

https://lemmy.today/pictrs/image/5914b607-bdf5-4238-883e-46fd51f4c52a.webp

Weekly worship attendance

US: 36%

Russia: 7%

Aside from Poland, where 42% of respondents attend weekly, every other European country in this analysis has rates of attendance at or below 25%.

Clearly Poland needs to start advertising, because Russia probably isn't where you want to go if you're on the hunt for a particularly religious environment, especially if your starting point is the US. Now, Poland's gonna have a more-specifically-Catholic environment, which I'd guess isn't what they are, but I bet that they aren't Russian Orthodox either, so...

[–] tal@lemmy.today 79 points 1 month ago* (last edited 1 month ago)

saying he’s “never felt so free and fulfilled.”

From another article, his role at the IRS is in writing tax law regulations for pension funds. I mean...it's gotta be done, but I kinda suspect that sometimes, it's not very exciting.

Honestly, I'm kinda impressed that he took his furlough and decided to knock something off his bucket list. I mean, if life gives you lemons, make lemonade and all that.

EDIT: Oh, they apparently also mentioned it in this article.

[–] tal@lemmy.today 29 points 1 month ago* (last edited 1 month ago)

I mean, it's a bunch of technical gobbledygook from different fields in an Iranian journal dealing with holography, claiming extraordinary results.

Reminds me of the Bogdanov affair.

[–] tal@lemmy.today 1 points 1 month ago (1 children)

But the software needs to catch up.

Honestly, there is a lot of potential room for substantial improvements.

  • Gaining the ability to identify edges of the model that are not particularly relevant to the current problem and unloading them. That could bring down memory requirements a lot.

  • I don't think (though I haven't been following the area) that current models are optimized for being clustered. Hell, the software running them isn't either. There's some guy, Jeff Geerling, who was working on clustering Framework Desktops a couple months back, because they're a relatively-inexpensive way to get a ton of VRAM attached to parallel processing capability. You can have multiple instances of the software active on the hardware, and you can offload different layers to different APUs, but currently, it's basically running sequentially: no more than one APU is doing compute at any given moment. I'm pretty sure that that's something that can be eliminated (if it hasn't already been; there's a toy illustration of the pipelining idea below this list). Then the problem (which he also discusses) is that you need to move a fair bit of data from APU to APU, so you want high-speed interconnects. Okay, that's true if what you want is to run models designed for very expensive, beefy hardware on a lot of clustered, inexpensive hardware...but you could also train models to optimize for this, like using a network of neural nets that have extremely-sparse interconnections between them and denser connections internal to them. Each APU only runs one neural net.

  • I'm sure that we're nowhere near optimal even for the tasks that we're currently doing with the existing models.

  • It's probably possible to tie non-neural-net code in to produce very large increases in capability. To make up a simple example, LLMs are, as people have pointed out, not very good at giving answers to arithmetic questions. But...it should be perfectly viable to add a "math unit" that some of the nodes on the neural net interface with, and train the model to make use of that math unit. And suddenly, because you've just effectively built a CPU into the thing's brain, it becomes far better than any human at arithmetic...and potentially at things that make use of that capability (a rough sketch of the idea follows below this list). There are lots of things that we have very good software for today. A human can use software for some of those things, through their fingers and eyes (not a very high rate of data interchange, but we can do it). There are people like Musk's Neuralink crowd that are trying to build computer-brain interfaces. But we can just build that software directly into the brain of a neural net and have the thing interface with it at the full bandwidth that the brain can operate at. If you build in software to do image or audio processing to help extract information that is likely "more useful" but expensive for a neural net to compute, they might get a whole lot more efficient.
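To make the "only one APU computing at a time" point concrete, here's a toy back-of-the-envelope model of a two-stage pipeline; the timings are made-up, illustrative units, not measurements from his setup. Splitting the work into micro-batches lets both APUs compute at once after the pipeline fills.

```python
# Toy model: two "APUs", each holding half of the model's layers.
# Timings are made-up units, purely to show the shape of the win.

STAGE_TIME = 1.0  # time for one APU to run its half of the layers on one micro-batch

def sequential_time(micro_batches: int) -> float:
    """Naive schedule: APU 1 runs, then APU 2 runs; only one device ever computes."""
    return micro_batches * 2 * STAGE_TIME

def pipelined_time(micro_batches: int) -> float:
    """Pipelined schedule: after the first micro-batch fills the pipe,
    both APUs work in parallel on different micro-batches."""
    return (micro_batches + 1) * STAGE_TIME

for n in (1, 4, 16):
    print(f"{n:>2} micro-batches: sequential {sequential_time(n):4.0f}, pipelined {pipelined_time(n):4.0f}")
```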
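And a minimal, purely hypothetical sketch of the "math unit" bullet above, framed as tool calling rather than literally wiring anything into the net; the tool format, the stand-in model, and the helper names are all invented for illustration, not any real API:

```python
import re

def math_unit(expression: str) -> str:
    """Hypothetical 'math unit': exactly evaluates arithmetic the model is bad at."""
    # Only allow digits and basic operators so eval() can't run arbitrary code.
    if not re.fullmatch(r"[\d\s+\-*/().]+", expression):
        return "error: unsupported expression"
    return str(eval(expression))

def run_model(prompt: str) -> str:
    """Stand-in for an LLM trained to emit tool calls instead of guessing at digits."""
    return "CALL math: 123456789 * 987654321"

def answer(prompt: str) -> str:
    out = run_model(prompt)
    if out.startswith("CALL math:"):
        result = math_unit(out.removeprefix("CALL math:").strip())
        # Feed the exact result back; a real harness would loop until the model
        # produces a final answer rather than another tool call.
        return f"The product is {result}."
    return out

print(answer("What is 123456789 * 987654321?"))
```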

[–] tal@lemmy.today 1 points 1 month ago (2 children)

There’s loads of hi-res ultra HD 4k porn available.

It's still gonna have compression artifacts. Like, the whole point of lossy compression having psychoacoustic and psychovisual models is to degrade the material as far as you can without a human viewer noticing. That doesn't affect you if you're just viewing the content as-is, but it does become a factor if you're doing something else with it. Like, you're working with something in a reduced colorspace with blocks and color shifts and stuff.

I can go dig up a couple of diffusion models finetuned off SDXL that generate images with visible JPEG artifacts, because they were trained on a corpus that included a lot of said material and didn't have some kind of preprocessing to deal with it.

I'm not saying that it's technically-impossible to build something that can learn to process and compensate for all that. I (unsuccessfully) spent some time, about 20 years back, on a personal project to add neural net postprocessing to reduce visibility of lossy compression artifacts, which is one part of how one might mitigate that. Just that it adds complexity to the problem to be solved.
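For what it's worth, this is roughly the shape that kind of post-processing pass takes nowadays; a minimal sketch assuming PyTorch, with placeholder layer sizes and random tensors standing in for real compressed/original training pairs:

```python
import torch
import torch.nn as nn

class ArtifactCleaner(nn.Module):
    """Tiny residual CNN: predicts a correction to add to the decoded frame."""
    def __init__(self, channels=3, width=32, depth=4):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Learn the residual (the artifact pattern) rather than the clean image itself.
        return x + self.body(x)

# Training pairs: heavily compressed frames as input, the originals as targets.
model = ArtifactCleaner()
loss_fn = nn.L1Loss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

compressed = torch.rand(8, 3, 64, 64)  # stand-in for JPEG/H.264-decoded crops
original = torch.rand(8, 3, 64, 64)    # stand-in for the uncompressed source

opt.zero_grad()
loss = loss_fn(model(compressed), original)
loss.backward()
opt.step()
```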

[–] tal@lemmy.today 6 points 1 month ago

I doubt that OpenAI themselves will do so, but I'm absolutely confident that someone will be banging on this, and probably already is. In fact, IIRC from an earlier discussion, someone was already selling sex dolls with said integration, and I doubt that they were including local parallel compute hardware for it.

kagis

I don't think that this is the one I remember, but it doesn't really matter; I'm sure that there's a whole industry working on it.

https://www.scmp.com/tech/tech-trends/article/3298783/chinese-sex-doll-maker-sees-jump-2025-sales-ai-boosts-adult-toys-user-experience

Chinese sex doll maker sees jump in 2025 sales as AI boosts adult toys’ user experience

The LLM-powered dolls are expected to cost from US$100 to US$200 more than existing versions, which are currently sold between US$1,500 and US$2,000.

WMDoll – based in Zhongshan, a city in southern Guangdong province – embeds the company’s latest MetaBox series with an AI module, which is connected to cloud computing services hosted on data centres across various markets where the LLMs process the information from each toy.

According to the company, it has adopted several open-source LLMs, including Meta Platforms’ Llama AI models, which can be fine-tuned and deployed anywhere.

[–] tal@lemmy.today 1 points 1 month ago* (last edited 1 month ago) (4 children)

While I don't disagree with your overall point, I would point out that a lot of that material has been lossily compressed to a degree that significantly degrades quality. That doesn't make it unusable for training, but it does introduce a real complication, since your first task has to be dealing with compression artifacts in the content. Not to mention any post-processing, editing, and so forth.

One thing I've mentioned here (it was half tongue-in-cheek) is that it might be less costly to hire actors specifically to generate video for whatever weak points you need than to try to work only from that training corpus. That lets you get raw, uncompressed data using high-fidelity instruments in an environment with controlled lighting, and you can do stuff like use LIDAR or multiple cameras to make reducing the scene to a 3D model simpler and more reliable. The existing image and video generation models that people are running around with have a "2D mental model" of the world. Bridging the gap to a 3D model is going to be another jump that will have to come to solve a lot of problems. The less hassle there is in dealing with compression artifacts and such on the way to 3D models, probably the better.

[–] tal@lemmy.today 0 points 1 month ago* (last edited 1 month ago) (1 children)

So, I'm just talking about whether the end game is going to be local or remote compute. I'm not saying that one can't generate pornography locally; I'm asking whether people will do that, whether the norm will be to run generative AI software locally (the "personal computer" model that came to the fore from the mid-to-late 1970s on) or remotely (the "mainframe" model, which mostly preceded it).

Yes, one can generate pornography locally...but what if the choice is between a low-resolution, static SDXL (well, or derived model) image and a service that leverages compute to get better images or something like real-time voice synth, recognition, dialogue, and video? I mean, people can already get static pornography in essentially unbounded quantities on the Internet; if someone spent their entire life going through it, they'd never see even a tiny fraction of it. Much of it is of considerably greater fidelity than any material that would have been available in, say, the 1980s; that's certainly true for video. Yet...even in this environment of great abundance, there are people subscribing to commercial (traditional) pornography services, and getting hardware and services to leverage generative AI, even though there are barriers in time, money, and technical expertise to doing so.

And I'd go even further, outside of erotica, and say that people do this for all manner of things. I was really impressed with Wolfenstein 3D when it came out. Yet...people today purchase far more powerful hardware to run 3D video games. You can go and get a computer that's being thrown out that could probably run dozens of instances of Wolfenstein 3D concurrently...but virtually nobody does, because there's demand for the new entertainment material that the new software and hardware permit.

[–] tal@lemmy.today 37 points 1 month ago (2 children)

Legend has it that every new technology is first used for something related to sex or pornography. That seems to be the way of humankind.


Tim Berners-Lee, inventor of the World Wide Web, HTML, URLs, and HTTP.
