this post was submitted on 31 Dec 2025
316 points (97.9% liked)

No Stupid Questions

45251 readers
766 users here now

No such thing. Ask away!

!nostupidquestions is a community dedicated to being helpful and answering each other's questions on various topics.

In addition to the rules defined here for lemmy.world, the rules for posting and commenting are as follows:



Rule 1- All posts must be legitimate questions. All post titles must include a question.

Joke or trolling questions, memes, song lyrics as titles, etc. are not allowed here. See Rule 6 for all exceptions.



Rule 2- Your question subject cannot be illegal or NSFW material.

You will be warned first, banned second.



Rule 3- Do not seek mental, medical, or professional help here.

Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.



Rule 4- No self promotion or upvote-farming of any kind.

That's it.



Rule 5- No baiting, sealioning, or promoting an agenda.

Questions which, instead of being innocuous, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed, and the authors warned or banned depending on severity.



Rule 6- Regarding META posts and joke questions.

Provided it is about the community itself, you may post non-question posts using the [META] tag on your post title.

On Fridays, you are allowed to post meme and troll questions, on the condition that they are in text format only and conform with our other rules. These posts MUST include the [NSQ Friday] tag in their title.

If you post a serious question on Friday and are looking only for legitimate answers, then please include the [Serious] tag in your post title. Irrelevant replies will then be removed by moderators.



Rule 7- You can't intentionally annoy, mock, or harass other members.

If you intentionally annoy, mock, harass, or discriminate against any individual member, you will be removed.

Likewise, if you are a member or sympathiser of, or otherwise resemble, a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you were provably vocal about your hate, then you will be banned on sight.



Rule 8- All comments should try to stay relevant to their parent content.



Rule 9- Reposts from other platforms are not allowed.

Let everyone have their own content.



Rule 10- The majority of bots aren't allowed to participate here. This includes using AI responses and summaries.



Credits

Our breathtaking icon was bestowed upon us by @Cevilia!

The greatest banner of all time: by @TheOneWithTheHair!

founded 2 years ago

The Internet being mostly broken at this point is driving me a little insane, and I can't believe that people who have the power to keep a functioning search engine for themselves wouldn't go ahead and do it.

I wonder about this every time I see people(?) crowing about how amazing AI is. Like, is there some secret useful AI out there that plebs like me don't get to use? Because otherwise, huh?

top 50 comments
[–] CameronDev@programming.dev 182 points 1 week ago (1 children)

I don't think any of these tech execs (all execs?) use their products. They all have assistants to do everything for them, so they have no idea what this whole "internet" thing is, other than it makes them money.

[–] pelespirit@sh.itjust.works 44 points 1 week ago (4 children)

Oh they know how to get to the porn.

[–] TheFogan@programming.dev 71 points 1 week ago (2 children)

> Pretty sure they don't bother with just photos, I'm sure there's some guy that's replaced the one that died in prison.

Pretty sure they don't bother with just photos, I'm sure there's some guy that's replaced the one that ~~died~~ was murdered in prison.

FTFY

[–] pewgar_seemsimandroid@lemmy.blahaj.zone 7 points 1 week ago (1 children)

is this talking about Epstein?

[–] A_Random_Idiot@lemmy.world 34 points 1 week ago (1 children)

mmhmm.

Once you get above a certain value in your bank account, laws stop applying to you unless you do something so catastrophically stupid as to threaten the whole scheme; then you're hung out as a sacrificial lamb to protect the others.

I guarantee Epstein's cartel never died. Its headquarters just moved, and the leader changed. That's all.

[–] Duamerthrax@lemmy.world 21 points 1 week ago

The only times laws affect rich people are when they screw over other rich people. Elizabeth Holmes didn't get in trouble for selling incorrect blood tests to patients. She got in trouble for defrauding other rich people.

[–] cecilkorik@piefed.ca 29 points 1 week ago (1 children)

No, porn is for poor people. These people can (and do) easily afford the live action version.

[–] Tangent5280@lemmy.world 12 points 1 week ago

Thot-on-retainer

[–] wreckedcarzz@lemmy.world 21 points 1 week ago* (last edited 1 week ago)

Yeah, they just said that their assistants handle everything. Little bit of mood lighting, little bit of a new BMW, little bit of a cover up and hush money. All expensed to the company. Ahh, the good life.

[–] myster0n@feddit.nl 17 points 1 week ago (1 children)

They probably use exclusive paid sites. Maybe even ... very exclusive.

[–] 14th_cylon@lemmy.zip 20 points 1 week ago* (last edited 1 week ago) (2 children)

they probably use hookers? porn is for the poor people.

[–] brucethemoose@lemmy.world 59 points 1 week ago* (last edited 1 week ago)

The LLM? Yes, actually, and it's not secret:

https://aistudio.google.com/

The "preview" version are often pretty good, before Google deep fries them with sycophantic RHLF. For example, Gemini 2.0 and 2.5 Pro both peaked in temporary experimental versions, before getting worse (and benchmark maxxed) in subsequent updates.

But if you really want unenshittified LLMs, look into open-weights models like GLM. They're useful tools, and locally runnable. They are kind of a "secret useful AI out there that plebs don't get to use," because the software finickiness and hardware requirements make them difficult to run locally.
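If it helps to make "locally runnable" concrete, here's a minimal sketch of local inference with Hugging Face transformers. The model id is illustrative (substitute whatever open-weights model your hardware can hold), and it assumes torch, transformers, and accelerate are installed:

```python
# Minimal local-inference sketch with an open-weights model.
# The model id below is illustrative, not a specific recommendation;
# some models also require trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/glm-4-9b-chat"  # illustrative open-weights model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the transformer architecture in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The finickiness mentioned above is mostly in the setup (drivers, quantization, VRAM), not in the code itself.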

On top of that, Google employees probably have access to "teacher" models and such that the public never gets to see.


For search? IDK. I'm less familiar with what Google does internally, but honestly, from what I've read, the higher-ups are drinking the Kool-Aid and assert all is fine.

[–] glitchdx@lemmy.world 52 points 1 week ago

The problem with conspiracy theories like this is that they assume the people running the conspiracy are competent.

[–] QuinnyCoded@sh.itjust.works 40 points 1 week ago (6 children)

Another thing to mention: YOUTUBE. The search bar doesn't even do anything; it shows RECOMMENDATIONS instead of results for your search.

Paying doesn't even stop that! It's actually maddening

[–] cheesybuddha@lemmy.world 11 points 1 week ago (1 children)

How do you use the search bar in YouTube? I put in topics or keywords, which seem to work just fine.

Are you putting in whole questions? I'm not sure the search function is designed to work like that.

[–] MidsizedSedan@lemmy.world 10 points 1 week ago (1 children)

I saw a cool video here once. Typed the EXACT title into YouTube (including caps in the right words) and it didn't show up. Only the big channels around that topic.

Personal rant that is still related, but not needed

Hell, I play Trackmania Turbo. It's still getting new videos from the community, 3-5 videos a week. Look it up, and some of the first results are from a non-Turbo player who uploaded one video 4 years ago. But since that channel gets millions of views, THAT video is promoted, not the fans still playing now.

[–] GenosseFlosse@feddit.org 6 points 1 week ago

Unlisted videos don't show up in the search as far as I know.

[–] possumparty@lemmy.blahaj.zone 8 points 1 week ago (7 children)

type in before:2027 at the end of your search for a much more palatable experience

[–] leftzero@lemmy.dbzer0.com 30 points 1 week ago (11 children)

No. They're drinking their own Kool-Aid.

They've offloaded what little thinking they did to LLMs (not that LLMs can think, but in this case it makes no difference), and at this point would no longer be able to function if they had to think for themselves.

Don't think of them as human people with human needs.

They're mere parasites, all higher functions withered away through lack of use, now more than ever.

They could die and be replaced by their chatbots, and we wouldn't notice a difference.

[–] pugnaciousfarter@literature.cafe 12 points 1 week ago* (last edited 1 week ago) (1 children)

I don't think they are drinking their own Kool-Aid.

Meta's Zuck and TikTok's CEO don't let their kids on their respective short-form content platforms because they know their harmful effects.

They are smart enough to know not to dip into their stash.

I think they definitely have their own version of it.

[–] OhNoMoreLemmy@lemmy.ml 10 points 1 week ago (1 children)

Nah, you can actually see some of them developing AI psychosis.

https://medium.com/write-a-catalyst/this-prominent-vc-investor-just-had-a-chatgpt-induced-psychosis-on-twitter-heres-what-this-means-197ae5df77f4

You've got to understand that most AI execs aren't technical people; they're hype men. And LLMs are weirdly good at hype and the illusion of technical correctness. So they don't have a problem with it.

Sam Altman saying he uses ChatGPT to tell him how to act with his baby is one of the things he's said that I actually believe. Of course, he's also got a team of nannies he couldn't be bothered to mention, but the trust in ChatGPT is there.

[–] Chozo@fedia.io 25 points 1 week ago (1 children)

Those assholes probably kept a working version of Inbox for themselves. 😡

[–] Carnelian@lemmy.world 24 points 1 week ago (16 children)

> every time I see people(?) crowing about how amazing AI is

You’re correct that there’s a massive flood of bots pushing it everywhere. But regardless of what the subject is, once someone has “bought in” to a scam they tend to stick with it and defend it no matter what. Because the alternative is admitting they were fooled, and that’s basically an uncrossable bridge for most folks.

People on their literal death beds were using their literal last words (before being intubated with covid) to threaten nurses not to go near them with “the jab”. So it really doesn’t surprise me that people continue using “AI” despite it being worse than worthless for literally everything

[–] seathru@quokk.au 10 points 1 week ago

“One of the saddest lessons of history is this: If we’ve been bamboozled long enough, we tend to reject any evidence of the bamboozle. We’re no longer interested in finding out the truth. The bamboozle has captured us. It’s simply too painful to acknowledge, even to ourselves, that we’ve been taken. Once you give a charlatan power over you, you almost never get it back.” ― Carl Sagan

[–] HugeNerd@lemmy.ca 24 points 1 week ago

Do you think the CEO of Kraft feeds macaroni cheese dinner from a box to his kids?

[–] favoredponcho@lemmy.zip 22 points 1 week ago (7 children)

There is Kagi for the rest of us

[–] Melvin_Ferd@lemmy.world 19 points 1 week ago* (last edited 1 week ago)

I for sure believe they do. The number of applications that launched with genuinely useful features, only to have them rolled back because of public backlash or shoved behind a paywall, has always pissed me off. Take Bing image search. When it first showed up, you could actually use it for OPSEC and for tracing the origin of suspect memes. If something felt astroturfed or a user seemed like a bot, I could verify it with Bing search. Now I can’t even use it to search a logo because they gutted it. There’s no way I believe that search capability doesn’t still exist behind closed doors, in the hands of political firms, law offices, or government agencies.

[–] pyrinix@kbin.melroy.org 17 points 1 week ago

I assume all high-level positions in tech companies are using better versions than what they shove out to the rest.

I mean, Microsoft treats Enterprise users with class via Windows 10 Enterprise. That version doesn't have nearly the amount of bloat that even Professional has. Hell, Enterprise doesn't even have that stupid online search function.

So it's like they KNOW they have greenlit some shitty ideas, but they won't deal with them, so why not just throw it all onto others and make their experience miserable?

[–] Sanctus@anarchist.nexus 16 points 1 week ago (1 children)

Wake up, babe. New conspiracy theory just dropped.

I choose to believe this now.

[–] salacious_coaster@feddit.online 10 points 1 week ago (1 children)

Right? I know this is a conspiracy theory for which I have no evidence. It just makes too much sense to me to not believe it.

[–] TropicalDingdong@lemmy.world 16 points 1 week ago* (last edited 1 week ago) (6 children)

This is really the fear we should all have. And I've wondered about this specifically in the case of Thiel, who seems quite off their rocker.

Some things we know.

Architecturally, the underpinnings of LLMs existed long before the modern crop. "Attention Is All You Need" is basic reading these days; Google literally invented transformers but failed to create the first LLM. This is important.

Modern LLMs came through basically two aspects of scaling a transformer. First, massively scale the transformer. Second, massively scale the training dataset. This is what OpenAI did. What Google missed was that the emergent properties of networks change with scale. But just scaling a large neural network alone isn't enough; you need enough data to allow it to converge on interesting and useful features.

On the first part, scaling the network: this is basically what we've done so far, along with some cleverness around how training data is presented, to create improvements to existing generative models. Larger models are basically better models. There is some nuance here, but not much. There have been no new architectural improvements that have resulted in the kind of order-of-magnitude improvement we saw in the jump from the LSTM/GAN days to transformers.

Now, what we also know is that it's incredibly opaque what is actually presented to the public. Some open-weights models are in the range of hundreds of billions of parameters; most aren't that big. I have Qwen3-VL on my local machine; it's 33 billion parameters. I think I've seen some 400B-parameter models in the open-source world, but I haven't bothered downloading them because I can't run them. We don't actually know how many billions of parameters models like Opus 4.5 (or whatever shit stack OpenAI is shipping these days) have. It's probably in the range of 200B-500B, which we can infer based on the upper limits of what can fit on the most advanced server-grade hardware. Beyond that, it's MoE: multiple models on multiple GPUs conferring on results.

What we haven't seen is any kind of stepwise, order-of-magnitude improvement since the 3.5-to-4 jump OpenAI made a few years ago. It's been very... iterative, which is to say underwhelming, since 2023. It's very clear that an upper limit was reached, and most of the improvements have been around QoL and nice engineering, but nothing has fundamentally or noticeably improved in terms of the underlying quality of these models. That is in and of itself interesting, and there could be several explanations for it.

Getting very far beyond this takes us beyond the hardware limitations of even the most advanced manufacturing we currently have available to us. I think the most a Blackwell card has is ~288GB of VRAM. Now, it might be that at this scale we just don't have the hardware available to even try to look over the hedge and see how a larger model might perform. This is one explanation: we hit the memory limits of the hardware, and we might not see a major performance improvement until we get into the TB range of memory on GPUs.
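To make the memory-limit argument concrete, here's a back-of-envelope sketch. It counts weights only, at fp16 (2 bytes per parameter), and ignores KV cache and runtime overhead, so real requirements are higher:

```python
# Rough weight-memory math for dense models: GB ≈ params (billions) * bytes per param.
def weight_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
    """Approximate weight memory in GB at fp16/bf16 (2 bytes per parameter)."""
    return params_billion * bytes_per_param

for size_b in (33, 200, 500):
    cards = weight_gb(size_b) / 288  # ~288 GB of VRAM per top-end card, per above
    print(f"{size_b:>4}B params ≈ {weight_gb(size_b):>6,.0f} GB ≈ {cards:.1f} cards")
```

A 500B dense model is already roughly a terabyte of weights at fp16, which is why anything much past that point gets sharded across multiple GPUs.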

Another explanation could be that, at the consumer level, they stopped throwing more compute resources at the problem. Remember the MoE thing? Well, these companies, allegedly, are supposed to make money. It's possible that they just stopped throwing more resources at their product lines, and that more MoE does actually result in better performance.

In the first scenario I outlined, executives would be limited to the same useful but kinda-crappy LLMs we all have access to. In the second scenario, executives might have access to super-powered, high-MoE versions.

If the second scenario is true, and highly clustered LLMs can demonstrate an additional stepwise performance improvement, then we're already fucked. But if this were the case, it's not like Western companies have a monopoly on GPUs or even models, and we're not seeing that kind of massive performance bump elsewhere, so it's likely that MoE also has its limits and they've been reached at this point. It's also possible we've reached the limits of the training data: that even having consumed all of 400k years of humanity's output, it's still too dumb to draw a full glass of wine. I don't believe this, but it is possible.

[–] partial_accumen@lemmy.world 6 points 1 week ago (3 children)

> It's also possible we've reached the limits of the training data.

This is my thinking too. I don't know how to solve the problem either, because datasets created after about 2022 are likely polluted with LLM results baked in. Even at 95% precision, that means 5% hallucination baked into the dataset. I can't imagine enough grounding is possible to mitigate that. As the years go forward, the problem only gets worse, because more LLM results will be fed back in as training data.
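Here's a toy sketch of that compounding effect. Every number in it is an assumption for illustration (5% hallucination per generation, model output's share of the corpus growing 10 points per generation), not a measurement:

```python
# Toy model of LLM output compounding in training data across generations.
def polluted_fraction(generations: int,
                      hallucination_rate: float = 0.05,
                      share_growth: float = 0.10) -> float:
    bad = 0.0              # fraction of the corpus that is hallucinated
    synthetic_share = 0.0  # fraction of the corpus that is model output
    for _ in range(generations):
        synthetic_share = min(1.0, synthetic_share + share_growth)
        # Model output carries fresh hallucinations plus inherited bad data.
        model_bad = bad + hallucination_rate * (1.0 - bad)
        bad = synthetic_share * model_bad
    return bad

for g in (1, 5, 10, 20):
    print(f"generation {g:>2}: {polluted_fraction(g):5.1%} of corpus polluted")
```

Under these made-up assumptions the polluted share keeps climbing once model output dominates the corpus, which is the "gets worse every year" dynamic in miniature.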

[–] TropicalDingdong@lemmy.world 6 points 1 week ago

I mean, that's possible, but I'm not as worried about that. Yes, it would make future models worse. But it's also entirely plausible to just cultivate a better dataset. And even small datasets can be used to make models that are far better at specific tasks than any generalist LLM. If better data is better, then the solution is simple: use human labor to cultivate a highly curated, high-quality dataset. I mean, it's what we've been doing for decades in ML.

I think the bigger issue is that transformers are incredibly inefficient in their use of data. How big of a corpus do you need to feed into an LLM to get it to solve a y = mx + b problem? Compare that to a simple neural network or a random forest. For domain-specific tasks they're absurdly inefficient. I do think we'll see architectural improvements, and while the consequences of improvements have been non-linear, the improvements themselves have been fairly, well, linear.
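For a sense of that gap, here's the y = mx + b case done the classical way: twenty noisy points and ordinary least squares recover the line, where a transformer would need a wildly larger corpus to internalize the same relationship.

```python
# Data efficiency in miniature: recover y = m*x + b from 20 noisy examples.
import numpy as np

rng = np.random.default_rng(seed=0)
x = rng.uniform(-10, 10, size=20)
y = 3.0 * x + 7.0 + rng.normal(0.0, 0.1, size=20)  # true m=3, b=7, light noise

m, b = np.polyfit(x, y, deg=1)  # ordinary least squares
print(f"recovered m ≈ {m:.2f}, b ≈ {b:.2f}")  # ≈ 3.00 and ≈ 7.00
```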

Before transformers we basically had GANs and LSTMs as the latest and greatest. Before that, UNet was the latest and greatest (and I still go back to it, often), and before that, basic NNs and random forests. I do think we'll get some stepwise improvements to machine learning, and we're about due for some. But it's not going to be tinkering at the edges. It's going to be something different.

The only thing I'm truly worried about is that, even if it's unlikely, just 10x-ing the size of an existing transformer (say from 500 billion parameters to 5 trillion, something you would need terabytes of VRAM to even process) results in totally new characteristics, in the same way that scaling from 100 million parameters to 10 billion resulted in something that, apparently, understood the rules of language. There are real land mines out there that none of us as individuals have the ability to avoid. But the "poisoning" of the data? If history tells us anything, it's that if a capitalist thinks something might be profitable, they'll throw any amount of human suffering at trying to accomplish it.

[–] etchinghillside@reddthat.com 14 points 1 week ago

Microsoft employees are the beta testers of their software. I expect no different from Google.

[–] zaphod@sopuli.xyz 13 points 1 week ago

No, they just have their human assistants as a filter to use the enshittified search.

[–] Blackmist@feddit.uk 12 points 1 week ago

Nah, all the SEO nobheads poisoned search well before Google managed it.

The internet has been inventing nonsense based seemingly on your search queries for a long-ass time.

[–] Hydrii@lemmy.blahaj.zone 12 points 1 week ago (1 children)

I fully believe in old.Google.com existing now

Old Google was the tits. Even the Google of five years ago would be hyper-advanced technology today.

[–] untorquer@lemmy.world 11 points 1 week ago

Actors that are externally awful are ubiquitously internally awful. For example, consider every imperialist/colonial empire that has ever existed.

[–] TommySoda@lemmy.world 10 points 1 week ago

It doesn't take a lot of tech skill to be an exec at a tech company. My guess is they fall into the same category as those who see the AI overview on Google, think "wow, this is so much better", and never second-guess the results.

[–] burntbacon@discuss.tchncs.de 8 points 1 week ago

I think yes, and no. There are certainly in-house tools that the outside folks don't get. LLMs for sure have better tiers and loosened guardrails.

...buuuuut, the people at an 'executive' level are also entirely unlike you and me. They are simultaneously as gullible and foolish as the 'sheep' of society who are buying into the 'AI' hype of LLMs, and so far removed from our situation that even using an LLM or a search engine is entirely outside their experience. They aren't going to use an LLM to plan out a vacation or a work schedule and have it fail, any more than they would have looked through an SEO-optimized bullshit website about vacuum cleaners (or a slideshow-ified 'top ten Pacific vacations!' site built to show you a bunch of ads) five years ago. They'll ask the LLM (or the search engine, looking only at the AI box at the top) for the best Pacific vacations, then tell their assistant to plan the trip based on a quick glance at the result (or the same for a vacuum cleaner to replace the one that broke when the house cleaner was trying to get the super-long hair of the fru-fru breed that's only allowed in two rooms of the house out of the super luxurious thick rug).

The way they use the LLM is perfectly fine for them. They aren't going to see any negatives from it, so the in-house or publicly available versions aren't really the reason for their ability to 'crow' about it. Same for the general downtrend of the internet. Their use case fucking sucks, and it isn't affected.

[–] bradorsomething@ttrpg.network 7 points 1 week ago

There's something important to understand about LLMs. You need to imagine them as a crowd of 1,000 people whom you run an algorithm over to get close to the most popular opinion as the answer.
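A toy rendering of that analogy; the answer distribution here is invented, and `sample_answer` is a hypothetical stand-in for one stochastic model call:

```python
# "Crowd of 1,000" as code: sample many answers, keep the most common one
# (the same idea as majority-vote / self-consistency decoding).
import random
from collections import Counter

def sample_answer() -> str:
    # Pretend answer distribution from a temperature > 0 model.
    return random.choices(["A", "B", "C"], weights=[0.5, 0.3, 0.2], k=1)[0]

votes = Counter(sample_answer() for _ in range(1000))
answer, count = votes.most_common(1)[0]
print(f"majority answer: {answer} ({count}/1000 votes)")
```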

[–] cerebralhawks@lemmy.dbzer0.com 6 points 1 week ago (1 children)

Apple execs use iPhones with a modified iOS that lets them use a keyboard that doesn't suck and doesn't have sponsored auto-corrections, and I will die on this hill. /s

Honestly though, I think a lot of people at Microsoft just use Macs. Google? I have no idea. I would imagine the smarter ones search with DuckDuckGo or something like that. Apple? They probably just use their own stuff since they look down their noses at the competition, but they use the best. Co-founder Steve Wozniak basically confirmed that. Even though he no longer works there, he said they send him the best iPhone every year. (I'm not sure if they still do that.) He also said it's pretty but he wishes it did half the shit his Android phone does. He also uses custom firmware on Android, so he's not just using a stock Android phone, he's running something like GrapheneOS or LineageOS. So Woz is basically just collecting them at this point. Maybe he donates them. I dunno. So yeah, Tim Cook has a Mac on his desk, but it's not the $480 M4 Mac mini, it's a fully spec'd Mac Studio that would probably set you back close to 10 grand. Because why would he use anything less? And he's carrying the top spec'd MacBook Pro. And a 1TB iPhone Pro... he doesn't seem like a big guy, so he might just be using the "regular" Pro and not the Pro Max, as a personal preference. Honestly, the executives can use whatever they want, but they'll only be seen with the flagship.

[–] chillpanzee@lemmy.ml 6 points 1 week ago

I worked for a FAANG company for a lotta years. We always had the fully spec'd laptop/desktop/phone. It's not reserved for just the C-suite; the tech is cheap compared to almost every other aspect of employing expensive labor. Hell, the food we got every day probably cost the company well more than the tech.
