this post was submitted on 06 Sep 2025
31 points (87.8% liked)

No Stupid Questions


How could an artificial intelligence (as in large-language-model-based generative AI) be better for information access and retrieval than an encyclopedia with a clean classification model and a search engine?

If we add a step of processing -- where a genAI "digests" perfectly structured data and tries, however poorly, to regurgitate things it doesn't understand -- aren't we just adding noise?

I'm talking about the specific use-case of "draw me a picture explaining how a pressure regulator works", or "can you explain to me how to code a recursive pattern matching algorithm, please".
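(To make the second prompt concrete: the kind of answer I'd hope to get is the classic textbook one. Here's a minimal, hand-written sketch of a recursive glob-style matcher supporting just `?` and `*` -- my own illustration, not something an AI produced:)

```python
def match(pattern: str, text: str) -> bool:
    """Recursively match a glob-style pattern against text.

    '?' matches any single character; '*' matches any sequence of
    characters (including the empty one); anything else must match
    literally.
    """
    if not pattern:
        # An empty pattern only matches an empty text.
        return not text
    if pattern[0] == "*":
        # '*' either consumes nothing (drop the '*'),
        # or consumes one character of text (keep the '*').
        return match(pattern[1:], text) or (bool(text) and match(pattern, text[1:]))
    if text and pattern[0] in ("?", text[0]):
        # '?' or a literal character match: advance both strings.
        return match(pattern[1:], text[1:])
    return False
```

A decent explanation would walk through exactly those three cases, which is what I'd want the AI to do rather than paraphrase them noisily.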

I also understand how it can help people who do not want to, or cannot, make the effort to learn an encyclopedia's classification plan, or how a search engine's syntax works.

But on a fundamental level, aren't we just adding an uncontrollable step of noise injection into a decent, time-tested information flow?

top 28 comments
[–] Feyd@programming.dev 28 points 1 week ago

But on a fundamental level, aren't we just adding an uncontrollable step of noise injection into a decent, time-tested information flow?

Yes.

[–] e0qdk@reddthat.com 16 points 1 week ago (1 children)

If it actually worked reliably enough, it would be like having a dedicated, knowledgeable, and infinitely patient tutor that you can ask questions to and interactively explore a subject with who can adapt their explanations specifically to your way of thinking. i.e. it would understand not just the subject matter but also you. That would help facilitate knowledge transfer and could reduce the tedium of trying to make sense of something that's not explained well enough for you to understand (as written) with your current background knowledge but which you are capable of understanding.

[–] Doomsider@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

Looks like we just found our next head of the Department of Education!

Now we just have to tweak Grok a little and our children will be ready for the first lesson of their new AI education: *checks notes* Was the Holocaust real or just a woke story?

[–] Perspectivist@feddit.uk 10 points 1 week ago (2 children)

Looking at my ChatGPT "random questions" tab and the things I've asked it, much of it is the kind of thing you probably couldn't look up in an encyclopedia.

For example:

"Is a slight drop in the engine rpm when shifting from neutral to 1st gear while holding down the clutch pedal a sign of a worn-out clutch?"

Or:

"What's the difference between Mirka's red and yellow sandpaper?"

[–] XeroxCool@lemmy.world 3 points 1 week ago

Hopefully, it told you that's not a sign of a worn clutch. Assuming no computer interference and purely mechanical effects, then that's a sign the clutch is dragging. A worn clutch would provide more of an air gap with the pedal depressed than a fresh clutch. If you want to see a partial list of potential causes, see my reply to the other comment that replied to you.

Your questions are still not proof that LLMs are filling some void. If you think of a traditional encyclopedia, of course it's not going to know what the colors of one manufacturer's sandpapers mean. I'm sure that's answered somehow on their website or wherever you came across the two colors in the same grit and format. Chances are, if one is more expensive and doesn't have a defined difference in abrasive material, the pricier one is going to last longer by way of having stronger backing paper, better abrasive adhesive, and better resistance to clogging. Whether or not the price is necessary for your project is a different story. ChatGPT is reading the same info available to you. But if you don't understand the facts presented on the package, then how can you trust the LLM to tokenize it correctly to you?

Similarly, a traditional encyclopedia isn't going to have a direct answer to your clutch question, but, if it has thorough mechanical entries (with automotive specifics), you might be able to piece it together. You'd learn the "engine" spins in unison up to the flywheel, the flywheel is the mating surface for the clutch, the clutch pedal disengages the clutch from the flywheel, and that holding the pedal down for 5+ seconds should make the transmission input components spin down to a stop (even in neutral). You're trusting the LLM here to have a proper understanding of those linked mechanical devices. It doesn't. It's aggregating internet sources, buzzfeed style, and presenting anything it finds in a corrupted stream of tokens. Again, if you're not brought up to speed on how those components interact, then how do you know what it's saying is correct?

Obviously, the rebuttal is: how can you trust anyone's answer if you're not already knowledgeable? Peer review is great for forums/social sites/wikis, in the way of people correcting other comments. But beyond that, for formal informational sites, you have to vet places as sources - a skill being actively eroded by Google or ChatGPT "giving" answers. Neither is actually answering your questions. They're regurgitating things they found elsewhere. Remember, Google was happy to take Reddit answers as fact and tell you Elmer's glue will hold cheese to pizza and cockroaches live in cocks. If you saw those answers with their high upvote count, you'd understand the nuance that Reddit loves shitty sarcastic answers for entertainment value. LLMs don't, because they, literally, don't understand anything. It's up to you to figure out if you should trust an algorithm-promoted Facebook page called "car hacks and facts" filled with bullshit videos. It's up to you to figure out if everythingcar.com is untrustworthy because it has vague, expansive wording and more ad space than information.

[–] bright_side_@lemmy.world 2 points 1 week ago (2 children)

Now I want to know the answer to the clutch question ☺️

[–] Perspectivist@feddit.uk 3 points 1 week ago

It's normal. A worn-out clutch has different and more noticeable symptoms.

[–] XeroxCool@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

It's not. A worn clutch is losing its ability to connect the engine to the transmission. With the pedal depressed, the clutch should not be touching the engine [flywheel] at all. So a worn clutch would provide slightly more of an air gap between the engine and the transmission. So to answer OP's question, assuming there's no computer programming involved with the drop and it's a purely mechanical effect, then the clutch is dragging. There are many possibilities, including misadjusted clutch mechanisms (cable/plunger nut, pedal free play screw), worn clutch mechanisms (bent clutch fork, leaking fluid/worn cable sheath/stretched cable, broken pedal mount, bent levers), or a jam (extra carpet under the pedal, debris in the transmission lever), to name several possibilities.

I had both a worn clutch and a dragging clutch in my Geo at different points. The only result of a worn clutch was having the engine rev up faster than the trucklet was accelerating, as if it were a loosey-goosey automatic. No shifting issues. When the cable was out of adjustment, it wasn't disengaging properly. It happened while driving and made it very difficult to drive once I came to a stop. I had to ride the poor synchro to get it up in speed to, essentially, clutchless shift into 1st. 3 blocks later, I forced it in just in time to climb my driveway.

But, as a much less dramatic experience, often enough the aftermarket floormat would slip under the pedal and just slightly limit the clutch pedal travel, with an effect more like the parent comment's experience. It would go into gear with a little crunch, a little shudder, and a little engine drop.

Side note: it's normal to let the clutch out in neutral and have the engine drop a little. If the clutch pedal is up, the engine will be driving multiple input components - they just won't be further connected to the output components. It takes a little energy to spin those back up to 700rpm. They should spin down after a few seconds. If 5-10 seconds pass with the pedal depressed and the gears still resist, then comply with, being engaged with the shifter, they aren't slowing down. That'd be another symptom/diag point for OP to test for a dragging clutch. A caveat is that if there's zero input and output speed on the transmission, the dogs may not be lined up and will still prevent engagement. It takes a few tries to confirm "sometimes won't engage" vs "really will not engage".

[–] SolOrion@sh.itjust.works 10 points 1 week ago* (last edited 1 week ago) (2 children)

Well, the primary thing is that you can ask extremely specific questions and get tailored responses.

That's the best use case for LLMs, imo. It's less of a replacement for a traditional encyclopedia- though people use it like that also- and more of a replacement for googling your question and getting a Reddit thread where someone explains it.

The issue comes when people take everything it spits out as gospel, and do zero fact checking on it- basically the way that they hallucinate is the problem I have with it.

If there's a chance it's going to just flatly make things up, invent statistics, or just be entirely wrong.. I'd rather just use a normal forum and ask a real person that probably has a clue whatever question I have. Or try to find where someone has already asked that question and got an answer.

[–] NigelFrobisher@aussie.zone 5 points 1 week ago (1 children)

If you have to go and fact check the results anyway, is there even a point? At work now I’m getting entirely AI generated pull requests with AI generated descriptions, and when I challenge the dev on why they went with particular choices they can’t explain or back them up.

[–] SolOrion@sh.itjust.works 2 points 1 week ago* (last edited 1 week ago)

That's why I don't really use them myself. I'm not willing to spread misinformation just because ChatGPT told me it was true, but I also have no interest in going back over every response and double checking that it's not just making shit up.

[–] Hotzilla@sopuli.xyz 4 points 1 week ago* (last edited 1 week ago)

Google is so shit nowadays; its main purpose is to sell you things, not to actually retrieve the things you ask for.

Mainly you see this with coding-related questions; the results were much better 5 years ago. Now the only way to get results is to ask an LLM and hope it doesn't hallucinate some library that doesn't exist.

Part of the issue is that SEO got better and google stopped changing things to avoid SEO manipulation.

[–] Valmond@lemmy.world 6 points 1 week ago

AI today is quite juvenile in LLMs and quite bonkers in image generation. They will probably get better, like all information technology does (anyone remember the mobile phone? It went from popular, bad, and expensive to a 20€ perfectly working door-stop).

So to answer your question, imagine an AI functioning like a personal teacher, so that when you see that pressure regulator valve, you can ask why it works. What happens if the gas is not isotropic? How does it react to pressure changes? Where is it used? Show me a simulation of it in a real-world situation. Calculate which one to get as this specific replacement. Can you 3D print one? Why was it used on steam engines, or was it? Thousands of pieces of information that won't fit on one page, and that can be explained to you at your level too, if the teacher is smart enough.

I mean, that could be quite neat IMO.

[–] floo@retrolemmy.com 6 points 1 week ago

It literally cannot

[–] hera@feddit.uk 4 points 1 week ago

One of the ways I've found it to be useful so far is that it can contextualise knowledge for you.

[–] Flax_vert@feddit.uk 3 points 1 week ago

LLMs are nice for basic research or explaining stuff in your terms. Kind of like an interactive encyclopedia. This does sacrifice accuracy, though

[–] 211@sopuli.xyz 1 points 1 week ago

To me the value has come mostly from "ok, so it sounds to me like you are saying that..." and the ability to confirm that I haven't misunderstood something (of course, with current LLMs, both the original answer and the verification have to be taken with a heaping of salt). And the ability to adapt it on the go to a concrete example. So, kind of like having a teacher or an expert friend, and not just a search engine.

Like the last time I relied heavily on an LLM to help/teach me with something: it was to explain the PC boot process and BIOS/UEFI to me, and how that applied, step by step, to successfully dealing with USB and bootloader issues on an "eccentric" HP laptop when installing Linux. The combination of explaining and doing and answering questions was way better than an encyclopedia. No doubt it could have been done with blog posts and textbooks, and I did have to make "educated guesses" on occasion, but all in all it was a great experience.

[–] Alsjemenou@lemy.nl 1 points 1 week ago* (last edited 1 week ago)

The problem will always be that you have to use an LLM to ask questions in natural language, which means it gets training data from outside whatever database you're trying to get information from. There isn't enough training data in an encyclopedia to make an LLM.

So it can't be better, because if it doesn't find anything it will still respond to your questions in a way that makes it seem like it did what you asked. It just isn't as reliable as you yourself checking and going through the data. It can make you faster and find connections you wouldn't easily make yourself. But you can just never trust it the way you can trust an encyclopedia.

[–] Hackworth@sh.itjust.works 1 points 1 week ago

In much the same way people think of digital storage as external memory, I think of generative A.I. as external imagination. Of course, human memory doesn't work like a hard drive, and LLMs don't work like our imaginations. But as a guiding metaphor, it seems to work well for identifying good/bad use cases.

If the world were made of pudding