this post was submitted on 18 Mar 2026
-29 points (31.6% liked)

No Stupid Questions


I get some of the surface level reasons, and those annoy me too. Cramming AI into everything is dumb and unnecessary.

However, I do feel that at a deeper level, it has a lot of useful applications that will absolutely change society and improve the efficiency and skills of those who use it. For example, someone who wants to learn to code could take a few different paths. There are the traditional ones: read books or go to school. Or you could pay for a bootcamp or an online coding education platform. Or you could just tell an AI chatbot you want to learn to code, have it act as your teacher, and get your errors corrected in real time. Another application is generating ideas or quick mock-ups. Say I'm playing a game of D&D with friends and need a character avatar: I just give the AI a description and it makes one up quickly. It might take a few prompts, but it usually does a pretty good job. Or if I need a few enemies for a scenario, I can provide a description of those enemies and get a quick stat block made up for them.
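One way to make that stat-block workflow repeatable is to build the prompt once and reuse it. This is a hypothetical sketch: the fields, wording, and function name are illustrative, not any official API, and you'd paste the result into whatever chatbot or local model you use.

```python
# Hypothetical helper for generating quick D&D-style stat block prompts.
# Everything here (field names, wording) is illustrative only.
def stat_block_prompt(name, description, challenge="easy"):
    """Build a reusable prompt asking a model for a 5e-style stat block."""
    return (
        f"Create a D&D 5e stat block for '{name}'.\n"
        f"Description: {description}\n"
        f"Challenge level: {challenge}\n"
        "Include AC, HP, speed, ability scores, and one signature action."
    )

print(stat_block_prompt("Bog Lurker", "a moss-covered ambush predator"))
```

Keeping the prompt in code means each new enemy is one function call instead of retyping the boilerplate every session.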

I realize that there are underlying issues with training AI on others' work, but as a musician myself, and a supporter of open source wherever possible, I feel it's a bit hypocritical for people to get upset about AI "stealing" code or other work that people willingly put out there for free for others to consume. Any artist or coder can "steal" the work of others as inspiration, the same as an AI does; an AI is just much more efficient about it. I do think that most of the corporations pushing some new AI feature, or promising the world or the end of the labor force, are full of shit, and that we are definitely in some sort of AI bubble. But the technology itself is definitely useful in a lot of ways, and if it can be developed on a more localized and decentralized scale (community-owned AI hubs, anyone?), it could actually be a really powerful and beneficial technology for organizations and individuals looking to do more with less.

(page 3) 38 comments
[–] dsilverz@calckey.world 2 points 1 day ago

@rabiezaater@piefed.social @nostupidquestions@lemmy.world

> generating ideas

LLMs don't generate ideas, stricto sensu. That said, I find them useful for esoteric (gnosis through chaos magick) purposes: they can output names and words unbeknownst to the user (this is how I, as an ESL person, learned some words I didn't know before).

But if we consider hard determinism, do we as biological automatons generate ideas, either?

> learn to code

As someone who has coded since childhood, I wouldn't suggest relying on LLMs for that. They can be used to output a descriptive text about some function or library, but you must understand that LLMs are statistical machines: the output text is a chain of "which token is the most probable next?", an auto-complete only slightly "better" than, say, Gboard's. They "hallucinate" precisely because they rely on statistics and randomness.
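That "which token is the most probable next?" loop can be sketched with a toy bigram model; this is nothing like a real LLM (no neural network, a ten-word corpus), just the sampling idea in miniature:

```python
import random

# Toy "next most probable token" demo: count which word follows which
# in a tiny corpus, then repeatedly sample from those observed counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Map each word to the list of words seen right after it.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, n=5, seed=0):
    """Chain up to n sampled next-words, starting from `start`."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # dead end: this word never had a successor
        out.append(rng.choice(candidates))  # sample the "next token"
    return " ".join(out)

print(generate("the"))
```

Every output is a remix of sequences that already occurred in the training text, which is also why the model can chain tokens into sentences that were never verified as true.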

Again: extremely useful as an "Ouija board", not very useful to rely on blindly for learning something, and definitely not reliable for "vibe coding".

Wanna learn how to code? Do the Elliot Alderson (Mr. Robot TV series) approach: find an existing "Hello world" project/source code, tinker with it, change things here and there, try to compile/run, Google the exception the compiler/interpreter threw at you, change more things, break things, then fix the things you broke... This is exactly how I did it. Let go of any hurry and you'll likely master it eventually.
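A minimal starting point in that spirit, assuming Python as the learner's language: run it, then deliberately break it (rename the function, drop a quote) and read the traceback each time.

```python
# A tiny program meant to be broken on purpose.
# Try: misspelling `greet` at the call site, or removing the closing
# quote in the f-string, and read what the interpreter tells you.
def greet(name):
    """Return a greeting for `name`."""
    return f"Hello, {name}!"

print(greet("world"))
```

The point isn't the program; it's building the habit of reading error messages instead of fearing them.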

> d&d [...] I need a character [...] it makes it up quick

Yes, this is one of the use cases where LLMs can thrive, as a die with hundreds of billions of sides.

You may want to roll real dice, convert the numbers into their respective letters (A=1, B=2, ...), then append them to your prompt as a source of real entropy, because the randomness you get from LLMs is likely to be pseudorandom.

Ideally, you'd tune (using an RTL-SDR) to a blank radio frequency and digitize the (true noise) spectrum into ASCII, and voilà: free randomness, straight from the Cosmic Womb to your computer!
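The dice-to-letters step above is simple enough to automate once the physical rolls are entered by hand. A minimal sketch (the function name is mine; rolls outside 1..26 are simply skipped):

```python
import string

# Turn hand-entered die rolls into letters via A=1, B=2, ..., Z=26,
# producing a short string of external entropy to append to a prompt.
def rolls_to_letters(rolls):
    """Map each roll in 1..26 to its letter; ignore out-of-range rolls."""
    letters = []
    for r in rolls:
        if 1 <= r <= 26:
            letters.append(string.ascii_uppercase[r - 1])
    return "".join(letters)

# e.g. three d20 rolls
print(rolls_to_letters([4, 14, 4]))  # -> "DND"
```

Since the entropy comes from physical dice rather than the model, the letters are genuinely unpredictable to the LLM, which is the whole point.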

> get upset about AI “stealing” work with regard to code or other stuff that people willingly put out there for free for others to consume

Totally agree with you in this regard. Throughout history, humans relied on other humans' ideas. Most novelty stemmed from "what if I were to take this flamey thing that consumed the tree I used to sit on, and put it under this food?", mashing up existing things. If we really want to appeal to nature, evolution is exactly that: merging two genetic sequences in an approximate manner while trying to replicate, and yet I don't see humans accusing newborns of "stealing genetic work" from their ancestors.

> definitely useful in a lot of ways, [..] if [...] developed on a more localized and decentralized scale

I totally agree in this regard, too.

To answer the main question: IMHO, people hate AI because it has been pushed and used by corps to further enshittify this world. I'm not Anti-AI, but I'm not pro-AI either. There can be nuance from both.

[–] toebert@piefed.social 2 points 1 day ago

AI is great, LLMs are a waste. This has been the case for years before LLMs.

LLMs, which the current hype calls AI, are the equivalent of a scammy car salesman. To your example of having AI teach you to code: AI is awful at coding. It produces code that is the average of a junior developer's output. It will look awesome from the outside, because it will often mostly work at first, but in reality it's going to be an unmaintainable mess. An experienced engineer could use one and produce a good outcome, in some cases faster than without and in others slower, but the experienced-engineer requirement is a must. What this means is that your AI teacher is itself a junior engineer, whose output even a junior wouldn't trust. That's the level you'll reach, and you may even pick up terrible habits that'll set you back.

It will do all that and consume a ridiculous amount of resources for it compared to following a YouTube course.

I imagine a similar case is true for most industries: people who work in the industry see the absolute garbage coming out of it in large quantities, and have to listen to people from the outside, who don't know what good looks like in that context, keep saying "oh, you are now redundant cuz look how good AI is".

Meanwhile, it is trained on data stolen from the people who are now losing their jobs, because the idiotic decision makers believe in how good the output looks. AND there is more: it's doing all this while wasting a massive amount of resources, which drives up prices for everyone (think of everything that needs chips, and electricity prices). And what money are they using for it? Oh yes! The money generated out of thin air by the corporations inflating this massive AI bubble, which is most likely going to end with a crash that will decimate the market (and therefore people's investments and pensions). And if the past is any indication, the government will prop the companies up with tax money, so people will pay for it twice.

[–] TranquilTurbulence@lemmy.zip 1 points 1 day ago (3 children)

Judging by the comments, I would say that most Lemmy users are aware of the downsides of LLMs. The average GPT user probably hasn't heard of half the points mentioned in these comments.

Judging by the downvotes, I would say that many Lemmy users are also very passionate about it. The average GPT user might think of LLMs like any other tool.

Unfortunately, I get the feeling that Lemmy isn't a suitable place for having a serious conversation about AI in general (not just LLMs). I would love to have that conversation, but this just isn't the place for it, as you can see. The people here seem to be too focused on LLMs, how they're developed and how they're forcibly implemented in places where they provide zero value etc. AI in general is such a broad category, and this kind of biased conversation misses 90% of it.

When you say AI, people hear LLM, and that's a genuine problem. When people say they hate AI, they probably aren't thinking of things like image search, optical character recognition, automatic categorization of your bank transactions, signal processing in audio and video, image upscaling, frame generation, design of 3D structures, route planning, etc. There's so much you can do with AI, but Lemmy users rarely mention those.

[–] schnurrito@discuss.tchncs.de 1 points 1 day ago (2 children)

I don't, not in general.

There are good and bad uses of AI. For example, I used AI to generate my profile picture here on Lemmy (would you have noticed?). In general, the creation of art is one of the best uses of AI I can think of; it doesn't have serious consequences if it goes wrong, and a human can easily review whether it looks as it should.

But using AI to make actually meaningful business decisions without any human review at all? Using AI for customer service? Any company that does that deserves VERY negative consequences.

I don't agree with talking points like "AI companies should be required to pay copyright holders of their training data" or "AI is bad because of the environmental impact" or "AI is bad because of RAM prices" or "AI companies should be legally responsible for any mistakes the AI makes (such as libel or encouraging users' suicide)" or such things; I think all of these are nonsense.

I believe in general that AI gets too much attention in the media. It's really not that impactful.

[–] cheese_greater@lemmy.world 2 points 1 day ago (2 children)

There has to be a liability standard though, otherwise it completely does away with any possibility of even nominal accountability. If harm is caused by a human, there is liability (whether directly, or for whoever is responsible for that person's actions). The same should be true for whoever employs an LLM for some purpose that results in harm. The LLM can't be jailed or "shut down", really; it's incumbent upon the handler to assume liability for the activities they are involved with.

[–] rabiezaater@piefed.social 1 points 1 day ago

Glad to see some sanity for once on here. It's definitely not all good, but it's not all bad either, and when people attribute all the evils of the world to it, they are being disingenuous.
