this post was submitted on 18 Mar 2026
-30 points (30.3% liked)

No Stupid Questions


No such thing. Ask away!

!nostupidquestions is a community dedicated to being helpful and answering each other's questions on various topics.

The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:

Rules
Rule 1- All posts must be legitimate questions. All post titles must include a question.

All posts must be legitimate questions, and all post titles must include a question. Questions that are joke or trolling questions, memes, song lyrics as title, etc. are not allowed here. See Rule 6 for all exceptions.



Rule 2- Your question subject cannot be illegal or NSFW material.

Your question subject cannot be illegal or NSFW material. You will be warned first, banned second.



Rule 3- Do not seek mental, medical, or professional help here.

Do not seek mental, medical, or other professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.



Rule 4- No self promotion or upvote-farming of any kind.

That's it.



Rule 5- No baiting or sealioning or promoting an agenda.

Questions which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.



Rule 6- Regarding META posts and joke questions.

Provided it is about the community itself, you may post non-question posts using the [META] tag on your post title.

On Fridays, you are allowed to post meme and troll questions, on the condition that they are in text format only and conform with our other rules. These posts MUST include the [NSQ Friday] tag in their title.

If you post a serious question on a Friday and are looking only for legitimate answers, then please include the [Serious] tag in your post title. Irrelevant replies will then be removed by moderators.



Rule 7- You can't intentionally annoy, mock, or harass other members.

If you intentionally annoy, mock, harass, or discriminate against any individual member, you will be removed.

Likewise, if you are a member, sympathiser, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you were provably vocal about your hate, then you will be banned on sight.



Rule 8- All comments should try to stay relevant to their parent content.



Rule 9- Reposts from other platforms are not allowed.

Let everyone have their own content.



Rule 10- Most bots aren't allowed to participate here. This includes using AI responses and summaries.



Credits

Our breathtaking icon was bestowed upon us by @Cevilia!

The greatest banner of all time: by @TheOneWithTheHair!

founded 2 years ago

I get some of the surface level reasons, and those annoy me too. Cramming AI into everything is dumb and unnecessary.

However, I do feel that at a deeper level it has a lot of useful applications that will absolutely change society and improve the efficiency and skills of those who use it. For example, if someone wants to learn to code, they could take a few different paths. There are the traditional paths: read books, or go to school and learn that way. You could pay for a bootcamp or an online coding education platform. Or you could just tell an AI chatbot you want to learn to code, have it become your teacher, and have it correct any errors you make in real time.

Another application is generating ideas or quick mock-ups. Say I'm playing a game of D&D with friends. I need a character avatar, so I just provide a description to the AI and it makes one up quickly. It might take a few prompts, but it usually does a pretty good job. Or if I have a scenario I need a few enemies for, I can just provide a description of those enemies and have a quick stat block made up for them.

I realize that there are underlying issues with regard to training AI on others' work, but as a musician myself, and a supporter of open source whenever possible, I feel it's a bit hypocritical for people to get upset about AI "stealing" code or other work that people willingly put out there for free for others to consume. Any artist or coder can "steal" the work of others as inspiration for their own, the same as an AI does; an AI is just much more efficient about it. I do think that most of the corporations pushing some new AI feature, or promising the world or the end of the labor force, are full of shit, and that we are definitely in some sort of AI bubble. But the technology itself is useful in a lot of ways, and if it can be developed on a more localized and decentralized scale (community-owned AI hubs, anyone?), it could actually be a really powerful and beneficial technology for organizations and individuals looking to do more with less.

[–] SuspciousCarrot78@lemmy.world 2 points 3 hours ago* (last edited 2 hours ago) (1 children)

I don't think people hate AI per se - they hate big tech, and what big tech is doing with it. That's a legitimate gripe, but it's not the same thing as the technology being bad.

AI used well can be genuinely useful. I've dropped a couple of examples in other threads I won't rehash here, but the short version is: there are real world uses for this tech (world modelling, medicine, robotics).

Remember, the 2024 Nobel Prize in Chemistry went to the team behind AlphaFold. That's the AI that cracked protein folding, a problem biology had been stuck on for 50 years. It's already being used by over two million researchers to accelerate drug discovery for cancer, Alzheimer's, and antibiotic resistance.

Hell, my dumb ass built a clinical notes pipeline that takes the tedium of charting from 15-20 minutes down to about 3, with a policy gate that rejects LLM output before it ever reaches me if it fails criteria I defined. None of that looks anything like the slop-firehose corporate rollout most people are reacting to.
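The commenter's actual pipeline isn't shown, but a "policy gate" of this kind can be as simple as a function that refuses any draft failing hard, human-defined checks before a reviewer ever sees it. The section names and criteria below are invented for illustration, not the commenter's real rules:

```python
# Minimal sketch of a policy gate: reject LLM drafts that fail
# hard checks before they reach the human reviewer.
# All criteria here are illustrative placeholders.

REQUIRED_SECTIONS = ["Subjective:", "Objective:", "Assessment:", "Plan:"]
BANNED_PHRASES = ["as an ai language model", "i cannot"]

def policy_gate(draft: str, max_chars: int = 4000) -> tuple[bool, list[str]]:
    """Return (accepted, reasons_for_rejection)."""
    reasons = []
    for section in REQUIRED_SECTIONS:
        if section not in draft:
            reasons.append(f"missing section {section!r}")
    lowered = draft.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            reasons.append(f"banned phrase {phrase!r}")
    if len(draft) > max_chars:
        reasons.append("draft too long")
    return (not reasons, reasons)
```

Rejected drafts can be sent back for regeneration instead of being silently passed through, which is what makes this different from just trusting the model.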

Worth noting too: taking a black-and-white position on anything is just less cognitively expensive than arriving at a nuanced one. That's not a character flaw, that's called "being human". But that doesn't mean the nuanced position is wrong.

PS: The electricity/water data centre stuff is maybe more complicated than the headline takes suggest. This might be worth actually reading before treating it as settled.

https://blog.andymasley.com/p/a-cheat-sheet-for-conversations-about
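For scale, this is the kind of back-of-envelope arithmetic that cheat sheet encourages. The per-query figure below is a commonly cited but contested public estimate (some newer estimates are roughly an order of magnitude lower), not a measured fact:

```python
# Rough scale comparison using contested public estimates:
# ~3 Wh per LLM chat query vs ~30 kWh/day for an average US household.
wh_per_query = 3.0
household_wh_per_day = 30_000.0

# How many chat queries equal one household-day of electricity?
queries_per_household_day = household_wh_per_day / wh_per_query
print(int(queries_per_household_day))  # → 10000
```

The point isn't the exact number; it's that per-query costs and aggregate data-centre buildout are very different questions, which is what the linked post argues at length.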

YMMV and ICBW

[–] rabiezaater@piefed.social 2 points 2 hours ago (1 children)

Good resource there on energy consumption, thanks for sharing. I had heard some things about the energy use being overstated, or over-focused on, but that is a very comprehensive outline of the overall impact.

[–] SuspciousCarrot78@lemmy.world 1 points 2 hours ago

Hope it helped.

[–] hesh@quokk.au 41 points 1 day ago* (last edited 23 hours ago) (11 children)

Kills the planet

Steals from artists

Widens inequality

Puts people out of work

Reinforces prejudices

Makes us stupid

Makes everything generic

Blows up the economy

Supports oligarchs

Can't be trusted, hallucinates and lies

Overhyped & overpromised

Can't generate outside of its training data

Is creating obscene surveillance state

Used in weapons to kill

Made computer components expensive

Ruined the internet with slop

Replaces human interaction

Just annoying

[–] peepeepoopoo@hilariouschaos.com 2 points 22 hours ago* (last edited 22 hours ago) (1 children)

As a thought experiment I considered all of these points and here are my thoughts.

Kills the planet

Got me there. That stupid datacenter crap where they need 1000 TB of RAM, a zillion RTX 5090s, and an entire nuclear power plant just to generate a chocolate chip cookie recipe needs to fucking go. Self-hosted AI isn't that bad, though. You can still argue that running self-hosted koboldcpp on a 10-watt Raspberry Pi ALSO destroys the planet, but so does all technology. Imagine living with no A/C, no deodorant, no running water, no toilet paper, just to make the earth livable for an additional 100 years or whatever. Fuck that. I chose not to have kids, so I'm still doing my part, which is more than the majority of the population can be arsed to do.

Steals from artists

I don't really understand this argument, despite it being the most common anti-AI argument. What type of art is AI really capable of replacing humans at? Hentai and video game 3D model textures? It's useless at making 3D models, even to the most fanatic of AI worshippers. I can watch porn on Pornhub for free and would never, and have never, commissioned a human artist to make porn pictures for me. Am I stealing from hentai artists by not commissioning them and choosing other means of looking at boobs?

Buying textures for your hand-made 3D models only supports the corporation selling them; the original artists get very little, if anything at all. Using AI to circumvent spammy price gouging for 3D model textures seems like a better way to fight back to me. Another point is that copyright trolls are always harassing random YouTubers over bullshit claims, which DOES destroy livelihoods. Using AI to create a unique illustration that isn't registered in a copyright-strike database, when you REALLY weren't going to pay a $20 license for some spammy corporate licensed art either way, seems like a legitimate use of AI to me.

Another thing is memes. I would 100%, absolutely, positively, never ever in a million years commission a human artist for the hundreds of dollars it usually costs to make an illustration for a shitpost meme. Yet people get out their torches and pitchforks any time someone uses AI in a shitpost. I just don't get it. It's the "pirating software STEALS money from developers" argument all over again. Is it REALLY stealing if you WEREN'T going to pay for it otherwise? In 2018 the average person online was practically up in arms over how unfair copyright law is, and everyone dropped that to hate AI instead. Seems a little too convenient if you ask me. I think a lot of people have been played.

Widens inequality

Employers using AI to screen out the applicants who aren't desperate enough, and are therefore less likely to submit to abnormally cruel or illegal terms, could be an example of this. Employers in America generally have too many freedoms in the first place. We aren't going to get out of this downward spiral of wages not keeping up with the cost of living without doing some things that would be really unpopular with all the powerful people in charge of it all. I'm not sure they need AI to continue colluding to treat us all like trash. It will eventually devolve into all-out violence if no one forces them to stop, AI or not.

Also facial recognition cameras, more about that further down.

Puts people out of work

I don't have any good supporting or opposing arguments for this one, because I don't know of any strong examples. AI is 1000% shittier than a human at any given task, for 0% of the cost, which is enough to keep an American corporation satisfied for most purposes, at least in theory.

Reinforces prejudices

I'm not going to be like "provide examples or it doesn't count", because it's lame and stupid when people do that, but my best guess is that this one is talking about how AI can be used to reinforce white nationalist ideology online in bot swarms and such. An AI can generate pro-christo-fascist propaganda just as easily as it can generate pro-democracy propaganda. I wish we could harass christian-nationalist types online with AI, but it seems to be only the bad guys doing it. Go on Reddit and say anything positive about marijuana in any context besides "my grandma is dying of cancer and marijuana allows her to not be in pain", and you will have people telling you to grow up and stop being a piece of shit. Meanwhile, you can speak out in support of bombing poor people in the Middle East and no one bats an eye. Why can't we harass the piece-of-shit people with AI? I guess you got me on this one. It only gets used for spreading christian nationalist ideology, for some reason. But this COULD change.

Makes us stupid

A few days ago I used a self-hosted AI to help write a Python script that runs object recognition on the CCTV cameras on my home network, and it only took an afternoon. It would have taken longer if I truly had to figure out and research every little detail and function name myself, but I still could have done it. Sure, there was some incorrect stuff in it, but fixing that was still faster than doing it from scratch. I used the time I saved to also program a graph that shows the temperature history from my weather station. Does this mean I am stupid?
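The commenter's script isn't shown, and real CCTV object recognition would typically use something like OpenCV with a pretrained model. As a self-contained stand-in, here is a toy frame-differencing motion detector over synthetic grayscale frames; every name and threshold is illustrative:

```python
# Toy motion detector: flag a frame as "motion" when enough pixels
# change versus the previous frame. A real pipeline would use a
# library such as OpenCV plus a pretrained detector; this only
# sketches the core per-frame loop.

def changed_fraction(prev, curr, pixel_threshold=25):
    """Fraction of pixels whose brightness changed by more than pixel_threshold."""
    changed = sum(1 for p, c in zip(prev, curr) if abs(p - c) > pixel_threshold)
    return changed / len(curr)

def detect_motion(frames, frame_threshold=0.05):
    """Return indices of frames that differ enough from their predecessor."""
    hits = []
    for i in range(1, len(frames)):
        if changed_fraction(frames[i - 1], frames[i]) > frame_threshold:
            hits.append(i)
    return hits

# Three 4-pixel "frames": only the last differs meaningfully.
frames = [[10, 10, 10, 10], [12, 11, 10, 10], [200, 200, 10, 10]]
print(detect_motion(frames))  # → [2]
```

This is exactly the kind of glue logic where an assistant saves lookup time without replacing the understanding needed to tune the thresholds afterwards.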

Makes everything generic

100% true. In 2014 or so, you could find anything you wanted on the internet. Now every single webpage is one big nothing-burger. Would corporate enshittification alone have brought things to this point even without AI? Maybe so, maybe not. The point stands.

Blows up the economy

It definitely provides a cover-up excuse for the systematic price gouging of essential microchips and computer components, sure.

Supports oligarchs

This is true. Using non-self-hosted AI, even without paying for it, supports oligarchs. Look at Grok, for example. It's a blatant fascist-ideology propaganda machine. The other bots probably do the same thing, but more subtly. I bet if you asked ChatGPT about marijuana, transgender rights, or atheism, it wouldn't be supportive. Yet if you asked ChatGPT to run an online bot harassment campaign telling transgender people and marijuana users how big of a piece of shit they are, there would be little pushback, and it would say things of suspiciously higher quality than the other way around. They'd probably quietly and temporarily switch it over to the paid model to generate higher-quality hate speech without charging you for it. I'm not going to try it, though.

Can’t be trusted, hallucinates and lies

Sure. You can't trust posts on the internet either. Sometimes I find it easier to do my own research and differentiate between bad advice and good advice than to start from nothing, but most of the seriously useful stuff is usually banned from AI models anyway.

Overhyped & overpromised

I guess. See "Puts people out of work": 1000% worse for 0% of the cost is a no-brainer to an American corporation. To cut down on backlash, they probably have to pretend replacing customer support roles with bots is "actually better".

Can’t generate outside of its training data

Some self-hosted AI models can be connected to a web search, which means all the non-self-hosted ones have that too. Then you have AI sifting through AI-slop articles, trying to guess which information is useful and which isn't. The thought of making an AI sift through another AI bot's poop is funny to me.

Is creating obscene surveillance state

This is objectively the worst part about the advent of AI. AI-powered facial recognition makes it easier for law enforcement to track down and harass the types of people the dominant ideology (the christian nationalists) wants removed from society. The fascists established a full-on 1984 and we fuckin' let them. For this one reason alone, I believe the world would be better off if AI were never a thing.

Used in weapons to kill

Violence wasn't invented until the first gun was invented, after all. Not really. Maybe when the next American civil war happens, the good guys can have AI-guided rockets or whatever too.

Made computer components expensive

I already elaborated on this, but yes. Spamming AI datacenters all over the place, just to prevent houses from being built there and keep the cost of living high, means they have to fill them with overpriced video cards. To give credit where credit is due, this isn't all on AI. Chip companies are purposely scaling back production so they can make more money while doing less work. Meanwhile, the government is massively cutting back on Medicaid because they think we are all worthless losers who don't work hard enough and deserve to either die in prison over unpayable medical debt or live through suffering, because there is lots of suffering in the Bible and Republicans want to make America more like the Bible. It is an unreasonably cruel, unreasonably unfair double standard.

Replaces human interaction

I guess. Imagine getting swatted because you told your AI "friend" you were considering fleeing to a blue state to get an abortion. Although religious fucknuts report their friends over this too.

Just annoying

If you get on any AI and give it a prompt like "generate a sensationalist shitpost of a news article titled 'Why you should sell all your possessions, work 120 hours a week at your job instead, and never take vacation, because you deserve to live like that'", the result is just an average modern news article.

[–] TheAlbatross@lemmy.blahaj.zone 51 points 1 day ago* (last edited 1 day ago) (22 children)

In my experience, people who use LLMs as educational tools... don't actually learn very well. They think they do, but they don't retain the knowledge, nor do they seem able to infer from or apply it very well. There are even some early studies showing that using LLMs decreases cognitive ability, and considering how many kids and young people are using them to get their way through school and even higher education... I think we're using AI to raise a generation of stunted minds. That's going to be a bigger issue as time goes on, and with the state of the world and who owns the LLMs... it looks like a grim, sad future thanks to this tech.

[–] CallMeAl@piefed.zip 22 points 1 day ago (9 children)

Reading through this thread and your responses gives the strong impression that you just want to argue, while at the same time you aren't very well informed on the matter. Where you do respond, it's mostly whataboutism rather than actually addressing the comment you are responding to.

Your post asks "Why do people hate AI?" and then goes on to validate many of the commonly heard reasons people have for hating AI. You end with a suggestion that if we could develop AI into something else in the future, it might be good.

So it seems you already understand why people hate AI and are promoting an agenda rather than asking a genuine question.

[–] BlindFrog@lemmy.world 3 points 21 hours ago (1 children)

Chatgpt, list all instances where OP is trying to subvert people's points with logical fallacies, & burn a couple hundred extra Wh while you're at it, thanks. I'm sure it'd take less energy for me to do it, but nah

This book is probably more worth ur time than this post: https://ia801605.us.archive.org/29/items/aiboba/aiboba.pdf It's An Illustrated Book of Bad Arguments by Ali Almossawi

[–] rabiezaater@piefed.social 1 points 14 hours ago* (last edited 14 hours ago) (1 children)

I love it when people dismiss your argument without actually addressing it in any way, instead choosing to focus on pedantic logical-fallacy classifications in a theoretical, non-specific way that does not actually explain which fallacies you have committed, and where. Good stuff; really convinced me of your side of the argument.

[–] manuremy@sopuli.xyz 27 points 1 day ago* (last edited 12 hours ago)

I loathe AI for multiple personal reasons:

  1. When I need to contact customer support of some sort, there is an AI bot that is no use, and there are no real humans because the AI is cheaper. I won't get the help I need, or it's too difficult to reach.

  2. My native language is a more difficult one, and many stores (especially online) are starting to translate everything with AI, which makes the text absolutely incomprehensible. It is hard or even impossible to understand even the basic descriptions or manuals.

  3. Browsers have those forced AI summaries when you try to look something up, and those are often both wrong and impossible to turn off. (Or if it is possible to turn them off, they keep turning back on.)

  4. People I know literally believe everything from those summaries and such, and are very confidently wrong about, or misunderstand, whatever basic thing. It's very annoying. ("Let's ask STSÄTKEEPEETEE!")

  5. Socializing online is becoming frustrating, as I have been accused of being an AI bot on multiple occasions just because of the way I write in English. Knowing even basic grammar makes you a bot these days.

[–] chunes@lemmy.world 2 points 21 hours ago

Mostly anti-intellectualism and ego, as far as I can tell. Also, conflating someone's business practices with a technology.

[–] givesomefucks@lemmy.world 23 points 1 day ago

It's burning the environment down, destroying the shambles of the global economy, and being constantly shoved down everyone's throats, even though it's only impressive to people who don't understand it.

[–] Oka@sopuli.xyz 20 points 1 day ago* (last edited 1 day ago) (11 children)

Imagine a shitty robot was just made available for free.

The shitty robot replaces you at work. It performs way faster with worse results, but the company hires a robot "expert" who fixes the results just enough that the product appears to be working. (It's not.) You are now starving.

The shitty robot tells your kids that porn is a viable career path. And that they should kill themselves.

The shitty robot starts showing up everywhere, in advertising, TV shows, customer support lines, schools.

The shitty robot makes shitty art really fast, which people can sell or use how they want. Artists are now starving.

Imagine the shitty robot is now interviewing you for your next job.
