People have different opinions on AI; not everyone is vehemently opposed, and some view it as useful if used in the appropriate context.
Honestly, the problem when talking about "AI" is how many different things it can mean.
- General AI chats
- Coding agents
- Automated pentesting/vulnerability discovery
- Image/video/music generation
- Grammar checking
- Automated support agents (phone or chat)
- Autonomous weaponry
and so many more. Being pro-AI could mean you like one or two applications of AI but are against the others. I know very few people who like it for media generation. However, a number of long-standing vulnerabilities in very popular open source projects were only recently discovered with its help. That seems like a pretty undeniable use case demonstrating its usefulness.
Then of course there's governments that want to get their greedy blood thirsty hands on it to create autonomous weaponry. So now if you try to defend AI for a use case like defensively finding program vulnerabilities you somehow also have to defend AI weaponry?
A generic AI model is very powerful and can either be used to grow yourself or abused so your brain doesn't have to work at all. You can use AI to do the hard work for you, or use it as a personal tutor to guide you toward what to learn. People will of course mention hallucinations as a reason it can't be used to learn, but you don't have to take AI at its word. If you ask it to create a lesson plan for a subject (what to study, in what order, and which resources are available), you can do all of the actual learning using content the AI has no control over. So what you do with that is up to the person, and opinions on it are going to vary wildly.
Some people argue no use case is okay given the various concerns about energy and water usage, and where those models sourced their training data. Not to mention that if you support AI, you must be supporting the AI companies. I agree there are concerns about the environmental impact, and the training data discussion is a long one on its own. However, I do think you can support AI as a technology without being okay with how it is currently practiced in regards to environmental impact. And given that AI can run on a local machine, I don't think it has to be tied to big tech at all.
"AI" is such a wide and immense topic. And what we talk about with AI today will not be relevant come next year with how quickly it is developing. We shall see if some form of Moore's law applies to the growth of AI as far as efficiency and quality go.
One of the first things I say when non-tech people ask me about "AI" is:
"The term AI here is just marketing wank"
Pro-AI people are a small minority in my experience, but are generally overrepresented in the tech geek communities that make up the majority of users on the fediverse. Anecdotally, I think that the vast majority of people are indifferent about AI, some of them may find it to be a novel replacement for web searching, but almost nobody is interested in paying for generative AI (as evidenced by the AI companies hemorrhaging cash). If you were to ask on a more creativity-centric community, you would find that anti-AI sentiment is near ubiquitous amongst the working creative class.
Sadly, there is a significant number of untalented and brainless fools who use unethical corporate AI models as a crutch to compensate for their lack of real-world skills and relationships.
But for as many people as there are that claim to be pro-AI, you simply don't see people actively seek out AI-generated art, music, videos, or stories. I would argue that most consumers of AI content are people who have been unwittingly duped into reading/watching/listening to it.
For reasons I can't quite understand, some AI fans are also deluded into believing that AI will somehow usher in a post-capitalist utopia, despite the obvious fact it is only further empowering and enriching the most wealthy tech companies and the oligarchs that control them.
AI psychosis is a documented problem.
Finally, pro-AI people are infinitely more likely to use AI to generate spam and propaganda in support of their worldview than people who are against it. Are we supposed to believe people that have AI girlfriends are above using AI to write bogus posts and comments?
Also, for reasons I can’t quite understand, some AI fans are also deluded into believing that AI will somehow usher in a post-capitalist utopia, despite the obvious fact it is only further empowering and enriching the most wealthy tech companies and the oligarchs that control them.
Elon Musk is making his typical wild promises again, this time about AI leading to UBI and abundance for everyone ... as he makes money from xAI, of course.
This is nothing new, actually; the same thing happened during the crypto boom.
There's slop users (autoclankers) and then there's researchers or developers actually doing the same stuff they've been doing for 5+ years.
I think it just seems that way because there's always a clash on practically every post.
Some people don't see the inherent flaw in outsourcing their thinking to a cloud model, or the massive economic bubble they are helping to create.
But some people are doing some genuinely interesting things that would have otherwise been impossible several years ago just because AI and model training research got a huge boost for everyone the past few years.
My personal favorite is a drone that rapidly identifies and counts produce plant quality, output, issues, etc. for large farms with some brand-new image models, and it costs about as much as maybe a new toolbox. No one wants to manually weed through hundreds of acres to count buds and try to catch problems before it's too late. It's a great upgrade from doing random samples that miss a lot of data.
On the other hand, those opposed to AI also have a subgroup that wants anything and everything with AI in the name dead, without any regard to what it is or what it does.
It's like when you throw world and ml users into one post. They both think the other is louder, and also the big dumb lol.
On the other hand, those opposed to AI also have a subgroup that wants anything and everything with AI in the name dead, without any regard to what it is or what it does.
This might be a bit of a hot take, but I don't really see anything inherently wrong with this. The scientists and engineers will continue doing their serious work regardless of public opinion, and while some of them may have tangentially benefited from increased interest and funding in the field, most of it is going to these corporate LLM models, which are taking up all the oxygen in the room.
That's a bubble that needs to burst. I think it's more important to keep public sentiment rightfully focused in that direction. Let's face it, you're really not going to be able to educate the general public on these nuances. The field at large will persist regardless.
If you don't differentiate and keep the two in the same pot, you won't be able to fund research into the useful stuff. It's true that consumer hype and research funding decisions are not the same, but they may be indirectly linked. A public fund may fear public outrage if it continues funding millions into AI projects, even when they're not LLM-related.
So the reputational damage may affect viable, net-positive applications.
I suppose it's due to many people not seeing things as black or white, but as a variety of grays.
The kind of people who make hating AI part of their identity are pretty rare in the real world. Lemmy just creates the illusion that this loud minority's views are way more common than they actually are.
And as always, the "pro-AI" people aren't as much for it as the haters are against it. It's not a binary thing between the two extremes. Every real person I've talked to about AI has had a pretty neutral view on it and is usually well aware of its limitations. Even the ones who lean heavily on it aren't as passionate about it as the haters are.
I haven't talked to a lot of people about AI, but I'm extremely skeptical, and my wife, who isn't usually dialed into this sort of thing, fucking hates it. I'm not sure how that plays out across the general populace, but I'm inclined to think it's pretty unpopular.
Bots are trying to gaslight you into thinking that slop acceptance is inevitable. It's just bullshit. Everyone hates slop art. Everyone hates slop music. Everyone hates slop text. Everyone hates forced slop integration.
The only people that like AI are the people that own the chatbots that want to deskill you.
The kind of people who make hating AI part of their identity are pretty rare in the real world. Lemmy just creates the illusion that this loud minority's views are way more common than they actually are.
Yup, essentially every office worker at my company is pro-AI, whereas shop workers have a bit more disdain for it.
I got asked to organize shop drawings into categories so that they can feed their LLM data on the different types of products we produce. So long as it's not someone's personal information, it genuinely doesn't bother me.
It seems like it's usually just one person posting over and over or making alts (I assume, based on the fact they just reiterate the same arguments), rather than a coordinated effort.
I assume, based on the fact they just reiterate the same arguments
I saw someone else make this same argument. Can't believe you made an alt to post it again.
It's a sad state of affairs. Post anything that goes against the grain, and you must be a bot or part of a coordinated attack...
Some people are unlearning the fact that different opinions exist :'(
Current AI is unsuitable, but automation of some kind (maybe not AI) will be necessary for a nearly workless future. Life is kind of dumb as is; it would be better if we spent time in the gym, or doing yoga, or learning something, instead of spending life in the pesticide factory and then dying after 3 years of retirement from a horrific disease.
I hardly ever see them. I love being able to just set my home feed to subscribed communities.
The community is pretty split. I know a lot of people are going to think the accounts are bots. Maybe they are.
But I've met people in real life that truly believe in LLM AI solving all their problems. It's not true, but that's what they believe.
The fuckai crowd has always been a vocal minority, amplified by Lemmy’s small userbase. It was never going to last as the default message being heard.
Personally I think LLMs are pretty useful and run them on my PC occasionally. I’m more of a Fuck Corporate Datacentres kinda person.
I'm with ya there. I think a lot of the valid hate towards AI is actually for Big Tech companies and data centers.
I support folks running less powerful locally hosted models on their own hardware, which I also do myself!
I have been expecting there to be some softening and some people who use AI for coding on the DL here. It really has gotten significantly more common to at least try out tools like Claude Code. But those people aren't writing articles like that and I'm not seeing them.
I've been encouraged to use Claude Code for work, and by a lot of genuinely very talented engineers. It's absolutely overhyped if you look at Twitter tech bros, and absolutely underhyped if you only read Lemmy.
AI has already been demonstrated to be a tool that largely benefits fascists and oligarchs. It is not a question at this point. At this point, all of the AI evangelists are either extremely stupid or fascists themselves.
Maybe it's just that the world isn't as uniform in its anti-AI opinion as you imagine it to be? Social media inherently forms bubbles, and smaller platforms like the Fediverse even more so than most. As the Fediverse grows, opinions are likely to become more diverse.