Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
Rules:
1) Be nice and have fun
Doxxing, trolling, sealioning, racism, toxicity and dog-whistling are not welcome in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them.
2) All posts must end with a '?'
This is sort of like Jeopardy: please phrase all post titles as a proper question ending with a '?'.
3) No spam
Please do not flood the community with nonsense. Suspected spammers will be banned on sight. No astroturfing.
4) NSFW is okay, within reason
Just remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed; please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com.
NSFW comments should be restricted to posts tagged [NSFW].
5) This is not a support community.
It is not a place for 'how do I?'-type questions.
If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions check our partnered communities list, or use the search function.
6) No US Politics.
Please don't post about current US politics. If you need to do this, try !politicaldiscussion@lemmy.world or !askusa@discuss.online.
Reminder: The terms of service apply here too.
I am generally pro-AI and far from being a teenager. The important difference is that I can acknowledge that not everything is fine.
For example, I see no harm in self-hosting your LLM, or in things like Horde AI, where people share their resources for image generation. I also think that machine learning in itself is simply a technique like any other CS method.
I DO see harm in the way Corpos try to push LLMs into every nook they can find (nothing wrong with experimenting in-house, but a lot wrong with rolling out "features" no one wants or needs, and which can never be cost-effective). It's also very much not fine how this technology has been marketed (especially by Sam Altman) as more than a small stepping stone towards AGI: the expectation (and the corresponding real-life consequences) that LLMs in their current state can actually replace human workers does not hold up to a critical look. The current push away from open-source models is also a bad thing.
All of this leads to very unhealthy things: unsustainable growth and building of datacenters that will probably not be needed in this form, elimination of entry-level (or worse, even senior) jobs, which will bite the industry in the ass sooner or later, and people who seemingly aren't capable of seeing that production of language isn't the same as intelligence or consciousness.
I have said before that I would suggest a UNESCO-funded open model with an opt-out for anyone who doesn't want their data in this "world heritage model". It's the world's combined output that is used, so it should be free for private use, with licensing terms for organizations funding a stipend / UBI for the arts.
Edit: I don't really care about the downvotes, but I would prefer articulated responses over knee-jerk reactions.
I've seen that some anti-AI folks get very irrationally angry when you don't share their hate towards AI. It often degenerates into personal attacks, as in "how could you be so stupid as to not share my views?", which translates into the knee-jerk reactions you mention. There's no searching for nuance.
As for the articulated response: I agree with you, though. I'll copy a comment from another thread that I think shines a really valuable light on the issue: