this post was submitted on 17 Oct 2025
990 points (98.6% liked)

Technology

[–] SpaceCowboy@lemmy.ca 32 points 13 hours ago (2 children)

If this AI stuff weren't a bubble, and the companies dumping billions into it were capable of any long-term planning, they'd call up Wikipedia and say "how much do you need? We'll write you a cheque."

They're trying to figure out nefarious ways of getting data from people, while Wikipedia literally has people working to create high-quality data, for a relatively small amount of money, that's very valuable to these AI companies.

But nah, they'll just shove AI into everything and blow the equivalent of Wikipedia's annual budget in a week on electricity alone to shove unwanted AI slop into people's faces.

[–] nova_ad_vitum@lemmy.ca 4 points 5 hours ago

> But nah, they'll just shove AI into everything and blow the equivalent of Wikipedia's annual budget in a week on electricity alone to shove unwanted AI slop into people's faces.

You're off by several orders of magnitude, unfortunately. Tech giants are spending the equivalent of the entire fucking Apollo program on various AI investments every year at this point.
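For a rough sense of scale (every figure below is a ballpark assumption for illustration, not a sourced number):

```python
# All figures are rough, order-of-magnitude assumptions:
wikimedia_budget = 180e6     # Wikimedia Foundation annual spend, ~USD
ai_capex = 250e9             # combined annual AI capex of the big tech firms, ~USD
apollo_total = 280e9         # Apollo program total, ~USD, inflation-adjusted

budgets_per_year = ai_capex / wikimedia_budget
print(f"~{budgets_per_year:,.0f} Wikipedia annual budgets per year")
print(f"~{budgets_per_year / 52:,.0f} Wikipedia annual budgets per week")
print(f"~{ai_capex / apollo_total:.1f} Apollo programs per year")
```

On these assumptions the spend is closer to one Wikipedia budget every few hours than one per week, which is the point being made here.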

[–] Suffa@lemmy.wtf 15 points 11 hours ago (1 children)

Because they already ate through every piece of content on Wikipedia years and years ago. They're at the stage where they've trawled nearly the entire internet and are running out of new content to find.

[–] fishy@lemmy.today 8 points 8 hours ago

So now the AI trawls other AI slop, so it's essentially getting inbred. They literally need you to subscribe to their AI slop so they can get new data directly from you, because we're still nowhere near AGI.
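The "inbreeding" effect has a name, model collapse, and a toy simulation shows the mechanism (this is purely an illustration, nothing like a real training pipeline): fit a model to data, sample from the model, fit the next model to those samples, and repeat.

```python
import random
import statistics

def next_generation(samples):
    """Fit a Gaussian to the samples, then produce the next 'training
    set' by sampling from the fitted model instead of from reality."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, sigma) for _ in samples]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(10)]  # original "human" data
spread0 = statistics.stdev(data)

for _ in range(300):        # each model trains on the previous model's output
    data = next_generation(data)

spread = statistics.stdev(data)
# The distribution narrows generation after generation, so the rare
# knowledge in the tails is what disappears first.
```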

[–] utopiah@lemmy.world 35 points 15 hours ago (3 children)

(pasting a Mastodon post I wrote a few days ago about StackOverflow, but IMHO it applies to Wikipedia too)

"AI, as in the current LLM hype, is not just pointless but rather harmful epistemologically speaking.

It's a big word, so let me unpack the idea with one example:

  • StackOverflow, or SO for short.

SO is cratering in popularity. Maybe it's related to the LLM craze, maybe not, but in practice fewer and fewer people are using SO.

SO is basically a social network for software developers that goes like this:

  • hey, I have this problem, I tried this and it didn't work, what can I do?
  • well (sometimes condescendingly), it works like this, that worked for me, and here is why

Then people discuss via comments, answers, votes, etc. until, hopefully, the most appropriate (which does not mean "correct") answer rises to the top.

The next person with the same, or similar enough, problem gets to try right away what might work.
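That vote-driven flow can be sketched in a few lines (a simplification; SO's real ordering has more rules, and the field names here are made up):

```python
def rank(answers):
    """SO-style ordering, simplified: the accepted answer first,
    then everything else by net vote score."""
    return sorted(answers,
                  key=lambda a: (a["accepted"], a["up"] - a["down"]),
                  reverse=True)

answers = [
    {"text": "just use a regex", "up": 3, "down": 5, "accepted": False},
    {"text": "parse it properly", "up": 40, "down": 2, "accepted": False},
    {"text": "asker's own fix", "up": 12, "down": 1, "accepted": True},
]
best = rank(answers)[0]["text"]   # the answer the next visitor tries first
```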

SO is very efficient in that sense but sometimes the tone itself can be negative, even toxic.

Sometimes the person asking did not bother to search much; sometimes they clearly have no grasp of the problem, so replies can be terse, if not worse.

Yet the content itself is often correct in the sense that it does solve the problem.

So SO in a way is the pinnacle of "technically right" yet being an ass about it.

Meanwhile, what if you could get roughly the same mapping between a problem and its solution, but in a nice, even sycophantic, manner?

Of course the switch will happen.

That's nice, right?.. right?!

It is. For a bit.

It's actually REALLY nice.

Until the "thing" you "discuss" with has, as its main KPI, keeping you engaged (since its owner gets paid per interaction), regardless of how usable (let's not even say true or correct) its answers are.

That's a deep problem because that thing does not learn.

It has no learning capability. It's not just "a bit slow" or "dumb" but rather it does not learn, at all.

It gets updated with a new dataset, fine-tuned, etc., but there is no action that leads to invalidating a hypothesis, generating a novel one, and then setting up a safe environment to test it (that's basically what learning is).
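The hypothesize–test–invalidate loop described here can be made concrete with a toy sketch (purely illustrative, not a claim about any real system): a learner that acts, observes, and discards every hypothesis the observation contradicts.

```python
def learn(hypotheses, experiment, probes):
    """Keep only the hypotheses that survive every observation."""
    alive = list(hypotheses)
    for x in probes:                    # act within a safe test environment
        observed = experiment(x)        # observe the outcome
        alive = [h for h in alive if h(x) == observed]  # invalidate
    return alive

# A hidden rule standing in for "the world":
world = lambda x: x % 3 == 0

hypotheses = [
    lambda x: x % 2 == 0,    # wrong, but survives some probes
    lambda x: x % 3 == 0,    # matches the world
    lambda x: x > 50,        # wrong
]
survivors = learn(hypotheses, world, probes=range(100))  # one survivor
```

An LLM between dataset updates does none of this: nothing it emits feeds back into a loop that can invalidate its own hypotheses.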

So... you sit there until the LLM gets updated, but... with what? Now that fewer and fewer people bother updating your source (namely SO), how is your "thing" going to learn, sorry, to get updated, without new contributions?

Now if we step back not at the individual level but at the collective level we can see how short-termist the whole endeavor is.

Yes, it might help some people, even a lot of them, to "vile code", sorry I mean "vibe code", their way out of a problem, but if:

  • they, the individual,
  • it, the model,
  • we, society, do not contribute back to the dataset to upgrade from...

well I guess we are going faster right now, for some, but overall we will inexorably slow down.

So yes, epistemologically, we are slowing down, if not worse.

Anyway, I'm back on SO, trying to actually understand a problem. Trying to actually learn from my "bad" situation and, rather than randomly trying the statistically most likely solution, genuinely understand WHY I got there in the first place.

I'll share my answer back on SO, hoping to help others.

Don't just "use" a tool; think, genuinely. It's not just fun, it's also liberating.

Literally.

Don't give away your autonomy for a quick fix, you'll get stuck."

originally on https://mastodon.pirateparty.be/@utopiah/115315866570543792

[–] amzd@lemmy.world 5 points 11 hours ago

Most importantly, the pipeline from finding a question on SO that you also have, to answering that question after doing some more research, is now completely derailed: if you ask an AI a question and it doesn't have a good answer, you have no way to contribute your eventual solution back.

[–] ThirdConsul@lemmy.ml 11 points 13 hours ago* (last edited 13 hours ago) (1 children)

I honestly think that LLMs will result in no progress ever being made in computer science again.

Most past inventions and improvements were born of necessity, from how sucky computers are and how unpleasant it is to work with them (we call the results "abstraction layers"). And it was mostly done on a company's dime.

Now companies will prefer to produce slop (even more than before), because they hope to automate slop production.

[–] I3lackshirts94@lemmy.world 7 points 13 hours ago

As an expert in my engineering field, I would agree. LLMs have been a great tool in my job for improving my technical writing, or for getting over the hump of coding something every now and then. That's where I see the future for ChatGPT/AI LLMs: providing a tool that can help people broaden their skills.

There is no future in it for the expertise and depth of understanding that would be required to make progress in any field, unless it is specifically trained and guided. I do not trust it with anything highly advanced or technical, as I feel I start to teach it.

[–] ChaoticEntropy@feddit.uk 27 points 16 hours ago* (last edited 14 hours ago) (4 children)

AI will inevitably kill all the sources of actual information. Then all we're going to be left with is the fuzzy learned version of information plus a heap of hallucinations.

What a time to be alive.

[–] kazerniel@lemmy.world 23 points 17 hours ago (1 children)

“With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”

I understand the donors aspect, but I don't think anyone who is satisfied with AI slop would bother to improve wiki articles anyway.

[–] drspawndisaster@sh.itjust.works 21 points 17 hours ago (1 children)

The idea that there's a certain type of person that's immune to a social tide is not very sound, in my opinion. If more people use genAI, it may teach people who could otherwise have become editors in later years to use genAI instead.

[–] kazerniel@lemmy.world 8 points 17 hours ago (3 children)

That's a good point, scary to think that there are people growing up now for whom LLMs are the default way of accessing knowledge.

[–] xylogx@lemmy.world 7 points 14 hours ago (3 children)

Every time someone visits Wikipedia, Wikipedia makes exactly $0. In fact, the visit costs them money. Are people still contributing and/or donating? Those seem like the more important questions to me.

[–] possumparty@lemmy.blahaj.zone 5 points 10 hours ago

yeah, i drop a $20-25 donation yearly.

[–] patatahooligan@lemmy.world 6 points 12 hours ago

There are indirect benefits to visitors, though. Yes, most people are a drain on resources because they visit strictly to read and never to contribute. The minority that do contribute, though, are presumably people who used Wikipedia and liked it, or people who enjoy knowing that other people are benefiting from their contributions. I'm not sure people will donate or edit on Wikipedia if they believe no one is using it.

[–] DMCMNFIBFFF@lemmy.world 6 points 13 hours ago (1 children)

I'd make a cash donation right now if I could.

[–] vulgarcynic@sh.itjust.works 6 points 11 hours ago

I got you fam. I’ve been making a decent monthly donation for years. Consider one of those on your behalf!

[–] Treczoks@lemmy.world 23 points 18 hours ago

Not me. I value Wikipedia content over AI slop.

[–] llama@lemmy.zip 8 points 15 hours ago

Yet I still have to go to the page with the episode lists of my favorite TV shows, because every time I ask an AI which ones to watch, it starts making up episodes that either don't exist or have the wrong numbers.

[–] Mrkawfee@feddit.uk 22 points 20 hours ago* (last edited 20 hours ago) (3 children)

I asked a chatbot for scenarios in which AI wipes out humanity, and the most believable one is where it makes humans so dependent on it, and so infantilized by it, that we just eventually die out.

[–] Tylerdurdon@lemmy.world 2 points 12 hours ago

Oh I didn't mean change the current setup. Create a standalone tool that better uses the wiki framework so people can access it in a different way, that's all.

[–] cupcakezealot@piefed.blahaj.zone 13 points 19 hours ago (3 children)

all websites should block ai and bot traffic on principle.

[–] maniacalmanicmania@aussie.zone 15 points 18 hours ago (1 children)

The problem is that many no longer identify as bots, and they come from hundreds if not thousands of IPs.

[–] kent_eh@lemmy.ca 1 points 10 hours ago* (last edited 10 hours ago)

> all websites should block ai and bot traffic on principle.

Increasing numbers do.

But there is no proof that the LLM-trawling bots are willing to respect those blocks.
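For context, the usual first line of defense is robots.txt, which only works for crawlers that both identify themselves and choose to comply (GPTBot and CCBot are real, documented crawler user agents; coverage of other crawlers varies):

```
# robots.txt — a request, not an enforcement mechanism
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Hard blocking requires server-side filtering by user agent or IP, which is exactly what fails once crawlers stop identifying themselves.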

[–] anticurrent@sh.itjust.works 11 points 19 hours ago* (last edited 19 hours ago) (4 children)

I'm kind of a big hater of AI and of the danger it represents to the future of humanity.

But, as a hobby programmer, I was surprised at how well these LLMs can answer very technical questions, provide conceptual insight, and suggest how to glue different pieces of software together and what the limitations of each one are. I know that if the AI knows about this stuff, it must have been produced by a human. But considering the shitty state of the internet, where copycat websites compete to outrank each other with garbage blocks of text that never answer what you are looking for, while the honest blog post is buried on page 99 of the search results, I can't see how old-school search will win out.

Add to that, I have found forums and platforms like Stack Overflow not always very helpful; I have many unanswered questions on Stack Overflow piled up over the years, things that LLMs can answer in detail in seconds, without ever being annoyed at me or making passive-aggressive comments.

[–] kent_eh@lemmy.ca 1 points 10 hours ago* (last edited 10 hours ago)

> I know that if AI knows about this stuff it must have been produced by a human.

For now. Maybe.

It won't be long before these LLMs start ingesting the output of other LLMs, biases, confidently wrong answers, hallucinations and all.
