Was it really "like that" for any length of time? To me it seems like most people just believed whatever bullshit they saw on Facebook/Twitter/Insta/Reddit; otherwise it wouldn't make sense to have so many bots pushing political content there. Before the internet it was reading some random book or magazine you found, and before that it was hearsay from a relative.
I think the people who did the research will continue doing the research. It doesn't matter whether it's through a library, a search engine, Wikipedia's sources, or AI's sources. As long as you know how to read the actual source, compare it with other (probably contradictory) information, and synthesize a conclusion for yourself, you'll be fine. And if you weren't willing to do that, it was always easy to stumble onto misinfo or disinfo anyway.
One actual problem AI might cause is scientists themselves starting to use it without due diligence. People are definitely using LLMs to help them write and structure papers ¹. That alone would probably be fine, but if they actually use it to "help" with methodology or other substantive content, then we would indeed be in trouble, given how confidently incorrect LLM output can be.
Yes, but that number is getting smaller. Where I live, most households rarely have a full bookshelf; instead, nearly every member of the family has a "smart" phone, and they'll jump at anything easier than spending hours going through a stack of books. I sincerely hope that good research methods are still being taught, including the ability to distinguish good information from bad.
The internet (via your smartphone) gives you the ability to find any book, magazine, or paper on any subject you want, for free (if you're willing to sail under the right flag), within seconds. Of course no one has a full bookshelf anymore; the only reasons to want physical books nowadays are sentimentality or some very specific old book that hasn't been digitized yet (and in that case you won't have it on your bookshelf anyway, you'll have to go to the library).

The fastest and most accurate way of doing research today is getting the gist on Wikipedia, clicking through the source links and reading those, and combing through arXiv and Sci-Hub for anything relevant. If you're unfamiliar with the subject as a whole, you download the relevant book and read it. Of course no one wants to comb through physical books anymore; it's a complete waste of time (provided, of course, they've been digitized).
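The arXiv step is even scriptable. Here's a minimal sketch against arXiv's public Atom API at export.arxiv.org (the endpoint and parameters are real; the search term "all:misinformation" and the result count are just placeholder assumptions, swap in whatever you're researching):

# Minimal sketch: querying arXiv's public Atom API from Python.
# The endpoint and query parameters are real; the topic and
# max_results are placeholder assumptions.
import re
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "search_query": "all:misinformation",  # placeholder topic
    "start": 0,
    "max_results": 5,
})
url = "http://export.arxiv.org/api/query?" + params

with urllib.request.urlopen(url) as resp:
    feed = resp.read().decode("utf-8")

# Crude title extraction from the Atom XML; a real script would use
# feedparser or xml.etree.ElementTree instead of a regex.
titles = re.findall(r"<title>(.*?)</title>", feed, re.S)
for title in titles[1:]:  # titles[0] is the feed's own title
    print(title.strip())

From there you'd still read the actual papers rather than just the abstracts, same as with any other source.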