this post was submitted on 23 Dec 2025
830 points (97.6% liked)

[–] azvasKvklenko@sh.itjust.works 17 points 2 days ago (13 children)

Oh, so my sceptical, uneducated guesses about AI are mostly spot on.

[–] IAmNorRealTakeYourMeds@lemmy.world 8 points 2 days ago (12 children)

As a computer science experiment, making a program that can pass the Turing test is a monumental step forward.

However, as a productivity tool it is useless in practically everything it is implemented in. It is incapable of performing the very basic "sanity check" that is so important in programming.

[–] robobrain@programming.dev 9 points 2 days ago (1 children)

The Turing test says more about the side administering the test than the side trying to pass it

Just because something can mimic text well enough to trick someone doesn't mean it is capable of anything more than that.

[–] IAmNorRealTakeYourMeds@lemmy.world 2 points 2 days ago (1 children)

We can argue about its nuances, same as with the Chinese room thought experiment.

However, we can't deny that the Turing test is no longer just a thought exercise but a real test that can be passed under parameters most people would consider fair.

I thought a computer passing the Turing test would come with more fanfare about the morality of that problem, because the usual conclusion of the thought experiment was "if you can't tell the difference, is there one?", but instead it has become "shove it everywhere!!!".

[–] M0oP0o@mander.xyz 5 points 2 days ago (1 children)

Oh, I just realized that the whole AI bubble is basically "everything is a dildo if you are brave enough."

[–] IAmNorRealTakeYourMeds@lemmy.world 3 points 2 days ago* (last edited 2 days ago) (1 children)

Yeah, and "everything is a nail if all you have is a hammer".

There are some uses for that kind of AI, but they're very limited: less robotic voice assistants, content moderation, data analysis, quantification of text. The closest thing to a generative use should be improving autocomplete and spell checking (maybe, I'm still not sure about those).

[–] M0oP0o@mander.xyz 2 points 2 days ago (1 children)

I was wondering how they could make autocomplete worse, and now I know.

[–] IAmNorRealTakeYourMeds@lemmy.world 2 points 2 days ago* (last edited 2 days ago) (1 children)

In theory, I can imagine an LLM fine-tuned on whatever you type, which might be slightly better than the current ones.

Emphasis on the might.
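
The comment doesn't describe an implementation, but as a minimal sketch of what "an LLM fine-tuned on whatever you type" could look like, assuming the Hugging Face transformers/datasets stack: the distilgpt2 base model and the my_typing_history.txt file are placeholder choices for illustration, not anything from the thread.

```python
# Sketch: fine-tune a small causal language model on your own typing history
# so its next-word suggestions match how you actually write.
# Assumptions: distilgpt2 as the base model, one typed message per line
# in my_typing_history.txt.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# One sentence/message you actually typed per line.
dataset = load_dataset("text", data_files={"train": "my_typing_history.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="personal-autocomplete",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("personal-autocomplete")
```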

[–] M0oP0o@mander.xyz 1 points 2 days ago (1 children)

Well, right now I have autocorrect changing real words into jumbles of letters (thanks to years of working with acronyms), and autocomplete changing words like "both" to "bitch" and "for" to "fuck", because the system swaps less used words for more used ones (which makes the issue worse).

On top of that, an LLM could check whether the sentence makes sense.

Like in the previous post, where I accidentally started with "I'm theory" because I use Swype typing; using an LLM to predict the following tokens would let the keyboard know what I was likely trying to say.
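
To make that "I'm theory" example concrete, here is one hedged sketch of how a keyboard could let a small language model rank ambiguous swipe candidates in context. The distilgpt2 model and the helper names are illustrative assumptions, not how any real keyboard actually works.

```python
# Sketch: when the swipe decoder is torn between candidate words, score each
# candidate in context with a small LM and prefer the most plausible one.
# Assumption: distilgpt2 as a stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
model.eval()

def sentence_log_prob(text: str) -> float:
    """Total log-probability the model assigns to the string."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean next-token loss;
        # multiply by the number of predicted tokens to get the total.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

def pick_candidate(prefix: str, candidates: list[str]) -> str:
    """Choose the swipe candidate that makes the sentence most plausible."""
    return max(candidates, key=lambda w: sentence_log_prob(f"{prefix} {w}".strip()))

# The swipe gesture is ambiguous between "I'm" and "In" before "theory":
print(pick_candidate("", ["I'm theory,", "In theory,"]))  # expected: "In theory,"
```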
