ar1

joined 3 weeks ago
[–] ar1@lemmy.sdf.org 13 points 16 hours ago

AI uses data to "guess" the most likely outcome. An LLM uses that to pick the guess with the highest probability of "sounding correct" to a human, and it is greatly affected by the data it was trained on.
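That "pick the highest-probability guess" step can be sketched in a few lines. This is a toy illustration only: the candidate words and their scores below are invented, not from any real model.

```python
import math

# Invented next-token scores (logits) for a few candidate words.
logits = {"Paris": 5.2, "London": 2.1, "banana": -1.3}

# Softmax turns raw scores into probabilities that sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# The model then tends to emit whichever token "sounds correct",
# i.e. the one with the highest probability.
best = max(probs, key=probs.get)
print(best)
```

The point being: the choice is purely statistical, driven by the training data, not by any judgment of whether the answer is true.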

One thing that is very different is that an AI/LLM doesn't take responsibility for what it says. Depending on its training data, it may tell someone with an incurable disease to kill themselves when they ask about possible treatments. That would be truly bizarre if it ever happened in a human conversation. But because you don't like the answer and don't think it is "correct", you say the AI is "hallucinating".

It's like talking to a lion: you can mimic a roar, but it's up to the lion to decide whether it sounds friendly or rude...

[–] ar1@lemmy.sdf.org 3 points 2 weeks ago

Librem 5 on its own, and with the optional lapdock for serious outdoor work?