PlasticExistence

joined 2 years ago
[–] PlasticExistence@lemmy.world -1 points 1 month ago (3 children)

It might surprise you to know that you’re not entitled to a free education from me. Your original query of “What’s the difference?” is what I responded to willingly. Your philosophical exploration of the nature of intelligence is not in the same ballpark.

I’ve done vibe coding too, enough to understand that the LLMs don’t think.

https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/

[–] PlasticExistence@lemmy.world -1 points 1 month ago (5 children)

Then you should have an easier time than most learning more. Your points show a lack of understanding about the tech, and I don’t have the time to pick everything you said apart to try to convince you that LLMs do not have sentience.

[–] PlasticExistence@lemmy.world 3 points 1 month ago

Performance has been better for me too. I keep both installed on my media server, but I hope one day I can easily ditch Plex.

[–] PlasticExistence@lemmy.world 2 points 1 month ago (7 children)

I would do more research on how they work. You’ll be a lot more comfortable making those distinctions then.

[–] PlasticExistence@lemmy.world 15 points 1 month ago* (last edited 1 month ago) (21 children)

Parrots can mimic humans too, but they don’t understand what we’re saying the way we do.

AI can’t create something from scratch on its own the way a human can. It can only mimic the data it has been trained on.

LLMs like ChatGPT operate on probability. They don’t actually understand anything and aren’t intelligent. They can’t think. They just predict which word or sentence is most likely to come next and string things together that way.

If you ask ChatGPT a question, it analyzes your words and responds with the series of words it has calculated to have the highest probability of being correct.

The reason they seem so intelligent is that they have been trained on absolutely gargantuan amounts of text from books, websites, news articles, etc. Because of this, the calculated probabilities of related words and ideas are accurate enough to let them mimic human speech in a convincing way.
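To make the "it just predicts the next word" idea concrete, here's a toy sketch: a bigram model that counts which word follows which in a tiny corpus and always emits the most probable continuation. This is a deliberately simplified illustration, not how real LLMs work internally (they use neural networks over tokens, not word-count tables), but the core loop is the same: pick a likely next word, append it, repeat.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Emit the statistically most likely continuation.
    return follows[word].most_common(1)[0][0]

def generate(start, length=5):
    out = [start]
    for _ in range(length):
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the"))
```

The output looks vaguely like English only because the counts were learned from English text; nothing in the loop "understands" cats or mats. Real models differ in scale and mechanism, but not in this basic predict-and-append structure.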

And when they start hallucinating, it’s because they have no concept of whether what they’re saying is true, and so far this is a core problem that nobody has been able to solve. The best mitigation involves checking the output of one LLM with a second LLM.

[–] PlasticExistence@lemmy.world 4 points 1 month ago* (last edited 1 month ago) (1 children)

I get that. I prefer to use what just works. Plex has been putting up barriers to local use for a while now. Jellyfin has less polish, but none of the bullshit.

[–] PlasticExistence@lemmy.world 2 points 1 month ago

It increases on April 29th, so if you still want to buy a lifetime pass at the current rate, you have until then.

[–] PlasticExistence@lemmy.world 5 points 1 month ago (4 children)

There’s an extension/plugin that you can install for that. I have been meaning to do that, but I keep forgetting, so I can’t say how good it is.

[–] PlasticExistence@lemmy.world 10 points 1 month ago (3 children)

This is my experience too. The web interface is usable, but a bit rough. It’s a lot like the early Plex web UI. The client options are okay on Android / Google TV but kinda bad on Apple TV.

Hopefully, as more people discover Jellyfin, interest in developing both the server and the clients will surpass Plex’s.

I appreciate what Plex has offered for free for many years now, and I was once a subscriber, but I don’t love it anymore because I’m looking for the straightest path to watching my library on my devices. Jellyfin delivers this better most of the time.

[–] PlasticExistence@lemmy.world 1 points 1 month ago

Ironic username, but no, there are none righteous
