hedgehog

joined 2 years ago
[–] hedgehog@ttrpg.network 3 points 2 weeks ago

Wow, there isn’t a single solution in here with the obvious answer?

You’ll need a domain name. It doesn’t need to be paid - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS. I use Cloudflare for this for free (just DNS, not their other services), even though I bought my domains from Namecheap.

Then, you can either set up Let’s Encrypt on the device and have it generate certs in a location Jellyfin knows about (not sure what this entails exactly, as I don’t use this approach), or you can do what I do:

  1. Set up a reverse proxy - I use Traefik but there are a few other solid options - and configure it to use Let’s Encrypt and your domain name.
  2. Your reverse proxy should have ports 443 and 80 exposed, and should redirect HTTP requests to HTTPS.
  3. Add Jellyfin as a service and route in your reverse proxy’s config.

On your router, forward port 443 to the port your reverse proxy listens on on your Pi (which, for simplicity’s sake, should also be port 443). You’ll likely also need to forward port 80 so Let’s Encrypt can complete its domain verification.

If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback, then you can use the server’s IP address and expose Jellyfin’s HTTP port (8096 by default) - just make sure not to forward that port on the router. Local transfers will be unencrypted if you do this, though.

Make sure you have secure passwords in Jellyfin. Note that if a vulnerability is found in Jellyfin or Traefik, you’ll be exposed to it, so make sure to keep your software updated.

If you use Docker, I can share some config info with you on how to set this all up with Traefik, Jellyfin, and a dynamic DNS client as docker-compose services.
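In the meantime, here’s a rough sketch of what that looks like. This is not my exact config - the domain, email, and paths are placeholders, and a dynamic DNS container (e.g., linuxserver/duckdns) would be a third service:

```yaml
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --providers.docker.exposedByDefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      # redirect plain HTTP to HTTPS
      - --entrypoints.web.http.redirections.entrypoint.to=websecure
      - --entrypoints.web.http.redirections.entrypoint.scheme=https
      # Let's Encrypt via the HTTP challenge on port 80
      - --certificatesresolvers.letsencrypt.acme.email=you@example.com
      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.letsencrypt.acme.httpchallenge=true
      - --certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - ./jellyfin-config:/config
      - /path/to/media:/media:ro
    # ports: ["8096:8096"]   # optionally expose for direct LAN access (unencrypted)
    labels:
      - traefik.enable=true
      - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)
      - traefik.http.routers.jellyfin.entrypoints=websecure
      - traefik.http.routers.jellyfin.tls.certresolver=letsencrypt
      - traefik.http.services.jellyfin.loadbalancer.server.port=8096  # Jellyfin's default HTTP port
```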

[–] hedgehog@ttrpg.network 4 points 3 weeks ago (1 children)

Look up “LLM quantization.” The idea is that each parameter is a number; by default they’re stored with 16 bits of precision, but if you store them at a lower precision they take less space (at some cost in accuracy) while you still have the same number of parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable as you go lower and lower. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)

If you’re using a 4-bit quantization, then you need roughly half the parameter count (in billions) as GB of VRAM, plus some overhead for context. Q4_K_M is better than Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit it, Q5_K_M is generally better than any other option, followed by Q5_K_S.

For example, Llama 3.3 70B, which has 70.6 billion parameters, has the following sizes for some of its quantizations:

  • fp16: 141 GB
  • q8: 75 GB
  • q6_K: 58 GB
  • q5_K_M: 50 GB
  • q4_K_M (the default): 43 GB
  • q4: 40 GB
  • q3_K_M: 34 GB
  • q2_K: 26 GB

This is why I run a lot of Q4_K_M 70B models on two 3090s.
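You can estimate these sizes yourself: size ≈ parameter count × bits per weight ÷ 8, plus a bit of overhead. Q4_K_M works out to roughly 4.8 bits per weight, so 70.6 billion × ~4.8 ÷ 8 ≈ 42 GB, which lines up with the 43 GB figure above - and explains why it fits across two 24 GB 3090s with room left for context.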

Generally speaking, there’s not a perceptible quality drop going from 8-bit quantization to Q6_K (though I have heard this is less true with MoE models). Below Q6 there’s a bit of a drop going to Q5 and again to Q4, but the model’s still decent. Below 4-bit quantizations, you can generally get better results from a smaller-parameter model at a higher quantization.

TheBloke on Hugging Face has a lot of GGUF quantization repos, and most, if not all, of them have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find one there.

[–] hedgehog@ttrpg.network 6 points 3 weeks ago (1 children)

I recommend a used 3090, as that has 24 GB of VRAM and can generally be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090, and while it’s admittedly more expensive than the inexpensive 24 GB Nvidia Tesla card (the P40?), it also has much better performance and CUDA support.

I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.

[–] hedgehog@ttrpg.network 7 points 3 weeks ago (1 children)

From https://www.yalemedicine.org/news/covid-vaccines-reduce-long-covid-risk-new-study-shows

At the pandemic’s onset, approximately 10% of people who suffered COVID-19 infections went on to develop Long COVID. Now, the risk of getting Long COVID has dropped to about 3.5% among vaccinated people (primary series).

...

Then, the team conducted analyses to uncover the reasons for the observed decline in Long COVID cases from the pre-Delta to Omicron eras. About 70% of the decline was attributable to vaccination, they found.

[–] hedgehog@ttrpg.network 3 points 3 weeks ago (1 children)

The above post says it has support for Ollama, so I don’t think this is the case… but the instructions in the Readme do make it seem like it’s dependent on OpenAI.

[–] hedgehog@ttrpg.network 3 points 1 month ago (2 children)

Are you saying that NAT isn’t effectively a firewall or that a NAT firewall isn’t effectively a firewall?

[–] hedgehog@ttrpg.network 3 points 1 month ago (3 children)

Is there a way to use symlinks instead? I’d think it would be possible, even with Docker - it would just require the torrent directory to be mounted read-only, at the same path, in every Docker container that has symlinks to files on it.
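Something like this in the compose file, roughly (service names and paths are hypothetical - the point is just that /data/torrents resolves to the same path in every container, so symlinks created in one stay valid in the others):

```yaml
services:
  qbittorrent:                              # or whatever torrent client you use
    image: linuxserver/qbittorrent
    volumes:
      - /data/torrents:/data/torrents       # client needs write access here

  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /data/torrents:/data/torrents:ro    # read-only, same path as above
      - /data/media:/data/media:ro          # library of symlinks pointing into /data/torrents
```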

[–] hedgehog@ttrpg.network 1 points 1 month ago

Depending on setup this can be true with Jellyfin, too. I have a domain registered, use dynamic DNS, and have Traefik direct a subdomain to my Jellyfin server. My mobile clients are configured using that. My local clients use the local static IP.

If my internet goes down, my mobile clients can’t connect, even on the LAN.