hedgehog

joined 2 years ago
[–] hedgehog@ttrpg.network 1 points 1 week ago

This is what I would try first. It looks like 1337 is the exposed port, per https://github.com/nightscout/cgm-remote-monitor/blob/master/Dockerfile

x-logging: &default-logging
  options:
    max-size: '10m'
    max-file: '5'
  driver: json-file

services:
  mongo:
    image: mongo:4.4
    volumes:
      - ${NS_MONGO_DATA_DIR:-./mongo-data}:/data/db:cached
    logging: *default-logging

  nightscout:
    image: nightscout/cgm-remote-monitor:latest
    container_name: nightscout
    restart: always
    depends_on:
      - mongo
    logging: *default-logging
    ports:
      - 1337:1337
    environment:
      ### Variables for the container
      NODE_ENV: production
      TZ: [removed]

      ### Overridden variables for Docker Compose setup
      # The `nightscout` service can use HTTP, because we use `nginx` to serve the HTTPS
      # and manage TLS certificates
      INSECURE_USE_HTTP: 'true'

      ### Required variables
      # MONGO_CONNECTION - The connection string for your Mongo database.
      # Something like mongodb://sally:sallypass@ds099999.mongolab.com:99999/nightscout
      # The default connects to the `mongo` included in this docker-compose file.
      # If you change it, you probably also want to comment out the entire `mongo` service block
      # and `depends_on` block above.
      MONGO_CONNECTION: mongodb://mongo:27017/nightscout

      # API_SECRET - A secret passphrase that must be at least 12 characters long.
      API_SECRET: [removed]

      ### Features
      # ENABLE - Used to enable optional features, expects a space delimited list, such as: careportal rawbg iob
      # See https://github.com/nightscout/cgm-remote-monitor#plugins for details
      ENABLE: careportal rawbg iob

      # AUTH_DEFAULT_ROLES (readable) - possible values readable, denied, or any valid role name.
      # When readable, anyone can view Nightscout without a token. Setting it to denied will require
      # a token for every visit; using status-only will enable api-secret based login.
      AUTH_DEFAULT_ROLES: denied

      # For all other settings, please refer to the Environment section of the README
      # https://github.com/nightscout/cgm-remote-monitor#environment

[–] hedgehog@ttrpg.network 1 points 1 week ago (1 children)

To run it with Nginx instead of Traefik, you need to figure out what port Nightscout’s web server runs on, then expose that port, e.g.,

services:
  nightscout:
    ports:
      - 3000:3000

You can remove the labels, since those are only used by Traefik, and you can drop the Traefik service itself as well.

Then just point Nginx to that port (e.g., 3000) on your local machine.

---

Traefik has to know the port, too, but it can auto-detect the port a local Docker service is listening on. It looks like your config relies on that feature, since I don’t see the label that explicitly specifies the port.
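
If you ever do want to be explicit, the Traefik (v2) label would look something like this - a minimal sketch, assuming the service is named nightscout and uses port 1337 from the Dockerfile above:

services:
  nightscout:
    labels:
      # Tell Traefik which container port to route traffic to
      - traefik.http.services.nightscout.loadbalancer.server.port=1337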

[–] hedgehog@ttrpg.network 1 points 2 weeks ago

It’s not “dark green,” that’s for sure.

[–] hedgehog@ttrpg.network 1 points 1 month ago (1 children)

I thought Hue bulbs used Zigbee?

[–] hedgehog@ttrpg.network 3 points 1 month ago (1 children)

The up arrow moves through the letters, e.g., A->B->C. The down arrow moves to the next character in the sequence, e.g., C->CA->CAA. If you click past the correct letter, you’ll have to click all the way through again. And if you submit the wrong letter, you have to start all over (after it spends twenty seconds trying to connect with the wrong password and then alerts you that it didn’t work, of course).

[–] hedgehog@ttrpg.network 2 points 1 month ago

The products currently on the market have architectures that are far more sophisticated than a bare LLM. Even something as simple as “Deep Research,” which both OpenAI and Anthropic now offer, uses multiple interconnected systems to produce a single response.

Consider agentic AI, like Claude Code, where the model is using tools, analyzing the results of those tools, iterating, possibly calling out to MCP servers to do other things, etc. The tools allow it to do things like read or modify files in the working directory, execute programs (e.g., your linter, installing dependencies, running your app), query your app itself, and so on.

And of course, note that the single “Claude” box in that diagram has an architecture that’s more sophisticated than just being an LLM. At minimum, consumer-facing LLMs generally have a supervisor that censors problematic inputs and outputs; this doesn’t make the system more competent, but the same concept can be applied to any other sort of transparent wrapper.

It seems to me that we already have consumer systems that are doing what you described, and we’re already working on enhancing their architectures further.

[–] hedgehog@ttrpg.network 10 points 1 month ago (5 children)

OP is also in the allegedly ultra-rare camp of “successfully configured Jellyfin and lived to tell the tale.” Not what I’d expect of someone unable to configure Plex correctly. I’ve not set up a Plex server myself, but my guess is it wasn’t clear that it was misconfigured - it did work previously, after all.

[–] hedgehog@ttrpg.network 22 points 1 month ago (12 children)

If they’re calling it remote streaming when you’re on the same (local) network, that’s not exactly intuitive. I’d say OP’s phrasing was fair.

[–] hedgehog@ttrpg.network 18 points 2 months ago

You can run a NAS with any Linux distro - your limiting factor is having enough drive storage. You might want to consider something that’s great at using virtual machines (e.g., Proxmox) if you don’t like Docker, but I have almost everything I want running in Docker and haven’t needed to spin up a single virtual machine.

[–] hedgehog@ttrpg.network 1 points 2 months ago (1 children)

I think the better question than “Does the experience system sound like it has potential,” then, is “Does the overall concept / system have potential?”

My gut says probably, but it depends a lot more on what you’re willing to put into it and what you want out of it. What’s your metric for success? If it’s something you want to run yourself and share online so a few groups can use it, that’s a lot more achievable than landing a publishing deal, for example. In between those, publishing on DriveThruRPG or something similar at a nominal cost (like $2-$5) would take more effort than the former and less than the latter; and the higher the price and the more players you want, the more effort you’ll need to put in (and a lot of that isn’t just system building, but art, community building, marketing, etc.).

From what you’ve shared, it sounds like an interesting system. I could especially see it working in an academy setting where grinding skills to be able to pass practical exams is one of the players’ goals. I could also see it working well in a loosely GMed play-by-post setting, with the players self-enforcing (or possibly leveraging some tools built into the site to track resource pools, experience, rolling, etc.), though I haven’t played in a forum game myself, so I might be way off base.

Did your system have classes or was it completely free-form in terms of gaining access to those skill trees?

[–] hedgehog@ttrpg.network 3 points 2 months ago (4 children)

I run a Monster of the Week game and my players get experience throughout sessions, as well as at the end. The mechanics are basically:

  • It takes 5 experience points to level up.
  • If you fail a roll, you get an experience point.
  • If you level up, you get the benefit immediately.
  • At the end of the session, everyone gets 0-2 experience points.

I think other PbtA (Powered by the Apocalypse - systems inspired by Apocalypse World) systems do something similar.

I grew increasingly frustrated with the system of only distributing advancement/experience points at the end of a session.

Isn’t the simple fix to this to just distribute experience points as soon as they’re earned?

At some point, I started to devise a play system that relied on a split experience attribution system, with players being able to automatically rack up experience points from directly using their skills/abilities, while the DM would keep a tally of points from goals/missions achieved, distributable at session end.

Your system sounds like the way that skill-based video game RPGs (Elder Scrolls games and Arcanum come to mind) handle experience.

In a lot of games I’ve played, I’d rather get experience for in-game accomplishments immediately and be able to train skills like this during downtime - generally between games.

To those with more experience in TTRPGs: would this be feasible? Or enticing? Interesting?

I could see people being interested in it. You get instant gratification and a bit of extra crunchiness. A lot of players enjoy that.

With the right skill system I could see this being useful. My main concern is that if you put this on top of a system with relatively few skills, it could encourage people to game it by grinding. There are ways to mitigate that, though.

In a system with fewer skills, instead of just being experience points, the “currency” you earn this way could be used for temporary power-ups related to the skill in question.

You could also limit it so you only rewarded players for story-related tasks.

[–] hedgehog@ttrpg.network 3 points 3 months ago

Wow, there isn’t a single solution in here with the obvious answer?

You’ll need a domain name. It doesn’t need to be a paid one - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS, since your home IP can change. I use Cloudflare’s DNS for this for free (not their other services), even though I bought my domains from Namecheap.
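
If you go the DuckDNS route with Docker, the updater can run as just another compose service. Here’s a minimal sketch, assuming the linuxserver/duckdns image; the subdomain and token below are placeholders:

services:
  duckdns:
    image: lscr.io/linuxserver/duckdns:latest
    restart: unless-stopped
    environment:
      SUBDOMAINS: myjellyfin        # placeholder - your DuckDNS subdomain
      TOKEN: your-duckdns-token     # placeholder - your DuckDNS account token
      TZ: Etc/UTC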

Then, you can either set up Let’s Encrypt on the device itself and have it generate certs in a location Jellyfin knows about (not sure what this entails exactly, as I don’t use this approach), or you can do what I do:

  1. Set up a reverse proxy - I use Traefik but there are a few other solid options - and configure it to use Let’s Encrypt and your domain name.
  2. Your reverse proxy should have ports 443 and 80 exposed, but it should redirect HTTP requests to HTTPS.
  3. Add Jellyfin as a service and route in your reverse proxy’s config.

On your router, forward port 443 to the HTTPS port on your Pi (which, for simplicity’s sake, should also be port 443). You’ll likely also need to forward port 80 so Let’s Encrypt can complete its HTTP-based verification.

If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback, then you can use the server’s IP address and expose Jellyfin’s HTTP port (8096 by default) - just make sure not to forward that port from the router. You’ll have local unencrypted transfers if you do this, though.

Make sure you have secure passwords in Jellyfin. Note that you’re exposed if a vulnerability is found in Jellyfin or Traefik, so make sure to keep your software updated.

If you use Docker, I can share some config info with you on how to set this all up - Traefik, Jellyfin, and a dynamic DNS updater - as docker-compose services.
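
In the meantime, here’s a rough sketch of that setup - assuming Traefik v2, Let’s Encrypt’s HTTP challenge, and Jellyfin’s default internal port 8096; the email, hostname, and paths are placeholders:

services:
  traefik:
    image: traefik:v2.11
    restart: unless-stopped
    command:
      # Only route containers that opt in via labels
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      # Listen on 80/443 and redirect HTTP to HTTPS
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --entrypoints.web.http.redirections.entrypoint.to=websecure
      - --entrypoints.web.http.redirections.entrypoint.scheme=https
      # Let's Encrypt certs via the HTTP-01 challenge on port 80
      - --certificatesresolvers.le.acme.httpchallenge=true
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
      - --certificatesresolvers.le.acme.email=you@example.com   # placeholder
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    volumes:
      - ./jellyfin/config:/config
      - ./media:/media
    labels:
      - traefik.enable=true
      - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.duckdns.org`)   # placeholder hostname
      - traefik.http.routers.jellyfin.entrypoints=websecure
      - traefik.http.routers.jellyfin.tls.certresolver=le
      # Jellyfin listens on 8096 internally; Traefik routes HTTPS traffic to it
      - traefik.http.services.jellyfin.loadbalancer.server.port=8096

Ports 443 and 80 on the Traefik service are the ones you’d forward from your router, per the above.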
