I've been meaning to try Logdy out. Thanks for the reminder!
lmao this is exactly what I've been lookin for... Thanks! I just knew if I was a lazy fuck and sat on my hands someone would do the work for me eventually!
Glad to help! XD
https://moonpiedumplings.github.io/playground/ccdc-logs/
I played around with some non-elasticsearch web/gui based solutions as well.
I can attest to Lnav being great, short of implementing a full Grafana/Loki stack (which is what I use for most of my infrastructure).
Lnav makes log browsing/filtering in the terminal infinitely more enjoyable.
> I can attest to Lnav being great
I'm sitting here running it through some logs. So far, it's on top of the stack.
Those two look pretty interesting. Thanks, I'll check them out.
I installed Grafana, simply because it was the only one I had heard of, and I figured that becoming familiar with it was probably useful from a professional development standpoint.
It's definitely massive overkill for my use case, though, and I'm looking to replace it with something else.
I'll be the first to admit that I'm a sucker for dialed-out dashboards. However, logs are confusing enough for me. LOL. I need just the facts, ma'am. Grafana is a great package tho, useful for a lot of metrics.
I use VictoriaLogs, with Vector as the log-forwarding agent.
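The wiring is pretty minimal. Roughly this shape, going off the VictoriaLogs docs for Vector's elasticsearch sink (the hostname, file paths, and _stream_fields are placeholders you'd swap for your own):

sources:
  host_logs:
    # tail plain log files on the host
    type: file
    include:
      - /var/log/*.log

sinks:
  victorialogs:
    # VictoriaLogs ingests via its Elasticsearch-compatible endpoint
    type: elasticsearch
    inputs:
      - host_logs
    endpoints:
      - http://victorialogs.lan:9428/insert/elasticsearch/
    api_version: v8
    compression: gzip
    healthcheck:
      enabled: false
    # VictoriaLogs-specific hints, passed as query params
    query:
      _msg_field: message
      _time_field: timestamp
      _stream_fields: host

Vector tails the files, batches, and ships; VictoriaLogs works out the log streams from whatever you list in _stream_fields.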
Dozzle. LogForge is a new one I've seen but not tried.
It is my understanding that while you can use Dozzle to view other logs besides Docker logs, you have to deploy separate instances. While Dozzle is awesome, I'm not sure I want to spin up 5 or 6 separate Dozzle instances. I do use Dozzle a lot for Docker logs and it's fantastic for that.
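That said, newer Dozzle releases have an agent mode where the per-host instances feed a single UI, which might take some of the sting out of it. If I'm reading the docs right, it's roughly this (untested on my end; host-a/host-b and the ports are placeholders):

# on each remote host
services:
  dozzle-agent:
    image: amir20/dozzle:latest
    command: agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "7007:7007"

# on the host running the UI
services:
  dozzle:
    image: amir20/dozzle:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # comma-separated list of agents to attach
      DOZZLE_REMOTE_AGENT: "host-a:7007,host-b:7007"
    ports:
      - "8080:8080"

Still one deployment per host, but at least it's a single pane of glass.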
My backup is a self-hosted Splunk.
Saw a post this past week on SSD failures. They're blaming a lot of it on 'over-logging': too much trivial, unnecessary data being written to logs. I imagine it gets worse when realtime telemetry like OpenTelemetry gets involved.
Until I saw that, I never thought there was such a thing as 'too much logging.' I wonder if there are any ways around it, other than putting logs on spinny disks.
Oh, I'm not moving that much data to logs, and the logs I read are all the normal stuff, nothing exotic. I guess if it were a huge corporation that had every Nagios plugin known to man and was logging and log-rotating all of that, then yeah, I could see it.
That would be wild if it was caused by logging. Even a cheap piece-of-crap SSD is usually rated for 500 TBW, so even if you were generating 1 TB of logs per month, it would still take about 41 years to wear out.
My used enterprise SSDs from eBay are rated for 3.6 PBW, and they were cheaper than a basic consumer Samsung drive at the time.
Wow, you just gave me flashbacks to my first Linux/unix job in 2008. Tripwire and logwatch reports to review every morning.
Can you clarify what your concern is with "heavy" logging solutions that require database/elasticsearch? If you're worried about system resources that's one thing, but if it's just that it seems "complicated," I have a docker compose file that handles Graylog, Opensearch, and Mongodb. Just give it a couple of persistent storage volumes, and it's good to go. You can send logs directly to it with syslog or gelf, or set up a filebeat container to ingest file logs.
There's a LOT you can do with it once you've got your logs into the system, but you don't NEED to do anything else. Just something to consider!
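For instance, once the stack is up, shipping any other container's logs into it is just a logging stanza on that container (sketch; the hostname and service name are placeholders for your setup):

services:
  some-app:
    image: nginx:alpine
    logging:
      driver: gelf
      options:
        # point at the GELF UDP input on your Graylog host
        gelf-address: "udp://graylog.lan:12201"
        tag: "some-app"

Docker's gelf driver handles the rest, and the tag comes through as a field you can filter on in Graylog.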
> If you’re worried about system resources that’s one thing
My thinking was that, even though I know Graylog et al. are fantastic apps, if I could get away with something light, like Logwatch and lnav, that would let me read logs fairly easily while staying light on resources, and I could channel those resources into other projects. I'm working on a remote VPS with 32 GB of RAM, so yes, I can run the big apps, and I know just enough about Docker that it's not way over my head as far as complexity goes. This particular VPS has only one user, so I'm not generating tons of user logs, etc. IDK, it all made sense when I was thinking about it. LOL. I do like a nice, dialed-out UI tho.
> I have a docker compose file that handles Graylog, Opensearch, and Mongodb
I certainly would like the opportunity to take a look at it, maybe run it on my test server and see how it does.
'presh
Here you go. I commented out what is not necessary. There are some passwords noted that you'll want to set to your own values. Also, pay attention to the volume mappings... I left my values in there, but you'll almost certainly need to change those to make sense for your host system. Hopefully this is helpful!
services:
  mongodb:
    image: "mongo:6.0"
    volumes:
      - "/mnt/user/appdata/mongo-graylog:/data/db"
      # - "/mnt/user/backup/mongodb:/backup"
    restart: "on-failure"
    # logging:
    #   driver: "gelf"
    #   options:
    #     gelf-address: "udp://10.9.8.7:12201"
    #     tag: "mongodb"

  opensearch:
    image: "opensearchproject/opensearch:2.13.0"
    environment:
      - "OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g"
      - "bootstrap.memory_lock=true"
      - "discovery.type=single-node"
      - "action.auto_create_index=false"
      - "plugins.security.ssl.http.enabled=false"
      - "plugins.security.disabled=true"
      - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=[yourpasswordhere]"
    ulimits:
      nofile: 64000
      memlock:
        hard: -1
        soft: -1
    volumes:
      - "/mnt/user/appdata/opensearch-graylog:/usr/share/opensearch/data"
    restart: "on-failure"
    # logging:
    #   driver: "gelf"
    #   options:
    #     gelf-address: "udp://10.9.8.7:12201"
    #     tag: "opensearch"

  graylog:
    image: "graylog/graylog:6.2.0"
    depends_on:
      opensearch:
        condition: "service_started"
      mongodb:
        condition: "service_started"
    entrypoint: "/usr/bin/tini -- wait-for-it opensearch:9200 -- /docker-entrypoint.sh"
    environment:
      GRAYLOG_TIMEZONE: "America/Los_Angeles"
      TZ: "America/Los_Angeles"
      GRAYLOG_ROOT_TIMEZONE: "America/Los_Angeles"
      GRAYLOG_NODE_ID_FILE: "/usr/share/graylog/data/config/node-id"
      GRAYLOG_PASSWORD_SECRET: "[anotherpasswordhere]"
      GRAYLOG_ROOT_PASSWORD_SHA2: "[aSHA2passwordhash]"
      GRAYLOG_HTTP_BIND_ADDRESS: "0.0.0.0:9000"
      GRAYLOG_HTTP_EXTERNAL_URI: "http://localhost:9000/"
      GRAYLOG_ELASTICSEARCH_HOSTS: "http://opensearch:9200/"
      GRAYLOG_MONGODB_URI: "mongodb://mongodb:27017/graylog"
    ports:
      - "5044:5044/tcp"   # Beats
      - "5140:5140/udp"   # Syslog
      - "5140:5140/tcp"   # Syslog
      - "5141:5141/udp"   # Syslog - dd-wrt
      - "5555:5555/tcp"   # RAW TCP
      - "5555:5555/udp"   # RAW UDP
      - "9000:9000/tcp"   # Server API
      - "12201:12201/tcp" # GELF TCP
      - "12201:12201/udp" # GELF UDP
      - "10000:10000/tcp" # Custom TCP port
      - "10000:10000/udp" # Custom UDP port
      - "13301:13301/tcp" # Forwarder data
      - "13302:13302/tcp" # Forwarder config
    volumes:
      - "/mnt/user/appdata/graylog/data:/usr/share/graylog/data/data"
      - "/mnt/user/appdata/graylog/journal:/usr/share/graylog/data/journal"
      - "/mnt/user/appdata/graylog/etc:/etc/graylog"
    restart: "on-failure"

volumes:
  mongodb_data:
  os_data:
  graylog_data:
  graylog_journal:
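A couple of footnotes on that. GRAYLOG_ROOT_PASSWORD_SHA2 wants the SHA-256 hash of your admin password, not the password itself (something like echo -n yourpassword | sha256sum will produce it). The named volumes at the bottom aren't actually referenced by any service, since everything runs on the bind mounts under /mnt/user/appdata, so you can drop that block or swap the mappings over to named volumes, whichever suits your host. And once it's up, you still have to create the inputs (syslog, GELF, etc.) under System > Inputs in the web UI before any logs show up.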
Dude! Thanks so much. You're very generous with your time. I guess now I have no choice nor excuse. I'll run it up the flagpole sometime this weekend.
My pleasure! Getting this stuff together can be a pain, so I'm always trying to pay it forward. Good luck and let me know if you have any questions!