foremanguy92_

joined 1 year ago
[–] foremanguy92_@lemmy.ml 2 points 2 days ago

Little subquestion: how fast is your Nextcloud instance? Mine is pretty slow and I don't really know why.

[–] foremanguy92_@lemmy.ml 1 points 6 days ago (1 children)

Not your question, sorry, but since you're hosting Nextcloud, what's your experience with it? I find mine pretty slow and not really smooth.

[–] foremanguy92_@lemmy.ml 2 points 6 days ago* (last edited 6 days ago)

So LAN speed is 1000 Mbit/s; WAN speed (from a mainstream speed test) is 500/500.

When I say VPN I basically mean accessing my homelab from outside, so I'm not talking about a commercial VPN.

My whole homelab is in one subnet (including all my other devices). When accessing the speed test locally through the domain, it simply resolves via public DNS.

Hope I answered your questions.

EDIT: does 500/500 WAN on a regular speed test mean that I can get 500 up and 500 down at the same time, or does it mean that I can only get, for example, 250 each simultaneously?

[–] foremanguy92_@lemmy.ml 3 points 2 weeks ago

Hi, first off congrats on going down the homelabbing path.

Like you did, first the hardware:

The EliteDesk is a great line of prebuilt PCs, mainly for little home servers, BUT I wouldn't recommend the mini version: it's very, very tiny and therefore doesn't have great modularity or upgradeability.

You don't need massive servers or towers, but the SFF versions (or the regular towers, which start to get big) are way better and will give you more room to tweak things and, more generally, some space for storage and other additions.

But if you can't allow yourself anything even a tiny bit bigger, that's okay; staying with the mini version is not a dumb choice.

For the storage: depending on what you'll be running in 5 years, 120GB might not be enough. Counting backups, you should consider at least a 256 to 512GB SSD for the system (SATA or NVMe, either way). For raw bulk storage, use hard drives: old-school at first glance, but dirt cheap when you get them on discount. For storing only some videos, photos and music, 2TB usable is fine, and mirroring it (RAID 1) is nice too. But if one day you want larger sizes, RAID 5 could be worth considering, since it lets you expand storage more easily; you can't really convert RAID 1 to RAID 5 without manually backing up and restoring your data.

So buy some hard disks. If you want, you can buy them used (around 15-20 bucks for a good used 2TB drive), or refurbished or new as you wish. For network storage, hard disks are the best because you basically can't max out even a basic NVMe drive over your network: basic ones do around 3000 MB/s, which is 24,000 Mbit/s (24 Gbit/s) of bandwidth, so you would need a 25G network (which I assume you don't have).
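To sanity-check that claim (the drive speeds are rough assumptions, not measurements):

```python
# Rough conversion: NVMe SSD sequential throughput vs. network speed.
nvme_mb_per_s = 3000              # assumed sequential speed of a basic NVMe SSD, MB/s
bits_per_byte = 8

nvme_mbit_per_s = nvme_mb_per_s * bits_per_byte   # 24,000 Mbit/s
nvme_gbit_per_s = nvme_mbit_per_s / 1000          # 24 Gbit/s: beyond even a 10G network
print(f"NVMe needs about {nvme_gbit_per_s:.0f} Gbit/s")

# A fast spinning hard drive (~250 MB/s, also an assumption) vs. gigabit Ethernet:
hdd_gbit_per_s = 250 * bits_per_byte / 1000       # 2 Gbit/s, still above 1 GbE
print(f"HDD needs about {hdd_gbit_per_s:.0f} Gbit/s")
```

So even a hard drive can saturate a typical 1 Gbit/s home network, which is why paying NVMe prices for bulk network storage buys you nothing.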

And a more reasonably sized PC will help you fit all your drives, and maybe put extra NICs in there.

Secondly, the software.

Using Docker to self-host easily is a great idea, but I really don't like Portainer, mainly the way it manages Docker containers.

So I would suggest two things. If you want to get a bit into the tech, simply deploy your Docker containers with Compose files; once you're into it, you'll see it's very simple.
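For example, a minimal compose file looks like this (the Nextcloud image and the ports/paths are just placeholders, adapt them to whatever you run); `docker compose up -d` in the same directory brings it up:

```yaml
# docker-compose.yml — hypothetical example, adjust image/ports/volumes to taste
services:
  nextcloud:
    image: nextcloud:latest
    restart: unless-stopped
    ports:
      - "8080:80"                        # host port 8080 -> container port 80
    volumes:
      - ./nextcloud-data:/var/www/html   # data survives container rebuilds
```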

But if you prefer a simpler approach while not giving up features (as you said, you're a father — congrats), I would recommend YunoHost: it's an out-of-the-box platform to self-host stuff very easily without much technical knowledge.

If the apps are just for you and your wife (or other really close people), using a VPN that gives access to your whole local network, or setting up an overlay VPN like Tailscale (you can self-host Headscale, or use NetBird), would be nice and pretty straightforward.

If you prefer to make it available online, you can reverse proxy the services to open them to the web from your own IP, or use Cloudflare Tunnels (though I don't like the idea of Cloudflare snooping on all my traffic), or use a VPS to do roughly the same thing as Cloudflare Tunnels without having them on your shoulders.
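As a rough sketch of the VPS variant (the hostname, ports, and tunnel address are all assumptions; the tunnel itself, e.g. WireGuard, is set up separately), the VPS would run something like:

```apache
# On the VPS: forward public traffic to the home server over the VPN tunnel.
# 10.0.0.2 is the home server's address inside the tunnel (assumption).
<VirtualHost *:443>
    ServerName cloud.example.com
    ProxyPreserveHost On
    ProxyPass        / http://10.0.0.2:8080/
    ProxyPassReverse / http://10.0.0.2:8080/
</VirtualHost>
```

This way the VPS only sees encrypted tunnel traffic terminating on hardware you rent, instead of a third party sitting in the middle.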

That's it for me; hope this helped, and feel free to ask questions if you wish. Have a great homelabbing journey! :)

 

I found out about Usenet and the sort of "hype" around it, and I wonder about some stuff.

Maybe I'm a bit old school, but isn't this process against the spirit of piracy, which is to:

  • be generous and share
  • fight against censorship and DMCA takedowns
  • work in a decentralized way

Just wondering about it, in case someone wants to give their opinion.

[–] foremanguy92_@lemmy.ml 1 points 2 months ago

Thank you, gonna check it out.

[–] foremanguy92_@lemmy.ml 8 points 2 months ago

Good idea for normal people who don't really know how and what to put on such a device.

[–] foremanguy92_@lemmy.ml 3 points 2 months ago

Said *without docker

[–] foremanguy92_@lemmy.ml 2 points 2 months ago

Basically I want to set up SearXNG without using Docker, but I also want to understand how Apache serves it.

[–] foremanguy92_@lemmy.ml 0 points 2 months ago (2 children)

I'd like to understand it so I can customize it a bit and serve the service on a port instead of a URL path, for example.

[–] foremanguy92_@lemmy.ml 2 points 2 months ago (1 children)

I'd like to make it "pseudo-public". Thank you anyway.

[–] foremanguy92_@lemmy.ml 1 points 2 months ago

Edited the original post, sorry.

14
submitted 2 months ago* (last edited 2 months ago) by foremanguy92_@lemmy.ml to c/selfhosted@lemmy.world
 

Hello, I want to install SearXNG without Docker on my home server, so I basically ran the install script from the official documentation. I already tried the automatic script for setting up Apache, and it basically worked by exposing /searxng. But I would like something simpler: I'd like Apache to only expose localhost:80 (for example) so I can point a reverse proxy at it later. I'm just wondering how this configuration works and how I could adapt it to my use case:

# -*- coding: utf-8; mode: apache -*-

LoadModule ssl_module           /mod_ssl.so
LoadModule headers_module       /mod_headers.so
LoadModule proxy_module         /mod_proxy.so
LoadModule proxy_uwsgi_module   /mod_proxy_uwsgi.so
# LoadModule setenvif_module      /mod_setenvif.so
#
# SetEnvIf Request_URI /searxng dontlog
# CustomLog /dev/null combined env=dontlog

<Location /searxng>

    Require all granted
    Order deny,allow
    Deny from all
    # Allow from fd00::/8 192.168.0.0/16 fe80::/10 127.0.0.0/8 ::1
    Allow from all

    # add the trailing slash
    RedirectMatch  308 /searxng$ /searxng/

    ProxyPreserveHost On
    ProxyPass unix:/usr/local/searxng/run/socket|uwsgi://uwsgi-uds-searxng/

    # see flaskfix.py
    RequestHeader set X-Scheme %{REQUEST_SCHEME}s
    RequestHeader set X-Script-Name /searxng

    # see limiter.py
    RequestHeader set X-Real-IP %{REMOTE_ADDR}s
    RequestHeader append X-Forwarded-For %{REMOTE_ADDR}s

</Location>

# uWSGI serves the static files and in settings.yml we use::
#
#   ui:
#     static_use_hash: true
#
# Alias /searxng/static/ /usr/local/searxng/searxng-src/searx/static/

Any help here? Thank you very much

 

This is again a big win for the red team, at least for me. They developed a "fully open" 3B-parameter model family trained from scratch on AMD Instinct™ MI300X GPUs.

AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) [...]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B [...].

As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png), this model outperforms the other current "fully open" models, coming close to open-weight-only models.

A step further; thank you, AMD.

PS: I'm not doing AMD propaganda, just thanking them for helping and contributing to the open-source world.

2
submitted 7 months ago* (last edited 7 months ago) by foremanguy92_@lemmy.ml to c/selfhosted@lemmy.world
 

Hello! 😀
I want to share my thoughts on Docker and maybe discuss it!
A few months ago I started my homelab, and like any good "homelabbing guy" I absolutely loved using Docker. Simple to deploy and everything. Sadly, my mind has been changing lately... I recently switched to LXC containers to make backups easier, and the experience is pretty great; the only downside is that not every piece of software is available natively outside of Docker 🙃
But I also switched to have more control, as with Docker it can be difficult to set up things the devs didn't really plan for.
So those are my thoughts, and slowly I'm leaving Docker for a more old-school way of hosting services. Don't get me wrong, Docker is awesome in some use cases; the main ones are that it's really portable and simple to deploy, with no hundreds of dependencies, etc. And through this I think I really figured out where Docker is useful: not for every single homelab setup, and mine isn't one of them.

Maybe I'm doing something wrong, but I'll let you discuss it in the comments, thx.

-1
submitted 7 months ago* (last edited 7 months ago) by foremanguy92_@lemmy.ml to c/selfhosted@lemmy.world
 

Hello, I set up my Proxmox server a few weeks ago, and recently I found that LXC containers could be useful, as they really separate all my services into different containers. Since then I've planned to move my Docker services from a VM into several LXC containers. I ran into some issues. The first is that a lot of projects run more smoothly in Docker and don't really have a "normal" way of being packaged... The second is related to the first: since they are not really well integrated into the OS, how do I handle updates?
So I wonder: how do people deploy their stuff in Proxmox's LXC containers?
Thanks for your help!

EDIT: I tried installing Docker on top of a Debian LXC, but the performance was absolutely terrible...

-1
submitted 8 months ago* (last edited 8 months ago) by foremanguy92_@lemmy.ml to c/selfhosted@lemmy.world
 

I just did a completely fresh Drupal install in a Debian VM inside Proxmox. Everything works as intended, but I cannot add content to it (it gives me a 500 error).

Apache logs show that the memory is exhausted. I searched online with no real answer, and tried a lot of things in php.ini, .htaccess... At first the VM had 1 vCPU and 1GB of RAM; that didn't work, so I set the PHP memory limit to 1GB and gave the VM 8GB and 4 vCPUs. Still not working, it just "loads" the 500 longer.

Error in /var/log/apache2/error.log: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 262144 bytes) in Unknown on line 0. This is why I gave the VM 8GB and changed PHP memory_limit to 1GB, but it did nothing...
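One thing I notice writing this down: 134217728 bytes is exactly 128 MiB, PHP's default memory_limit, so maybe the php.ini I edited isn't the one Apache's PHP actually loads. phpinfo()'s "Loaded Configuration File" line should say which one; for mod_php on Debian it would be something like this (the version in the path is a guess):

```ini
; /etc/php/8.2/apache2/php.ini — the ini used by Apache's PHP, not the CLI one
memory_limit = 512M
```

After changing it, Apache needs a restart for the new limit to apply.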

I have no solution as of now. If you have one, please let me know! Thanks 🙂

 

I just started using my homelab to host some good new services, and I want to know the right approach to a Docker setup. What is the best distro for it? How do I deploy containers correctly? Basically I'm a real noob on this subject. Thank you.

-1
submitted 8 months ago* (last edited 8 months ago) by foremanguy92_@lemmy.ml to c/selfhosted@lemmy.world
 

As the title says: what is the best self-hostable file sharing service? I need encryption.

EDIT: To be more precise, I want an alternative to WeTransfer, not to Google Drive; something that gives me a link to download files.

 

Hey, I've set up a Proxmox server and am running some stuff on it. Basically, I want to know how to have alert notifications flow from one service to another. For example: I'm running NUT in the Proxmox shell, and I want an alert in TrueNAS (and in Nextcloud running on TrueNAS) telling the users currently using my cloud that the server will shut down soon. Thank you 😄
