foremanguy92_

joined 2 years ago
[–] foremanguy92_@lemmy.ml 1 points 22 hours ago

With a firewall you could do it pretty properly

[–] foremanguy92_@lemmy.ml 9 points 1 week ago

If it's purely static, without the need to easily generate new pages, simply use a web server.
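
For example with nginx, something like this is all it takes (a sketch; the domain and paths are placeholders):

# /etc/nginx/conf.d/static-site.conf -- serving a folder of static files
server {
    listen 80;
    server_name example.org;          # placeholder domain

    root /var/www/mysite;             # folder containing the built site
    index index.html;

    location / {
        try_files $uri $uri/ =404;    # plain files, nothing dynamic
    }
}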

[–] foremanguy92_@lemmy.ml 1 points 2 weeks ago

No problem, have a great day

[–] foremanguy92_@lemmy.ml 0 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Firstly, the best way would normally be to have a separate switch and router.

The router only needs 2 ports, WAN and LAN. Then get a great MANAGED switch for your LAN.

For your router, basically any old x86 PC loaded with OPNsense would be great.
The network card you buy depends on your internet speed.
(And try to find a network card with an Intel chip.)

Next, for the switch: definitely get a managed one (you won't regret it).
The number of ports depends on your needs. 8 ports could be just enough, or far too few.
That really depends.
The port speed again really depends. At the very least, don't get 100Mb switches. But the sky is the limit.
1G is plenty for a lot of people, but 2.5G could be good too. (In my opinion 10G is overkill for most people.)
The problem is that switch prices grow exponentially with speed. You can get a really good 1G switch for cheap; it's harder for 2.5G and nearly impossible for 10G.
And lastly, PoE or not PoE, that is the question. I would say a huge NO (except for specific use cases). If you've got 20 cameras, 38 motion sensors and 76 APs, then YES, a PoE switch is a good idea.
If you have a small number of PoE devices, simply buy a cheap unmanaged PoE switch.
If you only have one or two of them, just buy injectors.

If you have any questions concerning a brand or anything else, feel free to ask.

EDIT : formatting

[–] foremanguy92_@lemmy.ml 5 points 2 weeks ago

Netflix trend

[–] foremanguy92_@lemmy.ml 1 points 1 month ago

Don't think so, but I will try to check it

[–] foremanguy92_@lemmy.ml 1 points 1 month ago (2 children)

I want to protect my home services, so when you access my domain the traffic goes through the VPS and you only see its IP (a datacenter IP). But for my friends and family I don't need this protection, so they access my home directly over a VPN connection; and on top of that they use the VPS for their outgoing requests, which protects their privacy.

The simple solution (since my services are publicly available) would be to route all my friends' traffic through my home and then through the VPS. But I don't like this idea, since it would add a lot of latency and useless traffic given that the client is already going through my home...

So my question is: how can I route the local services directly back to the client, and send the rest of the traffic through the VPS?
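
Concretely, on the home server I picture something like this with Linux policy routing (a rough sketch; interface names and subnets are made up: wg0 = the friends' VPN, wg1 = the tunnel to the VPS, 10.0.0.0/24 = the VPN subnet, 192.168.1.0/24 = the LAN):

# traffic coming in from the friends' VPN subnet uses its own routing table
ip rule add from 10.0.0.0/24 table 100

# in that table: local destinations stay local, everything else exits via the VPS
ip route add 192.168.1.0/24 dev eth0 table 100   # LAN services answered directly
ip route add 10.0.0.0/24    dev wg0  table 100   # replies between VPN peers
ip route add default        dev wg1  table 100   # the rest goes out through the VPS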

[–] foremanguy92_@lemmy.ml 1 points 1 month ago

Hmm, maybe this is a good solution, gonna dig into it a bit

[–] foremanguy92_@lemmy.ml 1 points 1 month ago (4 children)

Nah, it's not what I want to do.

Requests from the client for local services go through the first VPN, are resolved in my home, and come back.

Requests from the client to outside services go through my home via the first VPN, are resolved there, then go out to the internet through the second VPN, and finally come back to the client.

[–] foremanguy92_@lemmy.ml 2 points 1 month ago (1 children)

This is not exactly what I want to do. Requests to my home services are protected by not going directly to my home but rather through the VPS; but since I know my friends, I can let them go directly to my home without ever going through the VPS (except for making the outgoing requests).

[–] foremanguy92_@lemmy.ml 1 points 1 month ago (6 children)

Edited the post with a diagram

[–] foremanguy92_@lemmy.ml 1 points 1 month ago

Right, I will try to make you a diagram, but I don't think Tailscale would be a good solution...

 

Hey fellow selfhosters! Hope you're doing well. Today I would like some help figuring out how I could make this project a reality. I would like to give friends and family VPN access to my homelab (probably with WireGuard).

I also have a VPS in the cloud, and I can VPN to it to anonymize outgoing connections.

So basically, when a friend requests a local service, I want the request to come to my home over their VPN connection and then come back directly from my home.

When a friend requests google[dot]com, I want the request to come to my house and then go through the VPS, so the request is made from the VPS and not from my home. Then the response comes back from Google to the VPS, to my home, to the client.

The principal issue I have is: how can I route my own services directly through my home without going out onto the regular WWW, but make all other requests go through the VPS and out to the WWW?
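
On the client side, the idea is just a plain full-tunnel WireGuard peer pointing at my home (a sketch; keys, IPs and the endpoint are placeholders):

# friend's wg0.conf (every value here is a placeholder)
[Interface]
PrivateKey = <client-private-key>
Address    = 10.0.0.2/24
DNS        = 10.0.0.1              # the PiHole at home

[Peer]
PublicKey  = <home-server-public-key>
Endpoint   = home.example.org:51820
AllowedIPs = 0.0.0.0/0             # full tunnel: everything goes to my home first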

If you need more explanations or info, feel free to ask.

PS : I also self-host PiHole, so all DNS requests should go through it (and maybe I could use it to route requests where I want, by resolving my domain to local IPs?)
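
That last idea would just be PiHole's Local DNS Records, i.e. a dnsmasq override; a sketch with placeholder names:

# /etc/dnsmasq.d/99-vpn-overrides.conf on the PiHole:
# VPN clients resolve my services to the home server's LAN IP,
# while public DNS keeps pointing the same names at the VPS.
address=/service.example.org/192.168.1.10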

diagram of the network

 

I found out about Usenet and the sort of "hype" around it, and I wondered about some things.

Maybe I'm a bit old school, but isn't this whole process against the spirit of piracy:

  • being generous and sharing
  • fighting censorship and DMCA takedowns
  • working in a decentralized way

Just wondering; feel free to give your opinion.

14
submitted 4 months ago* (last edited 4 months ago) by foremanguy92_@lemmy.ml to c/selfhosted@lemmy.world
 

Hello, I want to install SearXNG without using Docker on my home server, so I basically ran the install script from the official documentation. I already tried the automatic script for setting up Apache, and it basically worked, exposing the app at /searxng. But I would like to do something simpler: have Apache only expose localhost:80 (for example), so I can later point a reverse proxy at it. I'm just wondering how the configuration below works and how I could adapt it to my use case:

# -*- coding: utf-8; mode: apache -*-

LoadModule ssl_module           /mod_ssl.so
LoadModule headers_module       /mod_headers.so
LoadModule proxy_module         /mod_proxy.so
LoadModule proxy_uwsgi_module   /mod_proxy_uwsgi.so
# LoadModule setenvif_module      /mod_setenvif.so
#
# SetEnvIf Request_URI /searxng dontlog
# CustomLog /dev/null combined env=dontlog

<Location /searxng>

    # Apache 2.4 access control; the legacy 2.2 Order/Deny/Allow
    # directives are redundant here (and need mod_access_compat)
    Require all granted
    # to restrict access to local networks instead:
    # Require ip 127.0.0.0/8 192.168.0.0/16 fd00::/8 fe80::/10 ::1

    # add the trailing slash
    RedirectMatch  308 /searxng$ /searxng/

    ProxyPreserveHost On
    ProxyPass unix:/usr/local/searxng/run/socket|uwsgi://uwsgi-uds-searxng/

    # see flaskfix.py
    RequestHeader set X-Scheme %{REQUEST_SCHEME}s
    RequestHeader set X-Script-Name /searxng

    # see limiter.py
    RequestHeader set X-Real-IP %{REMOTE_ADDR}s
    RequestHeader append X-Forwarded-For %{REMOTE_ADDR}s

</Location>

# uWSGI serves the static files and in settings.yml we use::
#
#   ui:
#     static_use_hash: true
#
# Alias /searxng/static/ /usr/local/searxng/searxng-src/searx/static/

Any help here? Thank you very much
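
To be clear, this is roughly what I'm trying to end up with (an untested sketch; it assumes the same proxy modules and uWSGI socket as in the snippet above, with SearXNG served at the root instead of /searxng):

# bind Apache to loopback only (e.g. in ports.conf):
Listen 127.0.0.1:80

# minimal vhost handing everything to SearXNG's uWSGI socket:
<VirtualHost 127.0.0.1:80>
    ProxyPreserveHost On
    ProxyPass / unix:/usr/local/searxng/run/socket|uwsgi://uwsgi-uds-searxng/

    RequestHeader set    X-Scheme        %{REQUEST_SCHEME}s
    RequestHeader set    X-Real-IP       %{REMOTE_ADDR}s
    RequestHeader append X-Forwarded-For %{REMOTE_ADDR}s
</VirtualHost>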

 

This is again a big win for the red team, at least for me. They developed a "fully open" 3B-parameter model family trained from scratch on AMD Instinct™ MI300X GPUs.

AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) [...]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B [...].

As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png), this model outperforms the other current "fully open" models, coming close to open-weight-only models.

A step further, thank you AMD.

PS : I'm not doing AMD propaganda, but I do thank them for helping and contributing to the open-source world.

2
submitted 9 months ago* (last edited 9 months ago) by foremanguy92_@lemmy.ml to c/selfhosted@lemmy.world
 

Hello! 😀
I want to share my thoughts on Docker and maybe discuss it!
A few months ago I started my homelab, and like any good homelabber I absolutely loved using Docker. Simple to deploy and everything. Sadly, these days my mind is changing... I recently switched to LXC containers to make backups easier, and the experience is pretty great; the only downside is that not every piece of software is available natively outside of Docker 🙃
But I also switched to have more control, as with Docker it can be difficult to set up things the devs didn't really plan for.
So here are my thoughts, and slowly I'm going to leave Docker for a more old-school way of hosting services. Don't get me wrong, Docker is awesome in some use cases, mainly because it's really portable and simple to deploy: no hundreds of dependencies, etc. And through this I think I really figured out where Docker is useful; it's not for every single homelab setup, and it's not for mine.

Maybe I'm doing something wrong, but I'll let you talk about it in the comments, thx.

-1
submitted 9 months ago* (last edited 9 months ago) by foremanguy92_@lemmy.ml to c/selfhosted@lemmy.world
 

Hello, I have had my Proxmox server set up for some weeks now. Recently I found that LXC containers could be useful, as they really separate all my services into different containers. So I figured I would move my Docker services from a VM into several LXC containers. I ran into some issues: the first is that a lot of projects run smoother in Docker and don't really have a "normal" way of being packaged... The second is related to the first: since they are not really well integrated into the OS, how do I handle updates?
So I wonder, how are people deploying their stuff in LXC containers on Proxmox?
Thanks for your help!

EDIT : Tried to install Docker on top of a Debian LXC, but the performance was absolutely terrible...

-1
submitted 10 months ago* (last edited 10 months ago) by foremanguy92_@lemmy.ml to c/selfhosted@lemmy.world
 

I just installed a completely new Drupal instance in a Debian VM inside Proxmox. Everything works as intended, but I cannot add content to it (it gives me a 500 error).

Apache logs show me that memory is exhausted. I searched online, found no real answer, and tried a lot of things in php.ini, .htaccess... At first the VM had 1 vCPU and 1GB of RAM; that didn't work, so I set the PHP memory limit to 1GB and gave the VM 8GB and 4 vCPUs. Still not working, it just "loads" the 500 longer.

Error in /var/log/apache2/error.log: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 262144 bytes) in Unknown on line 0. 134217728 bytes is exactly 128MB, PHP's default memory_limit, so it looks like my change to 1GB (plus the 8GB for the VM) never actually took effect...
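
One thing I still have to rule out (a sketch, assuming Debian's usual PHP layout): that the php.ini I edited is the one Apache's PHP actually loads:

php --ini                        # php.ini used by the CLI (often not Apache's)
ls /etc/php/*/apache2/php.ini    # the file mod_php reads on Debian/Ubuntu
sudo systemctl restart apache2   # reload PHP config after editing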

I have no solution as of now. If you have one, please let me know! Thanks 🙂

 

I just started using my homelab to host some good new services, and I want to know the right approach for a Docker setup: what is the best distro for it, and how do I deploy containers correctly? Basically I'm a real noob on this subject. Thank you

-1
submitted 10 months ago* (last edited 10 months ago) by foremanguy92_@lemmy.ml to c/selfhosted@lemmy.world
 

As the title says: what is the best file-sharing service that can be self-hosted? I need encryption.

EDIT : To be more precise, I want an alternative to WeTransfer, not Google Drive; something that gives you a link to download the files.

 

Hey, I've set up a Proxmox server and I'm running some stuff on it. Basically I want to know how to have alert notifications go from one service to another. For example: I'm running NUT in the Proxmox shell, and I want an alert in TrueNAS (and in Nextcloud running on TrueNAS) telling the users that are actually using my cloud that the server will shut down soon. Thank you 😄
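
Right now I picture hooking this on upsmon's NOTIFYCMD (NOTIFYCMD and NOTIFYFLAG are standard NUT directives; the TrueNAS hostname and the Nextcloud occ invocation below are assumptions about my setup, with occ notification:generate coming from the Notifications app):

# /etc/nut/upsmon.conf -- run a script on UPS events:
NOTIFYCMD /usr/local/bin/ups-alert.sh
NOTIFYFLAG ONBATT  SYSLOG+EXEC
NOTIFYFLAG LOWBATT SYSLOG+EXEC

#!/bin/sh
# /usr/local/bin/ups-alert.sh -- upsmon passes the message as $1;
# forward it to Nextcloud users (hostnames and user names are placeholders).
for user in alice bob; do
    ssh admin@truenas.lan "docker exec -u www-data nextcloud \
        php occ notification:generate '$user' \"UPS: $1\""
done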
