
Let's say I've got Nextcloud self-hosted in my basement and that it's accessible on the world wide web at nextcloud.kickassdomain.org. When someone puts in that URL, we'll have all the fun DNS lookups trying to find the IP address to get them to my router, my router forwards ports 80 and 443 to a machine running a reverse proxy, and the reverse proxy then sends the request to the machine-and-port that Nextcloud is listening on.
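
The reverse-proxy hop is doing something like this (an nginx-flavored sketch - the hostname is from my example, but the backend IP and port are made up, and the actual proxy could be anything):

```
server {
    listen 443 ssl;
    server_name nextcloud.kickassdomain.org;

    # cert directives omitted for brevity

    location / {
        # hand the request to the machine-and-port Nextcloud listens on
        # (IP and port here are example values)
        proxy_pass http://192.168.0.99:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```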

When I do this on my phone right next to the computer hosting Nextcloud, (I believe) what happens is that the data leaves and re-enters my home network, since my router sends the data out to the IP address it's looking for (which is the router itself). That would mean that instead of getting a couple hundred Mbps from the local wifi (or being etherneted in and getting even more), I'm limited by my ISP's upload speed of ~25 Mbps.

Maybe that just isn't the case and I've got nothing to worry about...

What I want my network to do is recognize that nothing has to leave the network at all and just use local speeds. What I tried before was a DNS rewrite in AdGuard Home such that anything going to my kickassdomain would instead go to the local IP address (so nextcloud.kickassdomain.org -> 192.168.0.99). This seemed to cause a lot of problems when I then left the house because, I assume, the DNS info was cached and my phone, out in the world, would try to connect to that local IP and fail.
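
For anyone trying the same split-DNS approach, that AdGuard Home rewrite is equivalent to this dnsmasq rule (hostname and LAN IP as in my example above):

```
# answer LAN queries for this hostname with the local address
# instead of the public one
address=/nextcloud.kickassdomain.org/192.168.0.99
```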

My final goal here is to upload/download from my self-hosted applications (like Nextcloud) without being limited by the relatively slow upload speed of the ISP.

Maybe the network already figures all this out, though - it does seem like my router should know its own IP and not bother sending things out into the world just for them to come back.

If it matters, my IP address is pretty stable, but more importantly it is unique to me (like every house in the neighborhood has their own IP).

Updates from testing: So everything does indeed just work without me needing to change how I already had it set up, presumably because the router did the hairpin NAT action folks are talking about here.
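
As I understand it, hairpin NAT (a.k.a. NAT reflection) just means the router applies its port-forward rule even to traffic that originates inside the LAN, and then masquerades it so replies flow back through the router instead of going host-to-host. In iptables terms it's roughly this sketch (203.0.113.7 standing in for the public IP, 192.168.0.99 for the server as in my example):

```
# apply the port-forward even when the public IP is hit from inside the LAN
# (203.0.113.7 is a placeholder public IP)
iptables -t nat -A PREROUTING -d 203.0.113.7 -p tcp --dport 443 \
  -j DNAT --to-destination 192.168.0.99

# masquerade hairpinned LAN-to-LAN traffic so replies return via the router
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -d 192.168.0.99 \
  -p tcp --dport 443 -j MASQUERADE
```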

I tested it by installing iperf3 on the server and then connecting from my phone (using the PingTools Network Utilities Android app - only found on Google Play, not on F-Droid). Here are the results (the underlying commands are below the list):

  1. Phone to local IP address (192.168.0.xxx) - ~700 Mbits/second
  2. Phone to speedtest.mykickassdomain.org while still on the wifi - ~700 Mbits/second
  3. Phone on cellular to speedtest.mykickassdomain.org - ~4 Mbits/second
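
The commands behind those numbers were roughly the following (PingTools plays the client role from the phone):

```
# on the server
iperf3 -s

# from a client, against the local address and the public hostname
iperf3 -c 192.168.0.xxx
iperf3 -c speedtest.mykickassdomain.org
```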
 

Yet another question about self-hosting email, but I haven't found an answer phrased in a way that matches my question.

I've got ~15 GB of old Gmail data that I've already downloaded, Google is on my ass about being "91% full", and we know I'm not about to pay them for storage (I'll sooner spend 100 hours trying to solve it myself before I pay them $3/month).

What I want is to have the same (or relatively close to the same) experience finding stuff in those old emails when they're stored on my hardware as I have when they're in Gmail. That is, I want a website and/or app where I can search for emails from so-and-so, in some date range, with keywords. I don't actually want to send any emails from this server or receive anything on it (maybe I'd want Gmail to forward to it or something, but probably I'd just do another archive batch every year).

What I've tried so far, which is sort of working: I've set up docker-mailserver on my box, and it's working and accessible. I can connect to it via Thunderbird or K-9 Mail. I also converted the big email download from Google, which was an .mbox file, into Maildir format using mb2md (apt install mb2md on Debian was nice). This gave me a directory with ~120k individual email files.
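
The conversion itself was a one-liner, something like this (the paths are made-up examples):

```
# convert the Takeout mbox into a Maildir tree of individual message files
# (source and destination paths are examples)
mb2md -s /home/me/takeout/All-mail.mbox -d /home/me/Maildir
```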

When I check this out in Thunderbird, I see all those emails, and they look like they have the right info. (As an aside: I actually only moved 1k emails into the directory that docker-mailserver has access to, just for testing, and Thunderbird sees exactly that 1k.) I can do some searching on those.

When I open it in K-9, by default it looks like it just pulls in 100 of them; I can pull in more or refresh, that sort of thing. I don't normally use K-9, so I may just be missing how the functionality there is supposed to work.

I also just tried connecting to the mail server with Nextcloud Mail, which works in the sense that it connects, but it (1) seems like it's struggling, and (2) puts 'today' as the date for all the emails rather than when they actually arrived. I don't really want to use Nextcloud Mail here anyway...

So I think my question here is now really about search and storage. In Thunderbird, I think the way it works (I don't normally use Thunderbird much either) is that it downloads all the messages locally and then searches them locally. K-9 appears to do the same, with the caveat that it doesn't look like it really wants to download 120k emails locally (even if I can make it).

What I think I want, though, is to have the search running on the server. I don't want to download 15 GB (and another 9 GB from Gmail soon enough) to each client; I want it all on the server, so I just put in my search, the server runs the query, and it gives me a response.
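
From what I can tell, this is exactly what IMAP's SEARCH command is for: the client sends the criteria, the server runs the query, and only the matching message numbers come back. You can sanity-check that a server does it using curl's IMAP support (the hostname and credentials here are placeholders):

```
# ask the server to run the search; no message bodies are downloaded
# (user, password, and hostname are placeholders)
curl --user 'me@kickassdomain.org:password' \
  'imaps://mail.kickassdomain.org/INBOX?FROM%20so-and-so%20SINCE%201-Jan-2015'
```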

docker-mailserver has a page for setting up full-text search with Xapian, where it builds all the indices and such. I tinkered with this and think I got it set up. This is another case where I'd want the search to run on the server rather than the client, since the server is (hopefully) optimized for some of this stuff.
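
For reference, the Dovecot side of that setup boils down to something like this (a sketch using the fts_xapian plugin's documented options; where the override file goes depends on your docker-mailserver version):

```
# enable the fts framework and the Xapian backend
mail_plugins = $mail_plugins fts fts_xapian

plugin {
  fts = xapian
  fts_xapian = partial=3 full=20   # example ngram sizes from the plugin docs
  fts_autoindex = yes              # index new mail as it arrives
}
```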

Should I be using a different mail server for what I want here? I've poked around at different ones and am more than open to switching to something that's a better fit for this.

For clients, should I be using Roundcube or something else? Will that actually help with this 'use the server to search' question? For mobile, is there any way to avoid downloading all the emails to the client?

Thanks for the help.