sugar_in_your_tea

joined 2 years ago

I'm surprised about the module lookup thing, since I assumed it was just syntax sugar for from ... import .... We use the from syntax almost everywhere, but I've been replacing huge import blocks with a module import (e.g. constants) just to clean up the imports a bit and reduce git conflicts.
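For what it's worth, the two forms aren't quite interchangeable even outside the import-machinery details: a from-import binds the name once, while attribute access through the module sees later rebindings. A minimal sketch (the mod module is fabricated in-process here just so the example is self-contained):

```python
import sys
import types

# Fabricate a stand-in module so the example runs on its own
# (in real code this would be an actual mod.py on disk).
mod = types.ModuleType("mod")
mod.value = 1
sys.modules["mod"] = mod

from mod import value  # copies the *current* binding into a local name

mod.value = 2          # later rebinding of the module attribute

print(mod.value)  # 2 -- attribute lookup sees the new binding
print(value)      # 1 -- the from-import snapshot is unaffected
```

This is why monkeypatching a module attribute only affects code that accesses it through the module, not code that did a from-import earlier.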

Looks like I'll need to keep this in mind until we upgrade to 3.13.

How about parents just do their job and make sure their kids aren't accessing stuff they shouldn't? I'm a parent, and I'm already doing that, I don't need the government to violate my privacy in order to be a decent parent...

Yeah, I'm liking it so far, but I'm still very much in the testing phase, I don't have any "real" data in it yet.

[–] sugar_in_your_tea@sh.itjust.works 0 points 1 day ago* (last edited 1 day ago)

“find your dream job”

Which is what I did: working in tech. Ever since I was a kid, I wanted to build things, but I also wanted to support a family. At first I wanted to be a carpenter, but the likelihood of making good money at that was small, so I learned to build websites and decided to make a career of it (I actually considered patent law, but realized SW patents don't build things, they prevent things from being built).

So yeah, I'm basically doing exactly what I want, and I've avoided working for companies I hate.

That said, I've been doing the same thing for many years now, so a change of pace would be welcome, but I still want to build things. Unfortunately, LLMs are trying to take over the part I like (actually building things) and replace it with designing things. I guess I could pivot to that, but just seeing something get built doesn't give me the same satisfaction as building it myself.

If I had enough to retire, I'd probably start an indie game studio, hire a lead designer, and work on the fun algorithms myself. So my main complaint is what I work on; I could probably be happier at a different company, but there's no perfect company, and I like my current team, so I'm not particularly interested in leaving.

[–] sugar_in_your_tea@sh.itjust.works 12 points 2 days ago (1 children)

Yeah, and it's presumptuous of them to access the WhatsApp account I don't have...

[–] sugar_in_your_tea@sh.itjust.works 1 points 2 days ago* (last edited 2 days ago) (1 children)

Right, and it depends on what "quite off target" means. Are we talking about greens becoming purples? Or dark greens becoming bright greens? If the image is still mostly recognizable, just with poor saturation or contrast or whatever, I think it's acceptable for older software.

[–] sugar_in_your_tea@sh.itjust.works 1 points 2 days ago (3 children)

it would work

And that's probably enough. I don't know enough about HDR to know whether it would look anything like the artist intended, but as long as it's close, it's fine if it's not optimal. Having things completely break would be far worse.

Isn't that what 802.1x is for? If you really want to lock down your network, there are options.

A certain amount of skepticism is healthy, but it's also quite common for people to go overboard and completely avoid a useful thing just because some rich idiot is pushing it. I've seen a lot of misinformation here on Lemmy about LLMs because people hate the environment it's in (layoffs in the name of replacing people with "AI"), but they completely ignore the merits the tech has (it's great at summarizing and providing decent results from vague queries). If used properly, LLMs can be quite useful, but people hyper-focus on the negatives, probably because they hate the marketing material and the exceptional cases the news is so good at shining a spotlight on.

I'm skeptical about LLMs' usefulness too, but I also find them helpful in some narrow use cases I have at work. They're not going to actually replace any of my coworkers anytime soon, but they do help me be a bit more productive, since they're yet another option for getting unstuck when I hit a wall.

Just because there's something bad about a technology doesn't make it useless. If something gets a ton of funding, there's probably some merit to it, so turn your skepticism into a healthy quest for truth, and maybe you'll figure out how to benefit from it.

For example, the hype around cryptocurrency makes it easy to knee-jerk reject the technology outright, because it looks like it's merely a tool to scam people out of their money. That's partially true, but it's also a tool that makes anonymous transactions feasible. Yes, there are scammers pushing worthless coins in pump-and-dump schemes, but there are also privacy-focused coins (Monero, Z-Cash, etc.) being used today to help fund activists operating under repressive regimes. They're also used by people doing illegal things, but hey, so is cash, and privacy coins are basically easier-to-use cash. We probably wouldn't have had them w/o Bitcoin, though they use very different technology under the hood to achieve their aims. Maybe they're not for you, but they do help people.

Instead of focusing on the bad of a new technology, more people should focus on the good, and then weigh for themselves whether the good is worth the bad. I think in many cases it is, but only if people are sufficiently informed about how to use it to their advantage.

Can’t search for something on the net anymore without being served f-tier LLM-produced garbage.

I don't see a material difference vs the f-tier human-produced garbage we had before. Garbage content will always exist, which is why it's important to learn how to filter it.

This is true of LLMs as well: they can and do produce garbage, but they can also be useful alternatives to existing tech. I don't use them exclusively, but as an alternative when traditional search or whatever isn't working, they're quite useful. They provide rough summaries of things that I can usually verify easily, and they produce a bunch of keywords that help refine my future searches. I use them a handful of times each week and spend more time using traditional search and reading full articles, but I do find LLMs to be a useful tool in my toolbox.

I'm also frustrated by the energy use, but it's one of those things that will get better over time as the LLM market matures from a gold rush into established businesses that need to actually make money. The same happens w/ pretty much every new thing in tech: there's a ton of waste until the product finds its legs, and then it becomes a lot more efficient.

VR is still cool and will probably always be cool, but I doubt it'll ever be mainstream. 3D was just awkward; they really wanted VR, but the tech wasn't there yet.

I own neither, yet I've been considering VR for a few years now, just waiting for more headsets to have proper Linux support before I get one.

Likewise, I'm not paying for LLMs, but I do use the ones my workplace provides. They're useful sometimes, and it's nice to have them as an option when I hit a wall or something. I think they're interesting and useful, but not nearly as powerful as the big corporations want you to think.

 

Current setup:

  • one giant docker compose file
  • Caddy TLS trunking
  • only exposed port is Caddy

I've been trying out podman, and I got a new service (seafile) running via podman generate kube so I can run it w/ podman kube play. My understanding is that the "podman way" is to use quadlets, meaning .container, .network, etc. files managed by systemd, so I tried podlet podman kube play to generate a systemd-compatible file, but it just spat out a .kube file.

Since I'm just starting out, it wouldn't be a ton of work to convert to separate unit files, or I can continue with the .kube file way. I'm just not sure which to do.

At the end of this process, here's what I'd like in the end:

  • Caddy is the only exposed port - could block w/ firewall, but it would be nice if they worked over a hidden network
  • each service works as its own unit, so I can reuse ports and whatnot - I may move services across devices eventually, and I'd rather not have to remember custom ports and instead use host names
  • automatically update images - shouldn't change the tag, just grab the latest from that tag

Is there a good reason to prefer .kube over .container et al, or vice versa? Which is the "preferred" way to do this? Both are documented on the same "quadlet" doc page, which just describes the acceptable formats. I don't think I want kubernetes anytime soon, so the only reason I went that way is that it looked similar to compose.yml and I saw a guide for it, but I'm willing to put in some work to port from that if needed (and the docs for the kube yaml file kinda suck). I just want a way to ship around a few files so moving a service to a new device is easy. I'll only really have like 3-4 devices (NAS, VPS, and maybe an RPi or two), and I currently only have one (NAS).
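For comparison, a .container quadlet for a service like this is just an ini-style systemd unit. A hypothetical sketch (image, paths, and network name are placeholders, not a tested setup):

```ini
# Hypothetical: ~/.config/containers/systemd/seafile.container
[Unit]
Description=Seafile

[Container]
Image=docker.io/seafileltd/seafile-mc:latest
ContainerName=seafile
# Join an internal podman network (defined in a sibling .network quadlet)
# so only the reverse proxy needs a published port.
Network=internal.network
Volume=%h/containers/seafile:/shared:Z
# Opt this container in to `podman auto-update` pulls of the same tag.
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```

systemd generates a seafile.service from this on daemon-reload, and AutoUpdate=registry paired with the podman-auto-update timer would cover the "grab the latest from that tag" requirement above.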

Also, is there a customary place to stick stuff like config files? I'm currently using my user's home directory, but that's not great long-term. I'll rarely need to touch these, so I guess I could stick them on my NAS mount (currently /srv/nas/) next to the data (/srv/nas//). But if there's a standard place to stick this, I'd prefer to do that.

Anyway, just looking for an opinionated workflow to follow here. I could keep going with the kube yaml file route, or I could switch to the .container route; I don't mind either way, since I'm still early in the process. I'm currently thinking of porting to the .container method to try it out, but I don't know if that's the "right" way or if ".kube" with a yaml config is the "right" way.

 

Apparently US bandwidth was reduced to 1TB for their base plan, though they have 20TB for the same plan in Europe. I don't use much bandwidth right now, but I could need more in the future depending on how I do backups and whatnot.

So I'm shopping around in case I need to make a switch. Here's what I use it for:

  • VPN to get around CGNAT - so all traffic for my internal services goes through it
  • HAProxy - forwards traffic to my various services
  • small test servers - very low requirements, basically just STUN servers
  • low traffic blog

Hard requirements:

  • custom ISO, or at least openSUSE support
  • inexpensive - shooting for ~$5/month, I don't need much
  • decent bandwidth (bare minimum 50 Mbps, ideally 1 Gbps+), with high-ish caps - I won't use much data most of the time (a handful of GB), but occasionally might use 2-5 TB

Nice to have:

  • unmetered/generous bandwidth - would like to run a Tor relay
  • inexpensive storage - need to put my offsite backups somewhere
  • API - I'm a nerd and like automating things :)
  • location near me - I'm in the US, so anywhere in NA works

Not needed:

  • fast processors
  • lots of RAM
  • loose policies around torrenting and processing (no crypto or piracy here)
  • support features, recipes, etc - I can figure stuff out on my own

I'll probably stick with Hetzner for now because:

  • pricing is still fair (transfer is in line with competitors)
  • can probably move my server to Germany w/o major issues for more bandwidth
  • they hit all of the other requirements, nice to haves, and many unneeded features

Anyway, thoughts? The bandwidth change pisses me off, so let me know if there's a better alternative.
