thebestaquaman

joined 2 years ago
[–] thebestaquaman@lemmy.world 10 points 3 hours ago (1 children)

It's not about the pictures. I honestly think this is just an excuse to get rid of journalists, and a method for hammering home the point that "journalists bad. news fake." I'm not even sure he's actually seen the pictures in question.

With that said, I wouldn't put it past him to actually be so vain that it really is about the pictures. However, I think it's more likely that he would have done something like this anyway and just picked an excuse.

[–] thebestaquaman@lemmy.world 6 points 4 days ago

> Viking/Danish

Norse. And it's more similar to modern Icelandic than anything else, perhaps followed (at a distant second) by Norwegian.

[–] thebestaquaman@lemmy.world 2 points 5 days ago

You can make nr. 2 even more fine-grained: e.g. grant access to modify files only in a certain sub-tree, or to run only specific commands with only specific options.

A restrictive yet quite safe approach is to only permit e.g. `git add` and `git commit`, and to only allow changes to files under version control. That effectively prevents any irreversible damage, without requiring you to approve things manually all the time.
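To illustrate, a minimal sketch of the kind of wrapper I mean (the function name and allow-list are made up for the example, not any particular tool's API):

```python
import shlex
import subprocess

# Hypothetical allow-list: the agent may only stage and commit.
# Anything else (push, reset, rm, ...) is rejected outright.
ALLOWED = {
    ("git", "add"),
    ("git", "commit"),
}

def run_agent_command(command: str) -> str:
    """Run a command on the agent's behalf, but only if it's allow-listed."""
    argv = shlex.split(command)
    if tuple(argv[:2]) not in ALLOWED:
        raise PermissionError(f"command not allowed: {command}")
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return result.stdout
```

Since everything the agent does then ends up in a commit, the worst case is a bad commit, which git itself can always roll back.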

[–] thebestaquaman@lemmy.world 0 points 5 days ago

You're absolutely right. I mostly run a pretty simple local model though, so it's not like it's very expensive either.

[–] thebestaquaman@lemmy.world 2 points 5 days ago

Saying that it can serve the same purpose does not mean I think the two are equivalent in every aspect.

Just based on how you've responded so far, it seems like you're wilfully misinterpreting how I actually use an LLM for this purpose, especially with responses about LLMs driving people to suicide, or about offloading decision-making or the thought process itself to an LLM.

[–] thebestaquaman@lemmy.world 2 points 5 days ago (2 children)

It really seems like you're wilfully misinterpreting what I'm writing.

[–] thebestaquaman@lemmy.world 1 points 5 days ago (4 children)

That is correct. However, an LLM and a rubber duck have in common that they are inanimate objects I can use as targets when formulating my thoughts and ideas. The LLM can also respond to things like "what part of that was unclear?", which helps keep my thoughts flowing. To be clear: the point of asking an LLM "what part of that was unclear?" is NOT that it has a qualified answer, but that any reply at all prompts me to explain a part of the process more thoroughly.

This is a very well established process, whether you use an actual rubber duck or your dog, write a blog post or personal memo (I do the last quite often), or explain your problem to a friend who's not in the field at all. The point is to have some kind of process that keeps your thoughts flowing and has you touching on topics you might not think are crucial, thus helping you find a solution. The toddler that answers every explanation with "why?" can be ideal for this, and an LLM can emulate it quite well in a workplace environment.

[–] thebestaquaman@lemmy.world 3 points 5 days ago

Yes, absolutely, but there's a huge range between completely removing the box and having "just" a chatbot.

For example, at my company we've set up an agent that can work with certain design files that engineers typically edit through a rather complex GUI. We've built a bunch of endpoints that ensure the agent can only make valid changes to the files, and that it can never delete or modify anything without approval. This saves people a bunch of time, because the agent can do "batch jobs" that would take maybe 10 minutes by hand in about 10 seconds. It's not possible for this agent to mess up our database or anything like that, because all its interactions go through endpoints where we verify that files, access permissions, change logs, etc. are valid.
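Roughly the shape of it, as a toy sketch (all names here are hypothetical; the real endpoints are obviously more involved):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/design-files/<file_id>/changes", methods=["POST"])
def propose_change(file_id: str):
    """The agent can't touch files directly; it can only propose a
    change here, where it gets validated and queued for approval."""
    change = request.get_json()
    if not is_valid_change(file_id, change):   # schema/permission checks
        return jsonify(error="invalid change"), 400
    log_change(file_id, change)                # audit trail first
    queue_for_approval(file_id, change)        # never applied directly
    return jsonify(status="pending approval"), 202

# Made-up helpers standing in for the actual validation logic:
def is_valid_change(file_id, change) -> bool: ...
def log_change(file_id, change): ...
def queue_for_approval(file_id, change): ...
```

The key design choice is that deletion simply isn't an endpoint, so the agent can't do it no matter what it's prompted with.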

[–] thebestaquaman@lemmy.world 1 points 5 days ago (6 children)

I think you've misunderstood the purpose of a rubber duck: the point is that by formulating your problems and ideas, either out loud or in writing, you better activate your own problem-solving skills. This is a very well established method for reflecting on and solving problems when you're stuck, and it's far older than chatbots, because the point isn't the response you get but the process of formulating your own thoughts in the first place.

[–] thebestaquaman@lemmy.world 4 points 5 days ago (2 children)

Nah, you can run it in a box and limit its ability to interact with anything outside the box to certain white-listed endpoints. Depending on what you want to achieve, that can be more than safe enough.
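As a toy sketch of the idea (hostnames made up; in practice you'd also enforce this at the network layer, e.g. with container or firewall rules):

```python
import urllib.parse
import urllib.request

# Hypothetical allow-list: the only hosts the boxed agent may reach.
WHITELISTED_HOSTS = {"internal-api.example.com"}

def fetch(url: str) -> bytes:
    """Gate for all outbound traffic from the box: anything not
    on the white-list is refused before a connection is made."""
    host = urllib.parse.urlparse(url).hostname
    if host not in WHITELISTED_HOSTS:
        raise PermissionError(f"host not white-listed: {host}")
    with urllib.request.urlopen(url) as resp:
        return resp.read()
```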

[–] thebestaquaman@lemmy.world 1 points 5 days ago (10 children)

Meh, they work well enough if you treat them as a rubber duck that responds. I've had an actual rubber duck on my desk for some years, but lately I've found LLMs taking over its role.

I don't use them to actually generate code. I use them as a place where I can write down my thoughts. When the LLM responds, it has likely "misunderstood" some aspect of my idea, and by rephrasing the idea and explaining how it works, I help myself think through what I'm doing. Previously I would argue with the rubber duck, but I have to admit the LLM is actually slightly better for the same purpose.

[–] thebestaquaman@lemmy.world 17 points 5 days ago (10 children)

I mean, there's a good reason the first rules of firearm safety are to always treat a weapon as loaded, and to never direct the weapon at something you aren't prepared to destroy. The key point being that you never know when some freak accident can happen with a loose pin, bad ammo, a broken spring, or just a person tripping and shaking the gun a bit too hard.

A gun should never go off by itself. You still treat it as if it can, because in the real world freak accidents happen.

 

Normally, I use YouTube very little (I watch a couple of videos a month). However, I've been in bed with an injury for some time now, which has led me to watch quite a bit of YouTube. The thing is, I subscribe to a small handful of channels whose content I enjoy, but after a relatively short time I had watched pretty much all the new content from those channels.

Now, I would expect that the YouTube algorithm, which is supposedly designed by competent people to get me to stick around, would be able to suggest some decent content based on my subscriptions. However, for the past week I've opened YouTube only to find the same old videos being suggested over and over. Even worse: whenever there's something interesting-looking from a channel I don't recognise, it always turns out to be some shitty AI voice over generic animations or footage.

I know for a fact that thousands of hours of content are created on YouTube daily, but it genuinely feels like there are maybe five creators out there that are making anything worth watching. It's either that, or the YouTube algorithm is just complete crap at suggesting creators that are in any way similar to what I'm already subscribing to.

What's going on here? Why does it seem like there's no real content out there?

As a "funny" side note: What's with the "aggressively American" AI narrator-voice? I've heard it before, but thought it was some dude until I realised it's the same voice in a bunch of unrelated videos. It reminds me of the Discovery-channel "action-narrator"-voice from back in the day, but now it's showing up in all kinds of crap videos.

 

Inspired by the linked XKCD. Using 60% instead of 50% because that's an easy filter to apply on Rotten Tomatoes.

I'll go first: I think "Sherlock Holmes: A Game of Shadows" was awesome, from the plot to the characters, and especially how they used the screenplay to highlight the absurd ways Sherlock's head works.
