PNG has been updated for the first time in 22 years — new spec supports HDR and animation
(www.tomshardware.com)
As you can see, it's apparently irrelevant. If it's AI generated, it gets downvoted.
It's not irrelevant, it's that you don't actually know if it's true or not, so it's not a valuable contribution.
If you started your comment by saying "This is something I completely made up and may or may not be correct" and then posted the same thing, you should expect the same result.
I did check some of the references.
What I don't understand is why you would perceive this content as more trustworthy if I hadn't said it was AI.
Nobody should blindly trust some anonymous comment on a forum. I have to check what the AI blurts out, but you can just gobble up the comment of some stranger without exercising some critical thinking yourself?
As long as I'm transparent about the source, and especially since I did check some of it to be sure it's not some kind of hallucination...
There shouldn't be any difference in trust between some random comment on a social network and what some AI model thinks about a subject.
Also, it's not like this is some important topic with societal implications. It's just a technical question that I had (and still have) that doesn't warrant serious research. None of my work depends on that lib. So before my comment there was no information on compatibility. Now there is, but you have to look at it critically and decide whether you want to verify it or trust it.
That's why I regret this kind of stubborn downvoting where people just assume the worst instead of checking the actual data.
Sometimes I really wonder: am I the only one supposed to check my data? Isn't everybody here capable of verifying the AI output if they think it's worth the time and effort?
Basically, downvoting here is choosing "no information" rather than "information I have to verify because it's AI generated".
Edit: Also, I could have just summarized the AI output myself and not mentioned AI. What then? Would you have checked the accuracy of that data? Critical thinking is not something you use "sometimes" or only "on some comments".
So why "research" it with AI in the first place, if you don't care about the results and don't even think it's worth researching? This is legitimately absurd to read.
You realize that if we wanted to see an ~~AI~~ LLM response, we'd ask an ~~AI~~ LLM ourselves. What you're doing is akin to:
I understand that. It's the downvoting of a response clearly marked as LLM output that I'm questioning. Is it detrimental to the conversation here to have it? Is it better to share nothing rather than this LLM output?
Was this thread better without it?
Is complete ignorance of the PNG compatibility situation preferable to reading this AI output and pondering how true it is?
Now I think this conversation is just getting rude for no reason. The AI output was definitely not the "I'm Feeling Lucky" result of a Google search, and the fact that you chose that metaphor is in bad faith.
I'll spell out for you why I feel these two things are the same. You're welcome to disagree, but maybe it will give you some pointers as to why a lot of people are annoyed by this LLM copy-pasta.
First, a note for your ego: remember, we don't know you! You might have read through the whole thing the LLM generated, cross-referenced the sources it gave you, found some more, and established the veracity of what it told you. OR you might not have bothered and just copy-pasted it for the clout, I guess, which is what the majority of LLM users do when publishing these answers. If you need proof, ask a student who let an LLM give them the answer to an assignment instead of doing the work to explain how to solve the problem. 9/10 won't be able to, because they didn't bother to understand it - and in my experience, I'm being generous with that statistic.
There's also a lot of research hinting that the long-term effects of using the tool that way are deleterious to your critical thinking skills.
We don't know you, so as far as we're concerned, chances are you're part of that majority.
Then, given that you probably didn't put much effort into this text (as far as we can tell), there's an imbalance in the effort required for us to look through it critically. Why the fuck would we put in the effort if you most likely weren't keen on putting much in yourself? That's kinda disrespectful, and egotistical. It's also why I feel justified in assuming that ctrl+c/ctrl+v of an LLM output is tantamount to copy-pasting a list of links from Google. If you went through the trouble of validating the LLM output, how about just writing in your own words what you realized / learned / validated? You can even dictate with tons of FOSS software nowadays if you're unable to type!
So that's what I have for now; food for thought, I hope. I'm sure I could find more reasons, but I'm going to go do something fun instead.
Yes.
I, and I assume most people, go into the comments on Lemmy to interact with other people. If I wanted to fucking chit-chat with an LLM (why you'd want to do that, I can't fathom), I'd go do that. We all have access to LLMs if we wish to have bullshit with a veneer of eloquence spouted at us.
Are you really asking why advertising that "the following comment may be hallucinated" nets you more downvotes than just omitting that fact?
You're literally telling people "hey, this is a low effort comment" and acting flabbergasted that it gets you downvotes.