Lawyer caught using AI-generated false citations in court case penalised in Australian first
(www.theguardian.com)
To me, it's not surprising at all. It's trained to talk the way its training data talks, which is how people talk. Very loosely speaking, it's a "common sense" generator, and if you look at a site like Reddit discussing a topic you're experienced with, you soon realise how normal it is for people to be confidently incorrect.
And on that note, it's been seriously worrying to me how people seem to trust and anthropomorphise computers. It's been a problem since at least the '60s, but the advent of Artificial so-called Intelligence has revealed how dangerous it is.
Unless a bot is trained with curated data (like some medical imaging ones, for example), it shouldn't be believed. And even then it shouldn't be fully trusted.
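To make the "talks like its training data talks" point concrete, here's a toy sketch in Python. It is not how a real LLM works (those use neural networks over tokens, not word bigrams, and the corpus below is invented for illustration), but it shows the same principle at miniature scale: the model only learns what tends to follow what, with no notion of truth, so everything it emits sounds equally confident.

```python
import random
from collections import defaultdict

# Toy illustration (not a real LLM): a bigram model that, like any
# language model, only learns which words tend to follow which.
# It has no concept of truth, so its output is fluent but ungrounded.
corpus = (
    "the court found the citation was valid . "
    "the court found the citation was fabricated . "
    "the lawyer said the citation was valid ."
).split()

# Count word -> next-word transitions from the "training data".
transitions = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    transitions[word].append(nxt)

def generate(start, length=8):
    """Sample a continuation word by word, always picking *something*
    that looked plausible in training, regardless of correctness."""
    out = [start]
    for _ in range(length):
        choices = transitions.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))
# e.g. "the court found the citation was fabricated ." -- equally
# confident whether the continuation happens to be true or not
```

Scale that idea up by a few billion parameters and you get fluent, authoritative-sounding text whose correctness depends entirely on what the training data happened to contain.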
I agree for the most part.
"Surprising" is perhaps the wrong word. If you have even a vague understanding of how these work, then nothing is really surprising. However, a bot day to day and learning how to integrate it into your workflow, you get used to a certain level of quality, but occasionally (regularly?) run into something that doesn't meet your expectations.
I agree that the way some people are interacting with these LLMs is... odd. However, people engage in so many odd behaviors that I have to say: if they're not harming anyone, have at it.
Don't Gell-Mann yourself.
If it spits out plausible-looking but incorrect things that you notice with high frequency, how much do you not notice?
I'm just not using Gen AI that way.
Like, I don't ask it to provide me with technical details; rather, I provide the details and ask it to rephrase.