Lawyer caught using AI-generated false citations in court case penalised in Australian first
(www.theguardian.com)
I've been using an AI bot more and more in my own consultancy.
I don't use it to draft anything to be issued to a client or regulator, but for internal notes it can be helpful sometimes.
It's kind of surprising how often it just confidently spews out sentences which seem plausible but are completely incorrect.
Legislation seems to be an area in which it's particularly overconfident.
The penalties here seem harsh but submitting something to a court that is false and misleading is a big deal, even if it was inadvertent.
It is useful for Lorem Ipsum text and that is all.
Honestly, if you are submitting anything using AI-generated content, you may as well just put Lorem Ipsum text instead. That way you are not wasting ridiculous amounts of electricity and potable water.
https://en.wikipedia.org/wiki/Lorem_ipsum
The energy is indeed wasteful, but cooling water is recovered and reused. Do you know for a fact that much of it evaporates?
I've tried to explain in other comments but basically, I don't "submit" anything using AI generated content.
It's a helpful support which can sometimes save time.
These things were trained on the 21st-century internet. I wouldn't trust a single fcking thing they say. It's a Dunning-Kruger machine.
I don't trust anything they say.
To me, it's not surprising at all. It's trained to talk like its training data talks, how people talk. Very loosely speaking, it's a "common sense" generator, and if there are topics that you're experienced with and you look at a site like reddit talking about it, you soon realise how normal it is for people to be confidently incorrect.
And on that note, it's been seriously worrying to me how people seem to trust and anthropomorphise computers. It's been a problem since at least the '60s but the advent of Artificial so-called Intelligence has revealed how dangerous it is.
Unless a bot is trained with curated data (like some medical imaging ones, for example), it shouldn't be believed. And even then it shouldn't be fully trusted.
I agree for the most part.
"Surprising" is perhaps the wrong word. If you have even a vague understanding of how these work, then nothing is really surprising. However, a bot day to day and learning how to integrate it into your workflow, you get used to a certain level of quality, but occasionally (regularly?) run into something that doesn't meet your expectations.
I agree that the way that some people are interacting with these LLMs is... odd. However, people are engaging in so many odd behaviors I have to say if they're not harming anyone then have at it.
Don't Gell-Mann yourself.
If it spits out plausible-looking but incorrect things that you notice this frequently, how much is it spitting out that you don't notice?
I'm just not using Gen AI that way.
Like, I don't ask it to provide me with technical details; rather, I provide details and ask it to rephrase.
I don't think the penalties are too harsh at all. This person is supposed to be a trained professional. Their right to practice law is based on their skills and their knowledge. It's a high barrier that prevents most people from taking that job. And in this case, the person outsourced a key part of their job to an LLM and did not verify the result. Effectively they got someone (something) unqualified to do the job for them, and passed it off as their own work. So the high barrier which was meant to ensure high-quality work was breached. It makes sense to strip the person of their right to do that kind of work. (The suspension is temporary, which is fair too. But these kinds of breaches of trust and reliability are not something people should just accept.)
I'd say that of any highly paid profession, the legal trade is the most likely to be decimated by 'AI' and LLMs.
If you fed every case and ruling, law and statute into an LLM, removed its 'yes, and'-ing, and had someone who knew how to write an effective prompt, you could answer many, many legal questions and save a lot of time searching for precedents.
Obviously someone will have to accept liability if poor advice is given, but I can see some hotshot lawyer taking the risk if it means he can handle thousands of cases at once with a few 'prompt engineers'.
That's not my experience, with the current state of the tech anyway.
There are models on Hugging Face tuned exactly as you describe.
Sure, at some point in the future they will be helpful to draft legal submissions, but that's not really what lawyers "do", in the same way accountants don't spend their days doing math.
If they take the risk then they should bear the consequences should it fail. Disbar them.
Totally. However, I think so long as you manually verify, it should really be fine. It takes ages to find a case that establishes precedent, but confirming the details of the case once you've found it is relatively quick.
If you skip the manual verification, yeah you deserve what you get.
The lawyer is still allowed to practice but only as an employee, under supervision and checked quarterly.
You seem to have a very high expectation of professionalism.
Trained professionals who are supposed to have skills and knowledge and experience make mistakes all the time, sometimes through ineptitude, but also through laziness.
Whether it's doctors, lawyers, accountants, architects, any profession really. In many or most cases the client doesn't suffer real harm, or if they do, the costs of litigation would be higher than the compensation.
A referral to a professional body is usually not very serious. Doctors are referred to the board for malpractice all the time.
I'm a tax consultant. We're regulated by the Tax Practitioners Board. I find it extraordinarily unlikely that they would take someone's license over a submission to the ATO that relied on false cases. Basically they only take action in cases where there is little or no doubt that the practitioner sought to intentionally mislead the tax office.
So, you personally might not think the penalties are harsh, but I can assure you that restricting someone's license to practice, whatever their profession, is a measure usually reserved for fraudulent behavior.