Lawyer caught using AI-generated false citations in court case penalised in Australian first
(www.theguardian.com)
I've been using an AI bot more and more in my own consultancy.
I don't use it to draft anything to be issued to a client or regulator, but for internal notes it can be helpful sometimes.
It's kind of surprising how often it confidently spews out sentences that seem plausible but are completely incorrect. Legislation seems to be an area in which it's particularly overconfident.
The penalties here seem harsh but submitting something to a court that is false and misleading is a big deal, even if it was inadvertent.
I don't think the penalties are too harsh at all. This person is supposed to be a trained professional. Their right to practise law is based on their skills and their knowledge. It's a high barrier that prevents most people from taking that job. And in this case, the person outsourced a key part of their job to an LLM and did not verify the result. Effectively they got someone (something) unqualified to do the job for them, and passed it off as their own work. So the high barrier which was meant to ensure high-quality work was breached. It makes sense to strip the person of their right to do that kind of work. (The suspension is temporary, which is fair too. But these kinds of breaches of trust and reliability are not something people should just accept.)
I'd say of any highly paid profession, the legal trade is the most likely to be decimated by 'AI' and LLMs.
If you fed every case and ruling, law and statute into an LLM, removed its "yes, and"-ing, and had someone who knew how to write an effective prompt, you could answer many, many legal questions and save a lot of time searching for precedent.
Obviously someone will have to accept liability if poor advice is given, but I can see some hotshot lawyer taking the risk if it means he can handle thousands of cases at once with a few 'prompt engineers'.
That's not my experience, with the current state of the tech anyway.
There are models on hugging face tuned exactly as you describe.
Sure, at some point in the future they will be helpful for drafting legal submissions, but that's not really what lawyers "do", in the same way accountants don't spend their days doing math.
If they take the risk, then they should suffer the consequences should it fail. Disbar them.
Totally. However, I think that so long as you manually verify, it should really be fine. It takes ages to find a case that establishes precedent, but confirming the details of the case once you've found it is relatively quick.
If you skip the manual verification, yeah you deserve what you get.