My lawyer.
Who used Claude, according to the invoice, and came to court with a completely false understanding of the record.
Chewed out by the judge and everything.
The article didn’t include any numbers on the general lawyer population to compare against the results.
For example, they make the claim that solo and small firms are the most likely to file AI hallucinations because they represent 50% and 40% of the instances of legal briefs with hallucinations. However, without the base rate of briefs filed by solo or small firms compared to larger firms, we don’t know whether that is unusual. If 50% of all briefs were filed by solo firms and 40% were filed by small firms, then the data would actually be showing that firm size doesn’t matter.
I dunno. By revenue, legal work in the US is super top-heavy - it's like 50%+ done by the top 50 firms alone. That won't map 1:1 to briefs, but I would be pretty shocked if large firms only filed 10% of briefs.
> They make the claim that solo and small firms are the most likely to file AI hallucinations because they represent 50% and 40% of the instances of legal briefs with hallucinations.
Show me where she makes any predictive claims about likelihood. The analysis finds that, of the cases where AI was used, 90% involve either solo practices or small firms. It does not conclude that there's a 90% chance a given filing using AI came from a solo or small firm, or make any other assertions about rates.
> This analysis confirms what many lawyers and judges may have suspected: that the archetype of misplaced reliance on AI in drafting court filings is a small or solo law practice using ChatGPT in a plaintiff’s-side representation.
That is an assertion which requires numbers. If 98% of firms submitting legal briefs are solo or small firms, then the above statement is untrue: relative to that base rate, the archetype would actually be the not-small/solo firms.
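To make the base-rate point concrete, here's a minimal sketch (Python). The 50%/40%/10% hallucination shares are the article's figures; both base-rate scenarios are hypothetical numbers made up for illustration, since the article provides none:

    # Over/under-representation of each firm size among hallucination cases.
    # Hallucination shares are the article's figures; both base-rate
    # scenarios below are hypothetical, since the article gives none.
    hallucination_share = {"solo": 0.50, "small": 0.40, "large": 0.10}

    # Scenario A: solo/small firms file most briefs anyway.
    base_rate_a = {"solo": 0.50, "small": 0.40, "large": 0.10}
    # Scenario B: large firms file most briefs.
    base_rate_b = {"solo": 0.15, "small": 0.15, "large": 0.70}

    for name, base_rate in (("A", base_rate_a), ("B", base_rate_b)):
        print(f"Scenario {name}:")
        for firm, share in hallucination_share.items():
            # ratio > 1 means overrepresented among hallucination cases
            print(f"  {firm}: {share / base_rate[firm]:.2f}x their share of briefs")

Under scenario A every ratio is 1.00, so firm size tells you nothing; under scenario B, solo firms are 3.33x overrepresented. The article's numbers only support its archetype claim in a world like scenario B, and it never tells us which world we're in.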
The background data is also suspect.
https://www.damiencharlotin.com/hallucinations/
"all documents where the use of AI, whether established or merely alleged, is addressed in more than a passing reference by the court or tribunal. Notably, this does not cover mere allegations of hallucinations, but only cases where the court or tribunal has explicitly found (or implied) that the AI produced hallucinated content."
While a good idea, the database is predicated on news reports and user-submitted examples; there may be some scraping of court documents as well, though it's not entirely clear.
Regardless, the data is only predictive of people getting called out for such nonsense. A larger firm may have such issues with a lawyer, apologize to the court, lean on its clout there, and replace the offending lawyer with another employee.
This is something a smaller firm cannot do if it is a firm of one person.
It's a nice writeup, and interesting. But it does draw unverified assertions and conclusions.
Answer: Solo practitioners and pro-se litigants.
> Pro-se litigants
I wonder when we're going to see an AI-powered "Online Court Case Wizard" that lets you do lawsuits like installing Windows software.
Her balance was $47,892 when she woke up. By lunch it was $31,019. Her defense AI had done what it could. Morning yawn: emotional labor, damages pain and suffering. Her glance at the barista: rude, damages pain and suffering. Failure to smile at three separate pedestrians. All detected and filed by people's wearables and AI lawyers, arbitrated automatically.
The old courthouse had been converted to server rooms six months ago. The last human lawyer was just telling her so. Then his wearable pinged (unsolicited legal advice, possible tort) and he walked away mid-sentence. That afternoon, she glimpsed her neighbor watering his garden. They hadn't made eye contact since July. The liability was too great.
By evening she was up to $34k. Someone, somewhere, had caused her pain and suffering. She sat on her porch not looking at anything in particular. Her wearable chimed every few seconds.
Why wouldn't some of the smarter members of the fine, upstanding population of this fictional world have their assets held in trust by automated holding companies while their flesh-and-blood persons declare bankruptcy?
Very good. I'd read the whole thing if you wrote it.
Been a while since I read some bad scifi - thanks!
AI lawyers, wielded by plaintiffs, are a godsend to defendants.
I’ve seen tens of startups, particularly in SF, that would routinely settle employment disputes, and that now get complaints fucked to a tee by hallucinations which single-handedly tank the plaintiffs’ otherwise-winnable cases. (Crazier still, these were traditionally contingency cases.)