Do not use ChatGPT (or any other AI) for legal research


Future Sensors

[Two screenshots attached.]
Source: https://www.courtlistener.com/docket/63107798/mata-v-avianca-inc/ | Mike Dunford on Twitter.
 
•••
Lol.

Yeah, it will totally make stuff up. It seems to come up with stuff that “looks right,” but if you dig a little bit, it is sanctions bait.

It can be a useful tool, like anything else, but blindly relying on it is folly.
 
•••
AI really is the end of the world. Not necessarily by becoming self-aware itself, but maybe because it'll give fuckwits a pathway into jobs of high importance (like private schools do for governments).
 
•••
"ideally" AI should be able to review all cases and flag errors, lawyers, judges, etc.

unfortunately, currently it may give me false quotes, sources, mathematical mistakes, etc.
 
•••
Do Language Models Know When They’re Hallucinating References?

(Research, submitted 29 May 2023)

Language models (LMs) famously hallucinate, meaning that they fabricate strings of plausible but unfounded text. As LMs become more accurate, their fabrications become more believable and therefore more problematic. A primary example is “hallucinated references” to non-existent articles with titles readily fabricated by the LM. For instance, a real New York Times article entitled “When A.I. Chatbots Hallucinate” leads with a ChatGPT-fabricated New York Times article titled “Machines Will Be Capable of Learning, Solving Problems, Scientists Predict”. In this work, we study the problem of hallucinated references.

Read more:

https://arxiv.org/abs/2305.18248

Link to PDF paper:

https://arxiv.org/pdf/2305.18248.pdf
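
This is not the paper's method, just a rough sketch of the kind of check being discussed in this thread: ask a bibliographic index whether a cited title actually exists before trusting it. The Crossref REST API endpoint and its query.bibliographic parameter below are real; the function name and the 0.9 title-similarity cutoff are arbitrary choices of mine.

# Rough sketch: check whether an LLM-cited title matches anything indexed by Crossref.
# The 0.9 similarity cutoff is an arbitrary assumption, not a recommended value.
import requests
from difflib import SequenceMatcher

def reference_seems_real(cited_title: str, min_similarity: float = 0.9) -> bool:
    """Return True if Crossref indexes a work whose title closely matches the citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": cited_title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        for title in item.get("title", []):
            if SequenceMatcher(None, cited_title.lower(), title.lower()).ratio() >= min_similarity:
                return True
    return False

# Example call; whether it returns True depends on what Crossref happens to index.
print(reference_seems_real("Do Language Models Know When They're Hallucinating References?"))

Titles that an LLM simply invented will generally find no close match, which is exactly the failure mode that got the Avianca lawyers in trouble.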


•••