AI Legal Hallucinations and Unsolved Erdös Problems
We’ve just seen the first example (or maybe “one of the first”) of a non-legal AI hallucination in the wild: ChatGPT claimed to have solved ten previously unsolved Erdös math problems. Heretofore, AI hallucinations were confined to “invented” legal cases. But is this really a problem? Lawyering is an easy target for AI hallucinations because cases are the bricks of any legal structure, and there is simply so much law, with new cases coming out every day, that a missing brick can easily be replaced with an invented one. We all know 𝑴𝒊𝒓𝒂𝒏𝒅𝒂, right? Do you know the full cite offhand? Probably not. Is there any State that hasn’t followed 𝑴𝒊𝒓𝒂𝒏𝒅𝒂? No. So there’s really no harm in making one up, right? That’s the trap AI has fallen into, because it’s not really a reasoning model.
Has there been a single instance where AI invented a case that purported to overturn 𝑴𝒊𝒓𝒂𝒏𝒅𝒂? No. Or 𝑩𝒓𝒐𝒘𝒏 𝒗. 𝑩𝒐𝒂𝒓𝒅 𝒐𝒇 𝑬𝒅𝒖𝒄𝒂𝒕𝒊𝒐𝒏? No. Instead, AI hallucinations (for the most part) merely “support” well-founded legal principles. It’s a safe bet that Wyoming has its own version of the Statute of Frauds, or simply follows the common law. A hallucinated case that says so does no real harm. If a common law jurisdiction were to throw out the Statute of Frauds, that would be astonishing, and a hallucinated case that so held would be easily discovered.
ChatGPT just claimed it had solved 10 previously unsolved Erdös math problems. It turns out the problems had already been solved by others; the Erdös math problems web page simply hadn’t been updated with the solutions. ChatGPT merely found the papers in which mathematicians had solved the problems and presented their work as its own.
There really isn’t much danger in fake legal citations. Let’s not clutch our pearls in horror just yet. Lots of legal work doesn’t involve case citations anyway. Local citations are useless in international business and arbitrations. It could be well argued that outside criminal law and personal injury, they don’t really matter. And if hallucinations cause a decrease in published opinions, that is probably a good thing, especially when a subscription to the Federal Reporter comes in at an eye-watering $30k per annum.