A founder recently came to me to review a co-founder agreement he had generated with ChatGPT. The document was a “term sheet,” with plausible-looking terms on confidentiality, equity compensation, IP ownership, etc. Was the AI bot taking my legal job already, just as Goldman Sachs said it would in a recent report?
A closer read of the document revealed several common problems with using AI large language models for legal work (at least right now). First, they produce text that looks sensible and coherent but includes potentially dangerous hallucinations. For example, the AI had proposed an unorthodox vesting schedule for the co-founder’s equity involving two overlapping and inconsistent timelines. Had an equity grant been based on this, the confusion and uncertainty would very likely have led to a dispute once a human lawyer laid eyes on it. A lawyer’s (sometimes frustrating, I know) insistence on accurate, precise language can mean the difference between a run-of-the-mill stock issuance and costly litigation years down the road.
Second, the bot decided to generate its own text rather than use a well-worn, universally understood template of the kind drafted by the world’s top law firms. As a result, the agreement was about 90% accurate, but it still left enough gaps to make me concerned that it would not achieve a “meeting of the minds” (without which there is no enforceable contract). To a lawyer, ChatGPT’s attempts at legalese read like a contract drafted by one of our really smart but not legally trained friends. It mostly works, and might even convey the feel of contract language, but it breaks down when we dive into the details. Any professional reading an outsider’s attempt at their industry’s jargon would have the same reaction.
Third, the AI’s output revealed a total lack of judgment. While it was capable of generating language that mostly achieved the user’s requested outcome, it didn’t stop to ask whether this type of contract was a good idea in the first place (it wasn’t) or whether the terms were fair and calculated to achieve the founder’s business goals (they weren’t).
Chatbots are currently easy-going, professional-sounding assistants, but they typically answer only the exact question they are asked, instead of bringing experience to bear on how to think about a problem. As far as we know, they do not even have subjective experience to draw on. Entering into a signed, binding term sheet to establish a co-founder relationship is an unorthodox and risky move because it completely circumvents the usual norms of corporate governance, employment law, and intellectual property ownership that typically go into these arrangements. I would have told my client that, but the chatbot did not, because it wasn’t asked.
I’m not some kind of AI Luddite. I’ve been excited to use the various legal tools that have been coming out recently. I’m even keen to use smart automation for some of the more rote aspects of my practice, freeing up my time for the more interesting legal questions my clients pose. This founder was wise to have the AI’s work checked by a human knowledgeable in the area of interest. That will remain a best practice for a long time to come.