Linguistic analysis of great vs. average legal writing

Post by Stephen Horowitz, Professor of Legal English

As a legal English professor in Georgetown’s 2-Year LLM Program and a “law & language” nerd, I greatly appreciate any efforts to analyze and identify concrete elements of legal writing that help distinguish the quality or genre of the writing. (See, e.g., some of my experiments with ChatGPT and legal writing as a grammar fixer and on cohesion.)

For my international LLM students, this kind of concrete information is far more helpful than suggestions like “Include more analysis” or “Be more concise.” Learning, for example, that dependent clauses can help one’s legal analysis come across more cohesively gives students something objectively defined to hang a hat on. And even if a student doesn’t yet know what a dependent clause is, or how to recognize or construct one, it’s something that is very learnable.

I was therefore very excited to come across the Twitter thread below from UNLV Legal Writing Professor and founder Joe Regalia today, explaining that he is in the midst of a linguistic comparison of 10 court opinions written by Supreme Court Justice Elena Kagan (renowned for the quality of her writing) and 50 legal briefs written by various lawyers. In his tweets he shares some early observations from the analysis. Alas, it’s just a teaser, and it seems we’ll have to wait for the full report or article to come out to see the rest. If it were published as a book, I would be right there in line outside Barnes & Noble with all the other lawyer linguists waiting to get one of the first copies, à la Harry Potter mania.