LE Journal: ChatGPT conversations with Ukrainian legal English faculty

Post by Stephen Horowitz, Professor of Legal English. LE Journal is an opportunity to share some of the current goings-on of Georgetown Law’s Legal English Faculty.

Professors Julie Lake and Heather Weger met via Zoom this week with four Ukrainian philologists (i.e., historical linguists) to discuss pedagogical approaches and the use of ChatGPT in Legal English classrooms.

The Ukrainian legal English faculty members were Anetta Artsysshevska, Nataliya Hrynya, and Lily Kuznetsova from Lviv Ivan Franko National University, and Olena Zhyhadlo from Taras Shevchenko National University of Kyiv Law School.

We enjoyed a fruitful conversation about our collective successes and challenges, and we plan to meet again in February to continue the conversation.

The relationship evolved from a larger effort initiated by the Global Legal Skills community back in 2022 to foster connections and collaboration among law and legal English faculty in Ukraine.

Georgetown Law on AI in the TESOL Applied Linguistics Newsletter (Sept 2023 issue)

The September 2023 issue of AL Forum (the applied linguistics newsletter for TESOL) is out, thanks in part to contributions from several members of the Georgetown Law faculty. And the theme is language, teaching, and generative AI.

Co-edited by Georgetown Legal English Professor Heather Weger and George Washington Teaching Associate Professor Natalia Dolgova, it leads with a letter from the editors and includes two articles by Georgetown Law colleagues.

AL Forum: The Newsletter of the Applied Linguistics Interest Section

1. Letter from the Editors, by Prof. Natalia Dolgova and Prof. Heather Weger

“In this issue, you will find leadership updates summarizing past and future ALIS activities, and this issue provides a closer look at how educators are grappling with the impact of generative artificial intelligence (AI) technology, such as ChatGPT, on our field.”

2. Generation GPT: Nurturing Responsible AI Usage in College Curricula, by D. Ellery Boatwright, Instructional Technologist

“This article offers some resources and advice to consider as you make informed decisions about integrating AI into your workflows and prepare students to inhabit an AI-rich world.”

3. ChatGPT Experiment: Creating an Online Vocabulary Course for Legal English, by Prof. Stephen Horowitz

“A detailed case study of his experimental use of ChatGPT to design teaching materials for a vocabulary course, [including] examples of how to prompt ChatGPT to generate materials (e.g., quizzes and practice activities.)”

For more articles and issues of the AL Forum, click here.

New idea: ChatGPT and LLM interview language prep

Post by Stephen Horowitz, Professor of Legal English

“I give too much unnecessary detail when I talk about the work I’ve done.”

That was the complaint and concern of an LLM graduate who recently sought my legal English advice. He's in the process of applying for jobs, but some native-English-speaking friends had told him that he doesn't come across terribly well when he describes his past work experience.

How do you help a non-native-English-speaking LLM graduate in this situation? Is it a language issue? Or some other type of issue?

It’s probably at least in part a language issue, although when I spoke with this student, his spoken English was fairly strong. But it also may be a cultural discourse issue and perhaps even a function of the student’s own personal style as well.

Regardless, the challenge is the same: The student needs to figure out a strategy to absorb and internalize the language and discourse style of the professional community he’s trying to join. I like to think of it as learning to code switch.

My suggested solution to the student: Find examples of the kind of language you want to be able to produce. In this case, the student was looking for jobs in the field of tax law, so that meant finding recorded examples of people talking about their work as tax lawyers, ideally with a transcript or subtitles. YouTube is the obvious place to look, and videos do exist of tax lawyers talking about their work. But those videos tend to involve giving advice and explaining the job to people who know less about tax law than the speaker does, which is a bit different from an interview situation, where you're likely talking to people who have more knowledge and expertise than you do. Interviewers also typically occupy a higher relative status than the interviewee in the context of the interview, so the interviewee's ideal language likely also factors in register, i.e., level of formality.

Continue reading “New idea: ChatGPT and LLM interview language prep”

Analyzing ChatGPT’s use of cohesive devices to help international LLM students improve cohesion in their writing

Post by Stephen Horowitz, Professor of Legal English, with special thanks to Prof. Julie Lake and Prof. Heather Weger for their time and linguistics expertise in analyzing and discussing the texts and editing this post, which is far more cohesive because of them.

Hot on the heels of my recent experiment to try to better understand ChatGPT's view of improving language and grammar (see "Analyzing ChatGPT's ability as a grammar fixer," 2/23/23), I was grading my students' timed midterm exams and noticed a paragraph in one student's answer that had all the right pieces but decidedly lacked cohesion.

“…the biggest takeaway of all for this experiment…ChatGPT can help instructors identify the kinds of cohesive devices that a student is not using and then support the student in learning to use and become more comfortable and familiar with those cohesive devices.”

So I mentioned this in a comment and gave some suggestions as to how to improve the cohesion in the paragraph. And then I had a thought:

Maybe ChatGPT can help!

Continue reading “Analyzing ChatGPT’s use of cohesive devices to help international LLM students improve cohesion in their writing”

Article: Using ChatGPT in legal writing

Post by Stephen Horowitz, Professor of Legal English

Prof. Joe Regalia

Joe Regalia, Associate Professor of Law at the William S. Boyd School of Law at the University of Nevada, Las Vegas, recently shared on the Legal Writing Institute listserv that he's been working on a chapter of a book he will be publishing with Aspen Publishing later this year, tentatively called Leveling Up Your Legal Writing: Techniques and Technology to Create Amazing Documents.

The chapter, still in draft form, aims to be a practical guide for using ChatGPT in legal writing and can be viewed for free in PDF format at this link:

https://ssrn.com/abstract=4371460

Joe noted that even though he hasn’t even added sources yet to the draft chapter, he wanted to share in case any of the ideas are helpful to folks exploring using GPT in their classes.

Continue reading “Article: Using ChatGPT in legal writing”

Analyzing ChatGPT’s ability as a grammar fixer


Post by Stephen Horowitz, Professor of Legal English

I recently tried a simple yet potentially helpful ChatGPT activity with my LLM students to (a) build individual grammar awareness, (b) build a better understanding of the benefits and limitations of using ChatGPT to fix one’s grammar, and (c) gain a better understanding of what happens grammatically when ChatGPT is asked to fix grammar.

The Process:

  1. As part of the Legal English II course (which teaches US case reading and analysis via a series of Supreme Court decisions about Miranda rights to students in Georgetown Law’s 2-Year LLM program), my students were required to write an essentially IRAC-style answer in response to a fact pattern under timed conditions.
  2. Afterwards, as an assignment, I asked my students to input their essay into ChatGPT with the instruction to “Please fix any language issues in this essay.”
  3. Students then had to compare the two versions of their essay and write a short analysis or commentary on what they noticed, what ChatGPT did/didn't do well, how they felt about it, etc. I told students either to put the two versions in a table so they could compare the language side by side, or to use the redline/track changes function to show the differences.
  4. I next reviewed the students’ submissions myself. And I then invited two Georgetown Legal English colleagues with PhDs in applied linguistics–Prof. Julie Lake and Prof. Heather Weger–to review the student submissions and then have a group discussion about what we noticed.
  5. Upon additional consideration (and inspired by a suggestion from Jack Kenigsberg, a former Hunter MA TESOL classmate), I took one paragraph from one student's essay and fed it into ChatGPT with the instruction: “Fix any grammar errors in the quoted text. For each change you make, explain why you made the change.” After it provided its answer, I clicked “Regenerate response” to see what (if anything) came out differently the second time.
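Incidentally, the side-by-side comparison in step 3 can also be scripted. Here is a minimal sketch using Python's standard difflib module to produce a redline-style, word-level diff between an original sentence and a revised one (both sentences are invented for illustration, not actual student writing):

```python
import difflib

# Invented example sentences -- not actual student writing.
original = "The suspect did not received a warning, therefore the confession are inadmissible."
revised = "The suspect did not receive a warning; therefore, the confession is inadmissible."

# difflib.ndiff compares the two word lists and marks each word:
# "- " = word removed from the original, "+ " = word added in the revision.
# The "? " hint lines are filtered out to keep only the actual changes.
diff = [d for d in difflib.ndiff(original.split(), revised.split())
        if d.startswith(("-", "+"))]

for change in diff:
    print(change)
```

For whole essays, difflib.HtmlDiff can generate the same comparison as a side-by-side HTML table, which maps onto the two-column table option the students were given.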

The Takeaways:

The main takeaways for my students, my colleagues, and me were:

Continue reading “Analyzing ChatGPT’s ability as a grammar fixer”

Tip of the day: How to use ChatGPT to figure out if a text was written by ChatGPT

Post by Stephen Horowitz, Professor of Legal English

A thought occurred to me about a week ago: perhaps ChatGPT itself could be used to check whether a text written by a student was in fact written by ChatGPT or another AI. It turns out there is a way to do it, but it's not the way I thought it might work.

I had already tested ChatGPT by giving it the same final exam fact pattern/question I had given to my LLM students. And ChatGPT was able to provide a pretty decent response.

So next, I took that response and fed it back to ChatGPT asking it if ChatGPT had written this text. But ChatGPT responded by explaining that it has no memory, etc. So I figured that was the end of it.

However, I learned today from a commenter on LinkedIn that, as in law school, it's really about asking the right questions. And in this case, the right question is:

“Is this text AI-generated?”

This led to a very definite response that, yes, this text was indeed AI-generated. Problem solved!

Update: I learned after publishing this post that this is not exactly “problem solved.” A colleague at another law school shared that he followed a process similar to mine, but ChatGPT believed that the text was in fact written by a human. So the “tip” is far from foolproof.

But I wasn’t satisfied, so I asked it how it knew. And it very quickly and confidently listed off several factors:

“As an AI language model, I can recognize patterns and characteristics in text that are typical of machine-generated text. In this particular case, the text is characterized by a high level of technical language, a formal tone, and a logical structure that is typical of texts written by AI. In addition, the text does not contain any errors or irregularities that would be expected if it had been written by a human. All of these features suggest that the text was generated by AI.”

Wonderful! But wait a second… Lots of people, including most law students at Georgetown and other law schools, are trained to write with these very same characteristics. So I asked, “Can you be more specific?”

And it essentially listed the same characteristics, but this time in a numbered format which made it appear much more specific and persuasive. Even though it wasn’t.

So then I asked it specifically, “What kinds of differences might distinguish a text written by a highly skilled writer from a text generated by AI?” It listed general qualities that might distinguish a human's writing from AI's, such as style, creativity, context, and human touch. As a representative of the human race, I guess I'll take those as compliments. But it still didn't provide any concrete examples of how it can distinguish between a highly skilled human writer and an AI app like ChatGPT.

In other words, ChatGPT was essentially borrowing from Supreme Court Justice Potter Stewart who famously said in his decision on obscenity, “I know it when I see it.” (Jacobellis v. Ohio, 378 U.S. 184 (1964))

Can ChatGPT help LLMs pass the bar exam?

The good news: Yes, it probably can!

The bad news: But it’s not the LLMs you’re probably thinking of.

I recently noticed in the abstract for the article “GPT Takes the Bar Exam” that the last line reads:

While our ability to interpret these results is limited by nascent scientific understanding of LLMs and the proprietary nature of GPT, we believe that these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future.

At first I did a double take and had to re-read the full abstract to understand how in the heck GPT’s relative success in answering bar exam questions could portend that one lucky future LLM student will pass the multiple choice section of the bar exam.

Then I remembered that LLM is different from LL.M. In the context of artificial intelligence, LLM means “Large Language Model,” the term that encapsulates what ChatGPT is. That is obviously very different from the Master of Laws (Legum Magister), a one-year law school degree often associated with international students in US law schools.

This is clearly a distinction that those of us in the legal English field will have to get used to in order to avoid potential confusion in the future. It also suggests that the periods in “LL.M.” may need to come back in fashion for those out there (like me) who have been trying to get away with leaving them out in the name of efficiency.

Here’s the full abstract, in case of interest:

**********************************

GPT Takes the Bar Exam

13 Pages Posted: 31 Dec 2022

Michael James Bommarito

273 Ventures; Licensio, LLC; Bommarito Consulting, LLC; Michigan State College of Law; Stanford Center for Legal Informatics

Daniel Martin Katz

Illinois Tech – Chicago Kent College of Law; Bucerius Center for Legal Technology & Data Science; Stanford CodeX – The Center for Legal Informatics; 273 Ventures

Date Written: December 29, 2022

Abstract

Nearly all jurisdictions in the United States require a professional license exam, commonly referred to as “the Bar Exam,” as a precondition for law practice. To even sit for the exam, most jurisdictions require that an applicant completes at least seven years of post-secondary education, including three years at an accredited law school. In addition, most test-takers also undergo weeks to months of further, exam-specific preparation. Despite this significant investment of time and capital, approximately one in five test-takers still score under the rate required to pass the exam on their first try. In the face of a complex task that requires such depth of knowledge, what, then, should we expect of the state of the art in “AI?” In this research, we document our experimental evaluation of the performance of OpenAI’s text-davinci-003 model, often-referred to as GPT-3.5, on the multistate multiple choice (MBE) section of the exam. While we find no benefit in fine-tuning over GPT-3.5’s zero-shot performance at the scale of our training data, we do find that hyperparameter optimization and prompt engineering positively impacted GPT-3.5’s zero-shot performance. For best prompt and parameters, GPT-3.5 achieves a headline correct rate of 50.3% on a complete NCBE MBE practice exam, significantly in excess of the 25% baseline guessing rate, and performs at a passing rate for both Evidence and Torts. GPT-3.5’s ranking of responses is also highly correlated with correctness; its top two and top three choices are correct 71% and 88% of the time, respectively, indicating very strong non-entailment performance. While our ability to interpret these results is limited by nascent scientific understanding of LLMs and the proprietary nature of GPT, we believe that these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future.

Keywords: GPT, ChatGPT, Bar Exam, Legal Data, NLP, Legal NLP, Legal Analytics, natural language processing, natural language understanding, evaluation, machine learning, artificial intelligence, artificial intelligence and law

JEL Classification: C45, C55, K49, O33, O30

Suggested Citation:

Bommarito, Michael James and Katz, Daniel Martin, GPT Takes the Bar Exam (December 29, 2022). Available at SSRN: https://ssrn.com/abstract=4314839 or http://dx.doi.org/10.2139/ssrn.4314839

AI/ChatGPT as a tool for Legal English and LLM students


Post by Stephen Horowitz, Professor of Legal English

As we start to shift past the “wow” factor of AI and ChatGPT (see, e.g., this very cool post from the FCPA Blog posing questions to ChatGPT related to the Foreign Corrupt Practices Act, and also the academic article titled “GPT Takes the Bar Exam“), I've seen articles and social media posts and heard comments and commentary focused on the potential plagiaristic dangers of ChatGPT, the artificial intelligence-fueled chatbot that can produce complex, natural-sounding essays in a matter of seconds.

But my initial reaction was less of concern and more along the lines of, “What a great potential legal English tool! How can we use this to help our LLM students learn better?”

And this thinking feels connected to what I've read in articles like “AI and the Future of Undergraduate Writing” by Beth McMurtrie in The Chronicle of Higher Education, which essentially says the horse is out of the barn: the real questions are how we as teachers and educational institutions will adapt our assessment methods, and how we can use this as a teaching tool. (This is really the underlying point of “The End of High School English” as well.)

Some of my own tests of ChatGPT, by the way, have included:

1) Asking it to “write an essay comparing Marie Antoinette and Rachel Carson,” the idea being to see if it could find connections between two seemingly unrelated people. And it did this quite effectively, acknowledging the lack of connection but finding comparison and contrast in that they were women of different social status who had certain accomplishments. About as good as I could expect from any student given a similar question.

Continue reading “AI/ChatGPT as a tool for Legal English and LLM students”