I recently tried a simple yet potentially helpful ChatGPT activity with my LLM students to (a) build individual grammar awareness, (b) build a better understanding of the benefits and limitations of using ChatGPT to fix one’s grammar, and (c) gain a better understanding of what happens grammatically when ChatGPT is asked to fix grammar.
As part of the Legal English II course (which teaches US case reading and analysis via a series of Supreme Court decisions about Miranda rights to students in Georgetown Law’s 2-Year LLM program), my students were required to write an essentially IRAC-style answer in response to a fact pattern under timed conditions.
Afterwards, as an assignment, I asked my students to input their essay into ChatGPT with the instruction: “Please fix any language issues in this essay:”
Students then had to compare the two versions of their essay and write a short analysis or commentary on what they noticed, what ChatGPT did or didn’t do well, how they felt about it, etc. I told students they could either put the two versions in a table so they could compare the language side by side, or use the redline/track-changes function to show the differences.
I next reviewed the students’ submissions myself. I then invited two Georgetown Legal English colleagues with PhDs in applied linguistics, Prof. Julie Lake and Prof. Heather Weger, to review the student submissions, after which the three of us had a group discussion about what we noticed.
Upon additional consideration (and inspired by a suggestion from Jack Kenigsberg, a former Hunter MA TESOL classmate), I took one paragraph from one student’s essay and fed it into ChatGPT with the instruction: “Fix any grammar errors in the quoted text. For each change you make, explain why you made the change.” After it provided its answer, I clicked “Regenerate response” to create a second response and see what (if anything) came out differently the second time.
The main takeaways for my students, my colleagues, and me were: