This past spring, one of my professors at the University of Washington used an AI tool to grade the papers her class wrote. I find this unacceptable, and I feel genuine relief that the bulk of my college experience came before the widespread acceptance and overuse of artificial intelligence. The experience was so alienating and upsetting that I feel a serious need to discuss why academia should not welcome AI with open arms, and how it felt to write papers I knew no human being would ever read. Ironically, partly in response to the pervasive and growing presence of AI in so many career fields, I chose to take a nutrition/cooking class, NUTR 241, with Dr. Gloster, who is part of UW’s initiative to test AI tools in the classroom. While reading the course syllabus I found a single line on the fifth page mentioning the class’s use of an AI grading platform, but I imagined it would be akin to turnitin.com or a similar app meant to detect AI usage and plagiarism. Much to my chagrin, I was wildly incorrect. It is remarkable that the University of Washington is willing to perform what amounts to an experiment or trial on a group of students with barely a hint of lip service to informed consent; I would assert that the class would have struggled to fill seats if the professor had done her due diligence and included a prominent disclaimer about her intended use of AI, and thus it was tucked deep in the syllabus. It is also genuinely possible that the professor simply generated her syllabus with AI and never considered whether this essential information was easily available to students.
AI tools are hardly the only major threat facing education and academia today – the withdrawal of funding and the deep-rooted student disengagement that has exploded since the COVID-19 pandemic are grievous issues that must be faced head-on. Education has become increasingly expensive, prompting prospective students to make detached, pragmatic choices about their course of study and approach to academics; many just want their piece of paper and the professional legitimacy it confers without the hassle of actually learning anything or exploring their interests. This breakdown of education, and the intrusion of what is effectively job training into liberal arts and hard science spaces, has made a particular sort of student, whose myopic educational aims are at once understandable and deeply tragic, into the typical American college kid. To borrow a framework with very different origins, I will argue that what first emerges as the tragedy of student disengagement recurs as farce when professors – and academic institutions – join them in disengaging. If AI deserves appreciation for anything, it is as a spotlight on the points of dysfunction and disengagement within the academic system, one that invites educators and students to think creatively and build meaningful, human, connective experiences that allow for learning rather than its externalized simulation.
It is worth describing the structure of the AI-evaluated assignment and the course’s various attempts at AI integration. The course was built around cooking six dishes in categories matching each week’s lecture, writing a 1,000-1,500 word essay about the food science involved in the recipe, and creating a slideshow of images and short responses documenting the student preparing the food. I will concede that it would be very difficult to evaluate 150 essays each week, even with Dr. Gloster’s two reader-graders, whose hours are limited by the nationwide austerity measures being taken as funding is wrenched away from schools by the current administration. However, this class created a problem in order to test a solution on a group of barely-informed students, and that is inexcusable. There was no serious need for the essay format, especially if the instructional team had absolutely zero interest in actually engaging with the papers. AI, from my perspective, is a fantastic tool for tasks that were genuinely (or nearly) impossible before: studying medical imaging to find and diagnose diseases far earlier than the human eye can, processing hundred-thousand-page datasets to find trends, and so on. Yet the tool seems to intervene most frequently in processes we are absolutely capable of ourselves – emotional and intellectual tasks in particular. Indeed, I may even be willing to admit that a set of essays that size is not meaningfully gradable by a person (or even three of them) over the course of a week, but there are so many other options for assignments and evaluation that do not involve the total degradation of students’ time. I must also ask: why are people so gleeful about using tools that clip away their humanity and spell doom for their career prospects?
Speaking of the instructional team’s lack of interest in engagement, it was evident that the professor wanted to limit communication with students as much as possible. She set up an AI chatbot that we were meant to ask questions of instead of asking her or her reader-graders (I keep using this phrase because they are specifically not TAs; their job is slightly different). I chose not to sign up for it, and she later took it down entirely over accessibility concerns, as it required an external Google account. Additionally, she refused to negotiate any grade change of fewer than 9 points, an absolutely galling stance given her deployment of unreliable, ambiguous generative technology to grade papers. Finally, of course, the use of an AI grading platform and the complete absence of comments from Dr. Gloster represent the most significant deprivation of professor-student communication, as grading comments are the central intermediary in large intro classes. This willingness to separate completely from the student and eschew massive portions of a professor’s instructional responsibilities reflects something I honestly empathize with: despair at the relative disinterest of students. I read one comment on Dr. Gloster’s RateMyProfessor page that demoralized me almost as completely as her choice to use AI: “I think people were put off by AI grading, which really wasn’t a big deal as she said you’d get a harsher grade if she graded it.” (https://www.ratemyprofessors.com/professor/2151824) Two questions: did this student take what was effectively a veiled threat as a point in favor of her use of AI? And is the only thing we are here for the number we receive at the end of the course? Furthermore, the (often sparse) comments I receive from professors are truly important to me and prompt growth because I respect their expertise. I found zero value in the three-page document returned to me only a few hours after I submitted my essay. I should not have to compromise in this way when I paid $43,000+ annually for this experience; I would rather have one sentence from a real human being and expert than a dozen pages of fluff from an AI.
A central reason for this position is that AI is unreliable and not representative of a human’s experience of reality, or even of the sum of its constituent inputs and training data – it generates natural-sounding text with no material basis in reality, and is thus an awful, awful tool for evaluation. Additionally, applying AI standards to what should be college-level academic writing will likely serve only to degrade students’ skill development. To quote the Teaching@UW site: “Because the responses are so quick and varied, it is easy to mistake generative AI output for actual human thought. But generative AI’s outputs are not thoughtful and only appear to answer questions. As AI scholar Kate Crawford notes, ‘AI is neither artificial nor intelligent’ – AI systems don’t actually know anything. Generative AI doesn’t think. Instead it produces outputs that conform to the patterns evident in the datasets that trained them.” (https://teaching.washington.edu/course-design/ai/) On what planet should we allow something that “(doesn’t) actually know anything,” is “not thoughtful,” and “only appear(s) to answer questions” to grade and present feedback to students as if it were a legitimate source of information? I find this wildly denigrating, if not disqualifying.
I sympathize with Dr. Gloster’s frustration with her students and the nearly impossible challenge of engagement in a large intro-level course, but she did herself very few favors with some of her choices as the quarter progressed. About halfway through, the AI grading platform she was using was hacked, which represents a serious breach of FERPA. Indeed, FERPA compliance was the one area in which she initially felt a need to defend her use of the platform, asserting during the first week of class that the application was compliant. After the hack, Dr. Gloster and her two reader-graders were left with a stack of ungraded student papers. Their choice to simply hand out As – not to evaluate student work or even deign to look at the papers their students spent at least a modicum of time producing – perfectly reflects the ethos of AI grading and completes the circle of disengagement. Dr. Gloster chose simply not to do the work, turning instead to a Canvas quiz to evaluate our understanding of food science concepts. This choice on its own isn’t necessarily a bad one, but it leads me to another significant drawback of any use of AI in a curriculum: all other information is thrown into question. I have no doubt that Dr. Gloster is an expert food scientist and culinary/nutritional educator, based on her ability to answer my questions and her long track record, but once she proved willing to automate such an essential portion of her job, many of the materials she presented as her own began to seem suspect. The syllabus, as I have pointed out, may have failed to mention AI grading until the fifth page not because of any specific deception on Dr. Gloster’s part, but because she didn’t write it herself. Indeed, it is likely that she also used AI to create the quizzes and the short answer question bank, given the circular, ultra-repetitive nature of the questions.
The gleeful substitution of AI for human creative and emotional interaction, all for the sake of a slightly smoother professorial experience, flies in the face of the purported goals of academia and reflects the horrible legacy of (post)modern technological development that we as students and members of society will have to contend with far longer than those making choices out of selfish expediency and a clear lack of foresight. Writing those papers KNOWING nobody would read my words was a troubling, isolating, overwhelming feeling, and having to go into class some days knowing that Dr. Gloster would go on a strange, increasingly defensive diatribe about the earth-shattering impact of AI left me feeling hopeless. I write this not to wound Dr. Gloster, but to try to make her understand that her disengagement – long in coming though it may have been – represents something at once tragic and comical, something dangerous for generations of future students she will never have to engage with; her choice to participate in UW projects that directly undermine the process of academia has an impact reaching far beyond her myopic late-career desire to work less.
One of Dr. Gloster’s choices that I found almost as distasteful as her desire to automate a crucial portion of her job was her attempt to sweep it under the rug. She sent out a message stating her intention to hold class and professor evaluations in person, specifically using paper and pencil. Her plan was to exclude the students who were no longer actively attending class from the evaluation process, a plan she later defended in a wildly unprofessional way. I would like her to understand that in an intro-level class, the truly valuable feedback comes from the students who are being left behind, especially when they made up as much as two-thirds of the class. She wanted feedback only from those who managed to hold on, students either interested enough in cooking or unbothered enough by her denigrating, invalidating use of AI to sit through her often somewhat hostile lectures, going so far as to say that the other comments she would receive would be of no value because they would be “hate”; this is not the behavior of a dignified professor.
I will concede that my interactions with the UW nutrition/food sciences staff (including the program lead and one of the counselors) were largely positive, but they also reflect the fundamental limitations of the academy; the various restrictions placed on the Nutrition group, which exists as a program rather than a department, mean they have only partial control over Dr. Gloster’s activities, though they appear to be taking accountability. I am slightly disturbed by the program head’s desire to keep trying to jam the square peg of AI into the round hole of academia even after this total debacle, but it is very difficult for anyone at UW to stop something that is a university initiative codified in its rules as proper; I simply hope they reconsider. I aim to be involved in as many serious conversations at the university about AI as possible, to advocate against its usage and presence.
To close, please think carefully about the larger ramifications of AI, not just its immediate boons to your ease and convenience. There is immense value in the small human social interactions that constitute the sinew of our civilization, and we won’t know what we’ve lost until it’s all been simulated for the sake of efficiency. Institutions like UW have the rare, albeit challenging, opportunity to create and abide by rulesets predicated on ethics, and I believe a more rigorous ethical review would make it evident that deploying AI in most academic spaces is not only destructive to the ethos of the university but also representative of a resignation to corporate interests. AI as it is currently deployed is a spotlight on the areas of the college experience that need to be improved and reformed. An excellent example is the rubric, something students have come to expect but that professors seem uninterested in producing themselves (I have spoken with at least three who use AI to create this type of document). This doesn’t mean we should cheapen the experience with AI materials; it means we should think critically about how to return better feedback to students. Finally, the relationship between student and professor disengagement can never be allowed to become a race to the bottom. Student disengagement is a tragedy prompted by structural factors, and it needs to be seriously addressed, not met by professors loosening standards and ushering everyone through. That renders the ostensibly dignified process of a college education a collective pantomime; we’re all just going through the motions. Professors need to remember that they are public figures with serious normalizing power: the ability to set a precedent for both their students and their institutions. When universities allow, if not directly commission, disengagement to join that precedent, the future only becomes bleaker.