AI teaching assistants are no match for human beings
Morehouse College recently announced plans to use AI teaching assistants (TAs) in some of its classes this fall. The new AI TAs are represented by 3D spatial avatars and rely on technology from OpenAI — the company that makes ChatGPT — for conversational interactions.
The AI TAs are supposed to help “crack the code of education,” but as any educator can tell you, the myriad functions and responsibilities of teaching can’t be “cracked” by a single code. That’s why teaching is so difficult. No matter how long an educator has been on the job, every week, if not every day, something unexpected and challenging happens that requires us to draw from our experience as teachers and humans. Approaching teaching as a code to “crack” perpetuates a problem with education: the minimizing and devaluing of both educators and students.
Using AI in the classroom isn’t new, but these TA avatars move a step beyond how colleges and universities have typically used this technology. Morehouse’s AI TAs aren’t your average chatbots — one executive involved in their development called them “a professor’s digital mini-me.” Students can access these TA avatars in the virtual classroom space via a web browser, and the avatars can answer questions aloud, in the student’s native tongue, 24/7. Those capabilities offer some clear benefits.
However, a virtual TA can’t relate to students, especially struggling ones. It can’t make class a dynamic experience or facilitate the kind of bonding that makes classroom discussions work. Morehouse professor Muhsinah Morris, who helped roll out Morehouse’s AI TA program, says that the virtual TAs will make “education a more loving, enjoyable experience.” Such a claim is difficult to believe, especially after everything COVID taught us about the limitations of remote learning and the social and academic value of in-person education.
The National Education Association, which recently released a policy statement about the use of AI in the classroom, is skeptical, too. It calls for educators to guide practices about AI use in teaching, and it insists that “students and educators remain at the center of education” and that the use of AI “should not impair or displace” the connection between students and teachers.
The idea of outsourcing the TA role to AI is problematic not just for students, but for TAs, too. TAs work to financially support their own graduate school educations while getting experience and training as educators. They’re usually employed to run smaller labs or discussion sections for very large lecture classes, which are often prerequisites for more advanced study. TAs often serve as the first point of contact for class-related (and sometimes unrelated) questions and challenges. They often interact with and get to know students better than the actual professor does, which is arguably the most important aspect of their role.
It’s important to note that Morehouse doesn’t currently have TAs, so these TA avatars won’t replace human workers there. However, every major research institution, along with most larger schools, uses TAs for bigger lectures and required courses. Other schools will watch the rollout of Morehouse’s AI TAs with great interest, especially those that are dealing with or have recently dealt with TA strikes, including Boston University (where I teach).
Professor Morris predicts that every professor will have an AI TA in a matter of years, but I hope that’s not the case — I certainly won’t. And I can’t help but wonder what comes next: Will we eventually replace all professors with AI?
Don’t get me wrong: Technology has its place. Remote and online classes should exist for people who can’t access or afford in-person instruction. And to accommodate different types of learners, education should come in lots of different shapes and sizes. But before students enroll or pay tuition, they should know whether they’ll have human or AI instructors and whether their courses will feature real-time interactions.
Morehouse’s tuition for 2024-2025 is just under $29,000. With housing, books, and other fees, the projected cost for one year of school is over $52,000. That’s a lot of money to spend on instruction supported by non-humans. Increasingly, many schools cost considerably more than Morehouse: Harvard, Boston University, Brown, Dartmouth, USC and others charge more than $90,000 per year. At that price, a top-notch, real-time education from human instructors, not avatars that parrot ChatGPT, should be a given.
In its announcement, Morehouse cites the teaching shortage in the U.S., which continues to worsen. This crisis can’t be ignored. And neither can the ballooning cost of a college education. But outsourcing teaching to AI isn’t the answer to either problem.
Reporters at the Chronicle of Higher Education test-drove Morehouse’s TA avatars. One tester commented on how slowly the TAs loaded, which could drive students looking for a quick answer to Google or another search engine. Another noticed that the AI often forgot what it had previously said, requiring the user to summarize the earlier conversation before asking additional questions. And if a student asks an AI TA about something that isn’t in the materials the professor has uploaded, the TA will turn to a large language model from OpenAI to answer the question.
Like many professors, I don’t allow students to use OpenAI’s ChatGPT in my writing and research classes, for many reasons: the information it provides is often wrong, doesn’t cite references and is sometimes a fabrication, or “hallucination.” Replacing human TAs with an AI that uses the same technology many schools and teachers prohibit seems hypocritical at best and irresponsible at worst.
Depending on the subject, it may make sense for busy educators to use AI to perform tasks such as scheduling, polling and multiple-choice grading so they can devote more time to planning, student interactions and teaching itself. And it absolutely makes sense to teach students AI literacy and how to use specific AI programs as research or coding tools. But any use of AI by either teachers or students should be in the service of teaching critical thinking, which is the primary purpose of college.
OpenAI trains its language models on internet content, including Reddit and its labyrinth of subreddits, many of which contain more (often bot-generated) fiction and opinion than fact. If algorithms such as the ones OpenAI uses are, as mathematician and statistician Cathy O’Neil puts it, “opinions embedded in code,” then using AI TAs perpetuates the same information, misinformation and biases that already dominate online content and discourse. That’s the antithesis of critical thinking.
Outsourcing to AI the teaching of skills it both lacks and undermines doesn’t benefit students or teachers. If we want students to think critically about how, when, where and why to use AI, among other things, then humans need to do the teaching. As neat as it is, a virtual “mini-me” ultimately devalues educators and students by suggesting that animated characters relying on flawed databases could or should be a substitute for people who have experience and expertise as teachers and humans.