ChatGPT isn’t going to make us slaves; uninspired education might

Feb. 4: Robert Hunt, director of Global Theological Education at the Perkins School of Theology at SMU Dallas, and coauthor Drew Dickens, a graduate student, wrote a commentary about how the new ChatGPT online technology could impact higher education. Published in the Dallas Morning News under the heading "ChatGPT isn't going to make us slaves; uninspired education might": https://tinyurl.com/3yr753fj

AI can’t replicate the human part of learning

By Robert Hunt and Drew Dickens

The new open artificial intelligence chatbot called ChatGPT is either the most helpful tool to come to academia in decades or a threat to human learning and higher education. Many schools aren’t waiting for this to play out and are outright banning it from classrooms while hurriedly retooling assignments and exams to prevent its use.

ChatGPT is a new online technology that could revolutionize communication with machines. It offers the potential for faster and more accurate conversations, with the ability to handle more complex tasks.

But users could co-opt it to ghost-write school assignments and thereby deceive unsuspecting professors. And though ChatGPT is a master of established, objective facts, it won’t ever be visionary. It’s clunky when it comes to subjective observations and nuances, and it will never engage in crucial classroom debates. The great risk is that students too reliant on AI tools may themselves become robotic, less the spontaneous, spirited humans we wish them to be.

First, it’s important to establish the four foundational elements of traditional education:

  1. Mastery of a body of knowledge
  2. Understanding and using the accepted ways of analyzing this data
  3. Finding new knowledge
  4. Forming humane participants in a global society, an often overlooked element.

The learning algorithms behind ChatGPT and similar tools have already shown mastery of a body of knowledge and understanding of how to analyze data with enormous breadth and efficiency. ChatGPT can read everything from Shakespeare’s The Tempest to all we know about quantum field theory more quickly than a student can type out the search on Google. It exceeds any human in answering questions about what is already known.

Discounting events and information generated after 2021, or never digitized, it already knows more than the most knowledgeable professor on campus or all of them put together.

Likewise, ChatGPT can use all known approaches to analyzing a body of knowledge. If those approaches are reducible to some mathematical equation, then the AI can analyze and conclude. If the analytical procedures are less formal (critical race theory, feminist theory, post-colonial theory, etc.), then the AI can find human works using these forms of analysis and rephrase the results.

Its capacity for quick, coherent but unoriginal papers is impressive. The problem is that, too often, this is all we’ve asked for in higher education: master what we already know and come to conclusions someone else has already reached. Who expects originality from a sophomore?

The third and fourth aspects of education are missing from ChatGPT and too many university courses: generating new knowledge and forming humane participants in a global society. These require the creativity and imagination to ask questions that haven’t been asked before, to imagine approaches to analysis that haven’t been used before, and to apply these in the social setting of a classroom to see how other humans respond. The failure to explore knowledge in a social environment is the greatest potential failure of higher education.

Regarding ChatGPT’s ability to generate new knowledge, it can create a list of essay topics or questions for an exam. And it can write those essays and answer those questions. It can even place these topics and questions in the larger body of such issues and queries and determine historically which were more or less important to people in the past. If you ask, it can generate random new topics and questions and, based on existing data, predict which will be most interesting to a particular demographic.

Yet this will all be impersonal, derived from mathematical models of what is statistically significant to larger or smaller groups of people as represented in its database. It won’t be new.

Regarding ChatGPT’s ability to form humane participants in a global society, no generative AI-assisted writing tool can be fully present in the individual experience of immediate reality and the connections formed within the human brain instantaneously in response to both its inner states and that external reality. Nor does it have any means to generate the insights these connections may bring.

ChatGPT can write a paper applying CRT to a well-described historical incident. It cannot write an essay applying CRT to an individual’s unique experience of racist behavior or of growing up amid structural racism.

The challenge here for academics is that the kind of objective knowledge and analysis that has been the Enlightenment/Modern ideal of learning is quickly produced by ChatGPT and AI more generally. As long as essays and tests are objective, ChatGPT will master them. But if they require that the student bring their subjective experience into play, then ChatGPT will fail. It has no personal experience, only reports of the emotional experiences of others.

Introducing subjectivity into the testing and grading process will take much work. It requires overthrowing the idea that all helpful knowledge is objective and that humans are mere computers who process data to reach conclusions. But subjectivity in the process of evaluation is hardly impossible.

Indeed, it is already the hallmark of contemporary pedagogy. The “sage on the stage” leading students through the well-trodden paths of an academic discipline is already passé. It is easily replaceable with recorded lectures and even AI-generated lectures.

The “guide on the side” model lets students explore for themselves, following their personal needs for knowledge, and nudges them only when they pass up something important. That takes a human connection an AI can only partially replace. Even less can ChatGPT express the intangible quality of passion for knowledge. Yet it is this that students, above all else, value in their teachers.

AI-assisted language models challenge schools and universities to refocus on their students as humans in the sense of being conscious and continuously engaged with their physical and social surroundings. Humans gain discernment regarding particular fields of study, but more importantly, about their fellow humans. Humans bring the value of their subjectivity to developing new knowledge and solving new problems.

If a university becomes a vocational school, where biological machines are given the knowledge and analytical tools to solve problems and increase productivity, then ChatGPT will not only aid students; it might as well replace them. If we treat students like robots, robots will quickly take their place in the classroom and exceed their capacity to perform the tasks we give them.

If we engage students’ humanity at its most fundamental levels, then ChatGPT becomes another tool for doing research.

Teachers who test only for knowledge have faced the problems of cheating and plagiarism since the time of Confucius and Aristotle. ChatGPT offers a more sophisticated weapon in an ongoing battle. And ways to thwart it are already being deployed. Indeed, we can use AI to recognize the products of AI.

Yet we don’t need to go so far as lock-down browsers or retro blue-book exams in proctored halls to determine whether students are learning anything. We must observe them interacting with their fellow students and with us, seeking signs of imagination, creativity and insight. It is a more high-touch endeavor than the practice of many academics: lecturing to hundreds and passing essays off to an army of teaching assistants to grade.

Virtually every dystopian vision of an AI-controlled future finds humans relegated to being the servants of machines. ChatGPT isn’t going to take us there, but it does provide another shot across the bow. We need to return the university to its original purpose: the formation of humans as humans, and thus the ongoing discovery of what it means to be human.

If we take any other focus, ChatGPT and its successors will gradually move from being our servants to being our masters, feeding on whatever complex neural connections computers do not yet have until even those become irrelevant.

Robert Hunt is director of Global Theological Education at the Perkins School of Theology and an affiliate professor of graduate liberal studies in the Simmons School of Education at SMU Dallas. Drew Dickens is a doctoral candidate at SMU focusing on the “Effects of Sacred Texts on Generative AI Language Models.” They wrote this for The Dallas Morning News.