
AI won’t transform the classroom, but it will change how we learn
By Jonathan Barazzutti, October 1 2025—
Artificial intelligence (AI) has become an increasingly dominant technology in the modern world, and especially in academia. But while many have suggested it will completely transform the way the world operates, the reality may be more mundane.
On Nov. 30, 2022, something revolutionary happened: OpenAI’s ChatGPT was first released.
While similar chatbots had existed for years prior, ChatGPT was the first to be widely adopted and put to practical use. Since then, public discourse on AI has exploded, as has the number of AI systems. AI generation has reached heights that would have been unimaginable a decade ago, from holding basic conversations to generating images and even music.
Now, anyone can use AI systems for a wide range of tasks, including writing emails, summarizing lengthy texts and generating ideas. However, this poses a unique problem for educational institutions. Universities were suddenly forced to write policies on the use of AI, as it gave students an easy way to produce papers and similar assignments without having to think for themselves.
Every professor has their own view of AI: some ban it from their classrooms altogether, while others actively embrace it and integrate it into their teaching. Some of my economics professors have even suggested or encouraged using AI to support the writing of papers and related assignments.
On a broader societal level, many see AI as being close to bringing about an utter transformation of society, for better or worse. This view is best encapsulated in the AI 2027 scenario, written by several AI researchers and published a few months ago. The scenario predicts that as AI scientists increasingly use AI to accelerate AI development, we will achieve artificial general intelligence by 2027 and reach an AI singularity, in which systems exponentially improve upon themselves.
The scenario portrays a future where the risk posed by AI systems being misaligned with the interests of humanity is significant.
It presents two endings: if we don’t slow down development, the misalignment eventually leads to humanity’s extinction; if we do slow down and ensure alignment is reached, we could eventually achieve a technological utopia.
The AI 2027 scenario forecasts imminent, rapid and monumental changes to our society, but in reality, the development of AI has not lived up to the hype or doom these researchers portray, not in 2025 and assuredly not in 2027.
Many of the problems that plagued older AI models, such as hallucinations, have not gone away with the development of newer ones. Contrary to the assertions of some, we are not seeing increasing returns to AI investment in terms of the quality of AI output, but in many cases diminishing returns, encapsulated by the overhyped release of OpenAI’s GPT-5, which arrived over two years after the previous model, GPT-4.
Even with the speed at which models like GPT-5 can perform tasks such as finding sources on a particular topic or solving mathematical problems, the persistence of hallucination means that AI can never operate autonomously from a person. Information that AI generates must constantly be verified against an independent source.
This means that AI can’t actually replace people’s jobs: even as it increases the speed at which certain tasks can be performed, its tendency to hallucinate means it often has to be carefully and constantly supervised. Humans learn from their mistakes, but AI routinely cuts corners and ignores instructions, making its mistakes qualitatively different from those of a human performing an identical task.
Despite a significant rise in the number of organizations adopting AI technologies in recent years, a working paper released earlier this year by economists Anders Humlum and Emilie Vestergaard found no significant impact of AI chatbots on productivity, earnings or recorded hours in any occupation. A 2023 report by the Organisation for Economic Co-operation and Development (OECD) similarly found that empirical research shows no significant effect of AI adoption on employment across countries with different levels of AI exposure.
As such, based on recent economic and technological trends, we can infer that AI will similarly not have drastic effects on academia. AI is not going to take away the jobs of professors, and it cannot replace the work of researchers.
But that doesn’t mean AI is useless or has nothing to offer people, including students and professors. Indeed, one of the best ways to quickly grasp a complex concept from class is through a conversation with a large language model such as ChatGPT, which can help clarify what you might be missing about a particular topic.
While you will often have to verify what it’s saying, and stay attentive to the mistakes it can make, this way of learning can be a real asset for a student trying to succeed in their classes.
AI can also be extremely helpful for research. For example, the research and search functions in large language models enable the rapid discovery of information. Because of the hallucination problems I mentioned earlier, this information always has to be verified, but this method of finding sources can be significantly quicker than simply Googling around. ChatGPT can also be useful for locating a piece of information within a wall of text, such as a particular statistic in a report.
Similarly, AI can have positive applications in the classroom. I’ve seen professors in macroeconomics use AI to generate outlines for reports that we would write over the course of the semester. Other professors, such as Dr. Steve DiPaola at Simon Fraser University, are employing AI assistants in their classrooms — in Dr. DiPaola’s case, a 3D AI teaching assistant named Kia, which is assisting his class on the history and ethical challenges of using AI.
None of this is to suggest that AI cannot be abused or doesn’t come with tradeoffs. It is certainly possible to use AI in a way that inhibits learning, such as a student using it to write an essay for school in place of thinking for themselves.
It can even create risks for individuals or groups underrepresented in datasets.
For example, an AI-driven pulse oximeter developed to assess patients’ blood oxygen levels tended to overestimate readings in Black patients, leading to the undertreatment of their hypoxia.
We should always be aware of the risks and limitations of AI. But ultimately, those who can utilize AI while recognizing its limitations will be ahead of those who refuse to use it.
AI is still a new and evolving technology. While I don’t think its implications will be as grand as much of the hype implies, new norms will have to be developed surrounding its use, particularly in academic contexts. But as these norms evolve, we shouldn’t reflexively reject AI when it can be of use, nor should we see it as a transformative boon.
This article is a part of our Opinions section and does not necessarily reflect the views of the Gauntlet editorial board.
