[473 Cover] University Students in the AI Era:
Balancing Reliance and Responsibility for a Sustainable Education
From writing assignments and preparing presentations to studying for exams, it is now difficult to find students who complete their work without AI tools. The convenience of AI, which can produce a draft within minutes and organize complex concepts, has become indispensable for time-pressed students. However, behind this convenience lies a troubling reality. Growing dependence on AI has left many students struggling to write even a single line without it. At the same time, the phenomenon of “hallucination,” in which AI generates plausible but false information, is raising serious concerns about the declining quality of learning. The global ChatGPT outage on June 10th served as a stark reminder. For students who had relied heavily on AI to prepare for exams, it was akin to losing all their tools on the very day of final examinations. The incident clearly revealed the dangers of overreliance. In this article, The Dongguk Post seeks to assess how AI is transforming the culture of university learning and to explore the opportunities and risks that coexist within it.
AI usage evident across all stages of learning
To what extent are university students actually using AI in their studies? The answer seems to suggest that AI is no longer a peripheral tool but rather a central part of everyday academic practice. From the earliest stages of drafting assignments and brainstorming ideas to more advanced tasks such as translation, text refinement, image and video generation, and the design of professional-looking presentations, AI has come to actively permeate nearly every aspect of learning. Many students even go so far as to use AI in place of professors, whose availability is often restricted to limited office hours. They point out that, unlike human instructors, AI can respond instantly, at any time of the day or night, and in an informal tone that feels approachable, making it an appealing and convenient substitute. For international students in particular, AI has become nothing short of essential. It is the primary tool that enables them to overcome daunting language barriers and continue participating fully in their courses. Exchange students who are required to submit assignments in Korean frequently admit that “without AI’s translation and text editing functions, it would be nearly impossible to keep up with classes,” thereby revealing just how much they rely on the technology.
Overall, students tend to evaluate the use of AI in overwhelmingly positive terms. According to a 2024 international survey conducted by the Digital Education Council, a striking 86% of students worldwide reported that they use AI regularly for their studies, and more than half stated that it has significantly improved their learning efficiency. Many explained in detail that AI not only enhances academic productivity but also creates opportunities for entirely new modes of learning that were previously unimaginable. Reflecting these widespread patterns of use, education-oriented AI services have expanded rapidly in response to student demand. Some platforms now offer free premium upgrades exclusively for students, while others aggressively promote discount programs as a way of capturing and retaining the student market. Observers agree that this trend shows no signs of slowing and is expected to accelerate in the coming years. Beyond efficiency gains, proponents also highlight that students are cultivating new abilities, such as the capacity to quickly analyze and process vast amounts of data, to restructure information for specific purposes, and to adapt flexibly to the collaborative demands of online learning environments.
Concerns grow over declining critical thinking
However, educators argue that these changes cannot be viewed only in a positive light, as the implications for students’ intellectual growth are complex and far-reaching. One major concern repeatedly raised is that the very concept of “learning” is undergoing a profound transformation, not merely a surface-level adjustment: learning itself, once regarded as a slow and deliberate process, is being redefined in ways that many find unsettling. Scholars continue to differ in their assessments of how this shift will ultimately affect students’ ability to think critically and act creatively. Some point out with alarm that “students who have grown accustomed to simply accepting AI-generated answers are showing a marked decline in critical thinking skills, often failing to question the accuracy or validity of the information provided.”
In the past, learning was commonly understood as a process of internalizing knowledge through repeated trial and error, reflection, and gradual improvement. Now, by contrast, it has shifted to the seemingly effortless act of asking AI a question and receiving an immediate, neatly packaged answer. Critics warn that bypassing the necessary verification and reasoning process not only risks weakening creativity but may also erode essential skills such as independent judgment and long-term problem-solving.
AI hallucinations still challenge academic integrity
A particularly critical obstacle is the chronic problem of “hallucination.” This refers to instances where AI generates information that looks credible but is false, a flaw rooted in its training process. Generative AI is trained on massive datasets drawn from the internet, books, academic papers, and news articles, learning statistical patterns of words and sentences. However, because it does not distinguish between accurate and inaccurate information, it tends to reproduce widely repeated errors as if they were reliable answers. Instead of “judging” factual accuracy, AI is designed to produce probabilistically plausible responses, which often results in fabricated citations or inaccurate sources.
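The statistical principle described above can be illustrated with a deliberately simplified sketch. This toy “next-word predictor” is not how real large language models are built; it only demonstrates the point that a system choosing the most frequent continuation in its training text will confidently repeat a common falsehood over a rarer truth. The corpus, function name, and example sentences here are invented for illustration.

```python
from collections import Counter

# Toy training corpus in which a misconception appears more often
# than the correct fact (a common situation in web-scraped text).
corpus = [
    "the capital of australia is sydney",    # widespread misconception
    "the capital of australia is sydney",
    "the capital of australia is canberra",  # correct, but rarer here
]

def next_word(prompt: str) -> str:
    """Return the word that most frequently follows `prompt` in the corpus."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - 1):
            # Count what follows every prefix that ends with the prompt.
            if " ".join(words[:i + 1]).endswith(prompt):
                counts[words[i + 1]] += 1
    # The "most plausible" continuation is simply the most frequent one;
    # the model has no notion of factual accuracy.
    return counts.most_common(1)[0][0]

print(next_word("the capital of australia is"))  # prints "sydney"
```

Because “sydney” follows the prompt twice and “canberra” only once, the most probable answer is the wrong one, which is exactly the failure mode that fabricated citations and confident errors stem from.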
In the United States, one lawyer faced controversy after submitting a legal brief that cited non-existent cases generated by ChatGPT. In another instance, an expert report prepared for a court relied on inaccurate references produced by Anthropic’s Claude, prompting a judge to describe it as a “very serious problem.” On university campuses, similar cases have surfaced where students unintentionally incorporated fabricated citations into their assignments and were penalized.
The concern is that students’ increasing reliance on such false or misleading information is not merely a minor inconvenience but is actively distorting the very framework of knowledge upon which education is built. Studies provide sobering evidence: the hallucination rate of ChatGPT-4o has been measured at around 20%, already a troubling figure, while Google’s Gemini Advanced has recorded an alarmingly high 76.7%, raising even greater concerns about accuracy and trustworthiness.
These numbers vividly highlight the dangers of accepting AI-generated information uncritically, without skepticism or verification. Left unchecked, such practices may erode students’ ability to distinguish fact from fabrication and slowly undermine the integrity of academic learning as reliance on AI deepens. At the same time, research offers cautious optimism, suggesting that newer models, when paired with carefully constructed prompts and critical human oversight, can significantly reduce error rates. This points to a crucial conclusion: the urgent need to equip students with robust AI literacy education. Without explicit training in how to evaluate, question, and cross-check AI outputs, students risk adopting flawed information as truth, thereby weakening the intellectual foundations of their learning.
From bans to incorporation: Rethinking AI in universities
Despite such concerns, AI continues to be hailed as a symbol of innovation and is rapidly spreading across education and society. Universities, in particular, are now shifting their policies from outright bans on AI toward guiding students in its proper and responsible use. This represents a significant change in attitude: where AI tools were once viewed primarily as threats to academic honesty, they are now being reimagined as instruments that can be integrated into the learning process under carefully designed rules.
Several Korean universities have already joined this trend, signaling that the shift is not a distant possibility but a reality already underway. For instance, Jeonbuk National University has established guidelines to encourage the responsible and constructive use of generative AI such as ChatGPT in both teaching and learning activities. Chung-Ang University has also introduced its own comprehensive set of guidelines, recommending that both academic staff and students explicitly state in course syllabi whether AI will be used, in what specific ways, and under what standards of citation. By embedding these details into syllabi, the university aims to promote transparency while ensuring that all participants in the classroom share a clear understanding of expectations.
At Dongguk University, similar efforts are underway with a strong emphasis on balance. In August 2023, the Office of Academic Affairs released official guidelines on ChatGPT use, positioning it not as a replacement for traditional education but as a tool to supplement classroom teaching and optimize student learning while carefully addressing potential risks. Professors are instructed to clarify their policies in course syllabi, inform students of acceptable uses, and design diverse evaluation methods to minimize opportunities for misconduct. Students, in turn, are required to comply with class-specific policies, verify the accuracy of AI-generated answers, cite sources properly, and remain mindful of the dangers of overreliance. The guidelines stress that while ChatGPT can serve as a valuable tutor and resource, unchecked use may compromise both academic integrity and the cultivation of critical thinking skills.
Similar measures are being adopted abroad, underscoring that the issue is global rather than confined to Korea. At Columbia University, for example, students are not only required to verify the accuracy of AI-generated results but also to clearly disclose their use in any submitted work. The university enforces strict rules that prohibit the use of AI in assignments or examinations without explicit permission from instructors, and additional concerns such as intellectual property rights and algorithmic bias are explicitly considered. Likewise, the University of Memphis Fogelman College of Business & Economics mandates that students must indicate whenever generative AI is employed in assignments or projects, with failure to do so treated as a form of academic misconduct.
Taken together, such examples show that both domestic and international universities now regard information verification training and the strengthening of academic ethics education as essential and urgent tasks. Beyond simple debates over prohibition or allowance, institutions are increasingly moving toward incorporation, embedding digital literacy education into both regular curricula and extracurricular activities. The goal is to ensure that students do not merely learn how to use AI but also develop the lifelong habit of selecting, questioning, and verifying information in real-world contexts where accuracy and judgment matter most.
Generative AI has already become a central pillar of the university learning environment. While it shortens the time for assignments and exam preparation and boosts productivity, excessive dependence risks weakening critical thinking and creativity, and issues like hallucination threaten reliability. Universities and students must therefore regard AI not as an infallible source of answers but as a tool for reference and collaboration, cultivating habits of verification and ethical use. In doing so, universities can foster a “balanced learning culture,” while students can strengthen their capacity for independent learning. As the first generation to coexist with AI, students are tasked with leveraging its benefits while addressing its limitations to build a truly balanced culture of learning.