Authors: Leilani Chester, Jenna Kimball
From crafting the perfect email to ordering a cheeseburger, artificial intelligence (AI) has become part of daily life. Its arrival in higher education has sparked polarizing opinions on college campuses worldwide, raising questions about whether it is reshaping learning or destroying critical thought. AI is here, whether students and faculty like it or not.
The term “artificial intelligence” has shifted meaning over time. Originally it was coined in the 1950s to describe machines that could mimic human reasoning. Today, it has become an umbrella term for systems that perform tasks requiring human-like processing, with the most recent innovation being generative AI.
Generative AI tools, like the popular ChatGPT, can produce text, images or code on demand. These systems can dazzle with fluency but also hallucinate, the industry’s term for confidently generating false information.
At the University of Louisiana at Lafayette, AI use is left to each professor's discretion. While some strictly ban it, others are grappling with how to respond.
Dr. Rick Swanson, a political science professor, said, “Last spring semester, I was completely against the use of AI at all, and I had a complete prohibition in my syllabus.”
“But over the summer, I realized if we don’t give our students the tools to succeed in their careers and teach them these skills, we’re doing them a disservice. They need to become as fluent in AI as possible to be competitive on the market,” Swanson added.
Success in the job market is a prominent reason students pursue higher education. According to the American Council on Education's 2023 Freshman Survey, 80% of students said a better job was a very important reason for attending college, 74% said training for a career and 72% said higher earning potential.
Unfortunately for recent graduates, the current job market is tough. According to the Bureau of Labor Statistics, college graduates ages 23 to 27 had an unemployment rate of 4.59% in 2025, compared to 3.25% in 2019. Swanson, who advises and teaches pre-law students, said ignoring AI would leave graduating students limited in their job prospects.
“Nobody’s going to hire someone who says, ‘I only use typewriters and landline phones.’ If our graduates aren’t fluent in AI, they won’t get hired, and they won’t get promoted.” Swanson compared it to the way law firms already use AI to draft contracts and analyze cases.
Clinical psychologist and psychology professor Dr. Angela Coreil takes a similar view of AI's inevitability. She said, “AI is a technology that our students are going to be asked to use when they get out in the world and their jobs.”
“So it’s our job to also train them to use it properly before they’re out in the world with it, to give them some skills. I try to help them use it in ways that are going to help them learn more deeply and become more critical thinkers, rather than using it to do their work for them.”
Some students see the benefit in AI, but still have caveats about the morality of its application. Chalsia Johnson, a junior majoring in criminal justice, said, “I will say, if you’re doing it for the right purpose to like, help you get a head start, or to help you with outline or help you correct your grammar, I would say, yeah. But if you’re using it to, like, literally, just get ahead and just do your work for you, I would say no to it.”
Megan Broussard, a senior majoring in psychology, said, “I think that it can be taken out of hand and people can use it wrongly, but I also do think that there are positive perks like studying or just trying to get more clarification of a concept.”
Coreil incorporates AI into mandatory assignments, requiring students to challenge its mistakes and identify possible hallucinations. “I use that in some assignments in my lower level classes for them to ‘beat the bot’ and try to catch where it’s wrong, try to correct it,” she said.
Hallucinations are a major concern in real-world applications. Swanson related the issue to law practice. He said, “Submitting some false documents to the court with a false legal claim or false legal citation is a huge issue that could lead to suspension.”
“If you’re just accepting the answers all the time, that’s going to lower your use of critical thinking, but if you realize you can challenge AI and sometimes be more accurate, you can use it in a way that improves your thinking.”
Concerns about misuse are widespread, driven by a gap between how often students use AI and how little training they receive.
A 2025 Savanta survey of 1,041 undergraduates found that 98% had used AI tools, most often to generate or edit text. The survey also found that 67% saw AI as a vital tool in modern society, but only 36% said their university had trained them to use it.
Faculty experiences are just as troubling. The Digital Education Council’s 2025 Global Faculty AI Survey found that 86% of instructors expect to use AI in their teaching, though only 6% feel sufficiently trained.
Is AI use going to be education’s downfall, or its latest advancement? Even the creators of the technology admit they cannot fully predict its effect.
As AI becomes more embedded in higher education, faculty will face the same question their students do: are AI tools a threat to guard against, or an instrument to master?
