By Adam Elias
For the past several months, colleges and universities across the globe have been reckoning with the realization that there are learning systems, and then there are learning systems. The difference between the two is that the first is where you find your syllabus and submit your assignments, while the second is expected to eventually rise up and take over the planet. At Bellarmine, one is called “Moodle”; the other is potentially feudal.
The concept of technologies that can learn is not new. The coining of the term “artificial intelligence,” or “AI,” is commonly attributed to John McCarthy of Dartmouth College back in the 1950s. So we’ve had a pretty big heads-up that this was coming. Since McCarthy gave AI its name, decades of development of artificial “neural networks” in computers, the evolution of hardware and circuitry, and the exponential rise of data and the speed of its processing have all converged to the present point at which we’re all talking about AI, because it’s here, it’s affecting our lives in real ways, and it’s not going away.
The truth is, artificial intelligence has already taken over the world, at least in terms of interest and intrigue. We’ve all heard about it, read about it, used it, and been subjected to it without knowing. Industries are racing to tap into it. How long before AI ascends to the throne of planet Earth is anyone’s guess, but for now, civilization and our ways are its playthings.
This is largely because anyone can join in the game. While AI is not new, its newfound accessibility absolutely is—particularly for generative AI, which allows you to will new forms of content and media into existence simply by asking.
Anyone can give the magic lamp a rub. It’s not just interest from afar; it’s “Here, see what you can make with it!” ChatGPT, the most widely known AI chatbot, is quickly becoming ubiquitous, and for good reason: It’s like
a remote personal assistant with a hive mind. Whether you want ideas, written content, instruction, feedback, or direction, ChatGPT can give it to you in seconds.
The generative AI boom goes well beyond ChatGPT. Midjourney and other AI art generators are producing visual content so compelling and authentic that actual human graphic designers are feeling threatened, with some even considering throwing in the towel as theirs becomes one of the first fields heavily disrupted by AI.
Dozens of AI-driven audio synthesizers and voice cloning platforms are freely available, and my, are they capable. YouTube is packed with deepfake videos showing incredibly lifelike depictions of people—even those long since departed—doing
or saying things those real people did not actually do or say. We’re barely home from the hospital with our new robot baby, and we’re already well into the generative AI revolution.
With just a few hours, everyday resources, and minimal technical skill—nothing beyond the generative AI platforms that are now widely available—here’s a smattering of things I can do:
- With less than a minute of an audio sample of your speech, I can clone your voice and use it to say whatever I want, with accuracy and precision that could fool your closest friends and family.
- I can craft compelling and lifelike images to fabricate reality. I can send you high-resolution pictures of something that did not happen without any kind of Photoshop expertise.
- I can ask an AI to generate unique human-sounding essays and submit them as my own work. I can ask that same AI to tweak the essay in subtle ways that make it read even more authentically in my own voice and perspective. I could crank out dozens of
pages in minutes.
I don’t know about you, but any “wow” factor of those activities quickly gives way to a gnawing “yikes” factor. The dangers of this technology are obvious, and the proverbial cat is not going back in its bag; these capabilities
are now snugly in the toolbelt of pretty much anyone on the planet, and they’re only going to become more powerful, refined and accessible.
There are massive implications for every industry in the world, and for those of us who attend, work at, or maintain close ties with a college or university, this is a particularly big deal.
Higher education brought low?
As a career-long higher education professional who has spent the past 15 years focused on technology and innovation in this sector, I’m still floored by the meteoric rise of generative AI. Only a few years ago, the concept of online education was widely perceived as a terribly disruptive and existential threat to the entire premise of the university. (Simpler times, those were.)
Sure, online learning has changed higher ed, but now we’re wondering less about how institutions deliver their services and more about the stability of basic pieces of longstanding academic bedrock. With AI in the picture, what is knowledge? What
is art? What is plagiarism? What is a source? Who—or what—is an author? Is this all hyperbole? (I think not.)
A simple example illustrates many of these questions. Colleges and universities have long leaned heavily on plagiarism-detection tools like the widely used Turnitin. Such products routinely check student essays against massive libraries of data, both to detect and to deter plagiarism. Bellarmine has licensed Turnitin and similar services for many years. This technology has provided some peace of mind to faculty that their students are writing the essays they submit.
Into this routine and uneventful landscape of college writing and assessment strolls generative AI. Among the many talents of ChatGPT is that it can craft essays—and very good ones at that—allowing for iterative enhancements based on suggestions
or additional input. I can ask ChatGPT to write an essay on a given topic, with a specified word count, from a particular perspective. ChatGPT will give me what I seek, and if it’s not to my liking, I can ask it for edits and alterations. Maybe
I throw in a personal anecdote and a few hand-written lines or paragraphs to customize it around my own knowledge and experiences—aspects ChatGPT would not know or be able to access.
In less than an hour, I could produce a standard five- to seven-page essay that would be largely indistinguishable from an essay written by the average human on their own. Without glaring errors or an in-depth comparative analysis of that essay against
previous essays I’d written, there would be few, if any, bases for claiming I hadn’t written the paper on my own.
In the context of AI-generated essays, the challenge is that much of the output is, in fact, unique. Technically, the AI-written essay is not plagiarized in the traditional sense. The AI has been trained on the human art and technique of writing essays by “reading” far more of them than any flesh-and-blood person could ever hope to read in a lifetime, and so it can write a strikingly good one in virtually any style; it’s not merely piecing together existing content from the various corners of the web.
Pausing here for a moment, I can’t help but wonder about the moral assumptions many of us bring to this topic. I have no doubt that when manual typewriters began to be supplanted by personal computers, no shortage of writers and authors saw the
rise of digital technologies as a kind of cheating: easy editing and formatting, later additions such as built-in spellchecking and thesauruses, and especially the most recent autocomplete capabilities built into Microsoft Word.
In the modern context, where do we draw the line between authorship and technological assistance? The comparison may not be apples-to-apples, but one could certainly argue that generative AI is simply a platform for a new kind of content generation.
Maybe this is how writing will be done in the near future. Is it better? That depends on the definition of “better.” Perhaps the data-processing power of AI will eventually be able to generate content that is “better” (by
some definitions) or of a higher quality than what a human would have created. What about the ethical and moral questions? When would it be OK to use AI in this way? Is it more OK when it’s more commonly accepted in a given industry? When
does it matter?
Content dystopia
Those questions are difficult to tackle, even in academia. Over the past several months, I’ve been involved in numerous conversations with concerned faculty and staff, where the gist has boiled down to, “How do we catch it?” or “How
do we beat it?”
I’ve seen Terminator and The Matrix. I even sat through Tron. I know how that usually goes for human civilization.
And it’s true here: Resistance is futile. There is no winning counterstrategy, as much as I’d like to see Dr. Jon Blandford riding out to war against cyborg ChatGPT essays.
As the models behind the AI are trained on more and more data—while we gape at our computer screens wondering how in the world we’re going to cope—AI is getting better at producing output that appears distinctly and uniquely human. Various
“AI detectors” have arisen, but considering the untold wealth that is being poured into developing actual AI technologies—investments that are only going to increase—engaging in an intellectual arms race with AI is not a winning
endeavor.
From the perspective of academia, it’s difficult not to feel some despair here. We’re teetering on the brink of a near-total undermining of a foundational student assessment strategy: the essay. And that’s a doozy, isn’t it? Think about the
number of hours of your life you spent writing essays in high school and college. For me, it’s probably somewhere in the neighborhood of an entire month of my life, grinding away at research, hashing out drafts, editing and fine-tuning. If I
were an undergrad today, would it be tempting to let ChatGPT do the heavy lifting, so that I could spend that time on something more meaningful? Of course it would. Am I advocating for shortcuts? In some cases, sure. Am I advocating for graduating
students without the skills that essays would assess? Absolutely not. Do I think there’s another way forward? Absolutely, yes.
The future is still bright
This is not a hopeless struggle—or even a dreary one. In fact, there has never been a more exciting time to be in college, to work at or support one, or even just to learn. Education’s reckoning with generative AI is going to be messy and
laborious, but asking, “How do we beat it?” is entirely the wrong question.
Based on my own experiences with generative AI over the past few months, I’m firmly in the camp of the hopeful. Sure, we have plenty of work ahead of us to reinvent higher education in response to—and in anticipation of—these new and powerful technologies, but have you considered what this stuff can do?
The advancements enabled by AI across all industries should be exciting to all of us. I’ve read about AI-boosted cancer screening that can flag tumors years before they would otherwise be detected. Read that sentence again. Game-changer. Complex and millennia-old problems that humans have been unable to crack may be solved in the next few decades, thanks in part to the good people of Earth who will put AI to good use.
I’ve read about the possibilities for digital experiences to one day be fully generative. For example, imagine a video game where you’d quite literally enter and interact with the land of your dreams. When you consider the separate components of generative AI discussed here—dynamically generated sights, sounds and scripts—that concept is not so far-fetched at all. Imagine the important new pathways this opens for students in medicine, healthcare, hospitality, business, ethics, public policy, software and data engineering, and so many more fields.
And as for education, AI should not be viewed as a looming dismantler but as the ultimate tutor. I discovered this on my own, and it has truly opened my eyes to the potential for ChatGPT to support individual learning and development. For the past several months, I’ve been learning the Java programming language, and at points of great frustration in practice, I found I could feed my code into ChatGPT, ask, “What’s wrong with this code?” and receive detailed feedback both general (“You might think about…”) and specific (“There’s an issue on line __ of code”), in a friendly human style and voice. Responses are immediate, completely customized to my question, and backed by a knowledge base distilled from enormous swaths of the Internet. ChatGPT has been an immensely effective learning support in my own studies, and that’s just a single, early use, discovered by chance.
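To make that concrete, here’s a small, hypothetical stand-in for the kind of snippet I’d paste in (the class name and values are invented for illustration, not taken from my actual practice files). It contains a classic beginner bug that a tutor, human or AI, reliably catches: comparing Java Strings with == instead of .equals().

```java
public class GreetingCheck {
    public static void main(String[] args) {
        String answer = new String("yes");  // a String object built at runtime

        if (answer == "yes") {              // bug: == compares object references, not contents
            System.out.println("Confirmed!");
        } else {
            System.out.println("Not confirmed."); // this branch runs, to a beginner's surprise
        }
        // The fix ChatGPT suggests: answer.equals("yes")
    }
}
```

In my experience, pasting something like this in with “What’s wrong with this code?” gets back both the diagnosis and the corrected line, explained at whatever level you ask for.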
At the same time, we must remember that these latest public-facing manifestations of AI are still works in progress. In another attempt with ChatGPT, I tried to offload a bunch of work-related research by asking it to fetch a list of universities similar
to Bellarmine that offered a specific graduate program, with enrollment numbers, faith affiliation, and links to their program websites. ChatGPT instantly returned everything I asked for in fine detail, and I was thrilled. I’d saved hours of
work time!
Several days later, I circled back to the research and clicked one of the links. And then another, and another. None of them worked. As I worked from there to verify the information it had given me, I slowly realized that ChatGPT had made up everything.
The generative AI had indeed generated content, and masterfully; unfortunately, it was simply fictitious.
Surprisingly, ChatGPT has also come up short in conducting simple math calculations. In an effort to save myself mundane taps on a calculator, I fed the AI the very simple request of finding the average of a bunch of numbers. Over and over again, it walked me through how to do the math—quite accurately, mind you—but then failed spectacularly at doing the actual math.
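For scale, here is the entire task, expressed in the Java I’d been practicing (the numbers are made up, since I no longer have my originals). ChatGPT could describe every one of these steps correctly and still botch the final division:

```java
public class AverageCheck {
    public static void main(String[] args) {
        double[] values = {23.5, 41.0, 37.25, 19.5}; // stand-in numbers for illustration

        double sum = 0;
        for (double v : values) {
            sum += v;                                // add up the values: 121.25
        }

        System.out.println(sum / values.length);     // divide by the count: prints 30.3125
    }
}
```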
So, take all my “This is the future!” and “The AI will conquer all!” ravings with a grain of salt. Effective use of generative AI requires that you understand its shortcomings, though I expect that list to shrink soon. Surely, basic math skills are a prerequisite for planet overlordship.
So what now?
If you’re a student, consider how to use generative AI in ways that support your learning rather than mindlessly replacing it. While ChatGPT can write your essay on human behavior research, it can’t impart to you an understanding of how to
work with a team, how to think critically, how to effectively treat a sprained ankle, or how to manage a business. Bellarmine provides the opportunity to learn these life and career skills, and shortchanging the process only harms you and your career
ambitions.
At the same time, Bellarmine’s faculty are responsive to our world and eager to innovate in their fields. Since they’re responsible for providing opportunities for their students to reach learning outcomes in Bellarmine courses and programs, you can expect our faculty to adapt their teaching to the new AI-infused world in which we live. I’m convening a group of faculty and staff in the fall—the AI Collective—to intentionally think and talk through the challenges and opportunities AI presents for the university.
Students are likely to see shifts in teaching and learning activities and assignments that reflect the presence of generative AI on the public stage. In the future, assessments may take different forms—perhaps more formative, where students provide
more drafts that demonstrate their thought processes and the iterative development of ideas and arguments.
Competency-based education likely stands to see more widespread adoption in the current landscape, too; students may be asked to display their knowledge or skills in new, different or nontraditional ways, as faculty put less stock in essays or summative
assessments altogether. This is all still new and evolving, but we’ll do our best to take steps forward that ensure Bellarmine continues to offer the highest quality education in the region.
Speaking of the near future, as a technology guy but most importantly as a parent, I can’t help but wonder about what kind of future we’re creating as we make these present choices about how to use or respond to AI. We’re surrounded by grey frontiers, and these are exactly the kinds of questions for which we should turn to people like Bellarmine’s own Dr. Kate Johnson, associate professor of Philosophy, or Fr. John Pozhathuparambil, director of Campus Ministry, to help us consider the ethical, moral and philosophical implications. If you’ve seen the Netflix series Black Mirror (and if not, you absolutely should!), you’ll recognize the slippery slope from “Hey, that’s cool!” to “OMG, we’ve irreversibly ruined human culture.”
As we plunge forward into a tomorrow that is increasingly focused on, and reliant upon, artificial intelligence, never has there been a greater need for Bellarmine’s commitment to fostering the heart, mind and spirit of our community. Because at the rate at which change happens at this point in history, who knows what the next year will bring?
Other than the AIs, of course. The AIs definitely know.
Adam Elias is Bellarmine’s director of Innovative Learning Systems, a role in which he is charged with enhancing the culture of professional growth and development at Bellarmine while also promoting excellence and innovation in teaching. He provides leadership for the Faculty Development Center and the university’s distance education initiatives.