Artificial Intelligence Is Making Research Faster; Can It Also Help Make Catholic Colleges More Catholic?
Scholars see benefits, problems — and an opportunity.
For Catholic colleges, artificial intelligence (AI) provides both a powerful tool and a reason for caution.
Powerful because of its ability to analyze massive amounts of information and produce useful summaries and descriptions in seconds.
Caution because of its ability to help students cheat on assignments and otherwise turn off their minds.
But most scholars accept that it’s here to stay and are hopeful the pluses will far outweigh the minuses.
“We have to make sure that we’re not making this into a spiritual enemy. It really isn’t,” Ryan Schaaf, an associate professor of education at Notre Dame University of Maryland, told the Register.
“When it comes to artificial intelligence, I see it as becoming really a companion that’s going to help humans see their potential,” he continued. “I think we really have to look at ways in which it can help us to become a partner in our lives, a partner in our faith … and help us to become the best versions of ourselves.”
Right From the Artificial Horse’s Mouth
In addition to speaking with educators, the Register recently asked ChatGPT, the large language model chatbot, about how Catholic colleges are dealing with artificial intelligence.
The powerful online tool, which launched in November 2022, produces plausible-sounding descriptions and even essays almost immediately after being fed a prompt.
The chatbot’s 415-word answer included some general information about artificial intelligence’s role in assisting research that could apply to any college. But it also offered something specific for Catholics in higher education — phrased in the past tense because the model’s knowledge of the topic ends at its training cutoff date.
“Given the ethical concerns associated with AI, some Catholic colleges were particularly focused on teaching and researching AI ethics within the framework of Catholic social teaching principles. They were addressing issues related to the impact of AI on society, including issues of justice, fairness and the common good,” ChatGPT said.
Asked a follow-up question — “What ethical concerns do Catholic colleges have about artificial intelligence within the framework of Catholic social teaching principles?” — ChatGPT cited human dignity (including concerns about discrimination, privacy and “dehumanizing labor practices”); the common good (“benefit … the marginalized and vulnerable”); solidarity (“fragmentation of communities”); subsidiarity (“concentration of power and decision-making”); and several other points less directly tied to Catholic social teaching.
In short, ChatGPT said nothing that contradicted what the educators told the Register and a lot that aligned with what they said.
But experts warn that at this point a chatbot is a mimic device, not a thinker, and that it often suffers from what users call “hallucinations” — making up facts that aren’t so to fill in the blanks.
How Is Artificial Intelligence Changing What Colleges Do?
Artificial intelligence is the ability of machines driven by computer software to perform tasks and solve problems previously possible only through human intelligence — including generating coherent, grammatical text and producing software programs and mathematical proofs.
Most Americans are personally affected by AI every day — among many other uses, it’s the mechanism behind the autocorrect feature in cellphone text messages, for instance.
But it can also perform tasks that are much more complex. Its ability to almost immediately spot trends amid vast amounts of data aids not just academic research but also health care, banking, insurance and government services, among other things.
At colleges, many instructors are farming out general tasks to AI. That includes writing syllabuses and course descriptions.
Artificial intelligence can also write computer code.
At Lewis University, a Lasallian Christian Brothers school in Romeoville, Illinois, introductory classes in computer science still ask students to “do things by hand,” but they move more quickly to complex tasks than they previously did, said Safwan Omari, professor of computer science.
Instead of spending a year on a specific skill, he told the Register, students now spend about a month on it.
“Instead of asking students to write 2,000 lines of code, I can ask them to write 15,000. Instead of asking for just a few features, we will ask them for 15 features,” Omari said.
What About Cheating?
Some students under pressure because of procrastination, laziness or lack of time or understanding are turning to ChatGPT to quickly produce essays and papers they didn’t research or type.
Such essays may have built-in errors and biases stemming from the internet sources the AI program draws from — mimicking “a really glib person who has an answer to everything but doesn’t really know very much,” explained H. David Sheets, director of the graduate program in data analytics and a physics professor at Canisius University, a Jesuit school in Buffalo, New York.
But such “writing” usually hangs together and lacks grammatical errors. While it’s plagiarism and violates the code of conduct at colleges, it’s tempting because it’s so easy.
How do you persuade students not to do it?
“That comes back to the relationship you have with your students. Do they understand what they’re doing and why they’re doing it? They’re short-circuiting their own education, after all. You want to practice writing to learn how to do it,” Sheets said.
Eric Wellington, dean of the business school at Neumann University, a liberal arts school in Aston, Pennsylvania, sponsored by the Sisters of St. Francis of Philadelphia, said he limits students’ ability to short-circuit the program by testing them in the classroom.
“How do you assess learning? You make them do it. Have them sit in the room and write out the paper,” Wellington said. “I’m a big fan of direct assessment. … That’s how you should teach: Make them prove it.”
Not So Fast
While most schools are affected by AI, not all are.
Wyoming Catholic College, a small liberal arts school in Lander, Wyoming, that emphasizes Great Books and outdoor life, doesn’t use AI much and isn’t concerned about it, said spokesman Julian Kwasniewski.
The school limits internet access in dorms to email and selected websites for class and doesn’t allow students to have cellphones or other hand-held devices with access to wireless data, so the opportunities to cheat using AI are minimal.
“The college tries to promote a low-tech environment in policy and spirit so students can focus on their studies and bond with fellow students and professors in a human, face-to-face way,” Kwasniewski said.
But for schools that embrace AI, educators realize they have to pay close attention to it.
Manjeet Rege, professor and chairman of the department of software engineering and data science at University of St. Thomas in St. Paul, Minnesota, uses an AI program called Honorlock as a proctor — when students take tests using a computer off site, for instance, they have to stay within camera view, and a computer program produces red flags for possible cheating.
As a scientist, Rege embraces AI because it enables tasks that previously either weren’t possible or were much more time-consuming. But he also wants students to be aware of its limits, particularly when it comes to ethics.
He recalls not having access to Google when he was an undergraduate and a graduate student, and he worries about students over-relying on technology — including, now, AI.
“I often tell this to the students in class: If I ask a question, I say, ‘Don’t pick up your phone. Just keep looking at me and think it through,’” Rege said.
They’ll need that skill in their future jobs.
“If they have to sit through a half-an-hour interview without the help of any of these tools, can they have that conversation? The prospective employer really wants to evaluate if this person has the capability of solving the problem or approaching the problem; or is this person more of a massive consumer of ideas generated by a tool, as opposed to the person generating the idea himself?” Rege said.
Artificial Intelligence Not an End but a Catalyst
The Center for Teaching Excellence at The Catholic University of America in Washington, D.C., has developed a document that includes information about AI tools, ways to incorporate them into classrooms, and tips for avoiding academic dishonesty.
This fall, two assistant professors who teach in the school’s politics department are planning to lead a series of sessions on AI for the school’s faculty.
Those scholars, Jonathan Askonas and Justin Litke, wrote in a paper earlier this year about AI as an opportunity to bring about wide-ranging changes in the way Catholic colleges operate.
On a practical level, they say, preventing cheating through AI may require a more personal approach to assessing knowledge, such as getting to know students as individuals, examining their progress over time, and giving them in-person oral exams.
They also call for what they describe as “skills scaffolding” throughout the university, including an early class on the do’s and don’ts of using AI and the basics of how to perform authentic research using high-quality sources; and “more robust tutoring” early in a student’s time in college so that introductory classes can assume a certain level of competence and not “be designed for the lowest common denominator.”
College today is dominated by written papers and written tests. But Askonas and Litke call for certain politics courses to emphasize “the often-neglected republican skill of rhetoric.”
They also seek more cross-disciplinary courses and clearer communication of the point of specific courses — specifically, how these courses develop “skills, virtues, habits, and specialized knowledge,” with the goal of making Catholic schools “not only excellent educators of the mind, but also of the heart and the will.”
Properly structuring a course of study starts with a question, they say: “What kind of person do we want to emerge from the program?”
If such changes come about, the result, Askonas told the Register in a telephone interview, could be a renewed emphasis on what Catholic education is supposed to be about — what he called “transformation.”
“Catholic education presumes the compatibility of faith and reason and the unity of all knowledge. The purpose of education ultimately becomes the love of God and the knowledge of God. And I think that this is a technology that demands that we return to that understanding — that the point of a higher education is not achieving a credential or a particular valued skill in the marketplace, which are likely to become obsolete quickly,” Askonas said. “It is in fact becoming a different kind of person.”
Joan Frawley Desmond, a Register senior editor, contributed to this story.