Ethics & Policy
AI Is Transforming Exams and Assessments at Catholic Colleges

JAMAICA — This fall, Father Patrick Flanagan, a professor at St. John’s University, just might revive a classic exam tool — the “blue book” — in response to the proliferation of artificial intelligence.
Father Flanagan said people who attended college in decades past would recall the blue-covered booklet of blank, lined pages that students used to write their exam essay answers with a pen, not a keyboard.
Today’s students, by contrast, can plug into computer programs like ChatGPT, Copilot, or Perplexity to generate essay answers.
Father Flanagan chairs St. John’s Department of Theology and Religious Studies.
He also lectures on moral theology in the marketplace, including “cyber ethics.”
His biggest concern, however, is not that students will use AI to cheat, but that the technology could rob them of the chance to develop critical thinking skills. Hence Father Flanagan’s renewed interest in the blue book.
“It’s seemingly the only way to get kids to contemplate an organization of thought, and then pump out a reasonable essay,” Father Flanagan said.
This situation is one of myriad issues faced by colleges and universities — including those affiliated with the Catholic Church — on the ever-expanding landscape of artificial intelligence.
Reactionary Stage
Simply put, AI refers to computer systems that complete tasks traditionally requiring human intelligence, such as learning and problem-solving, at breakneck speed.
For example, a chatterbot (commonly called a “chatbot”) is a computer program that mimics human conversation through voice commands or text chats.
Amazon’s popular assistant, Alexa, takes a spoken question, such as “What is today’s weather forecast?” and answers immediately. Apple’s Siri does likewise.
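At their simplest, such programs are keyword matchers wrapped around a library of canned answers. The Python sketch below is a purely illustrative toy under that assumption; the keywords and responses are invented, and commercial assistants like Alexa and Siri rely on far more sophisticated speech recognition and language models.

```python
# Minimal rule-based chatbot: match a keyword, return a canned answer.
# (Illustrative only; keywords and responses are invented.)
RESPONSES = {
    "weather": "Today's forecast: sunny with a high of 75 degrees.",
    "time": "It is currently 3:00 p.m.",
}

def reply(question: str) -> str:
    """Return the canned answer whose keyword appears in the question."""
    text = question.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Sorry, I don't know how to answer that."

print(reply("What is today's weather forecast?"))
```

Generative tools like ChatGPT go far beyond this lookup pattern, composing new text word by word from a statistical model of language, which is precisely what makes them useful for drafting essays and worrisome for exam integrity.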
Like other institutions, Catholic colleges and universities actively promote programs that teach students to use and build artificial intelligence for applications such as cybersecurity.
Father Flanagan noted how in the past two years the world has seen a “multiplication of all these different platforms and apps.”
“It seems to be that, at this point, we’re in the reactionary stage,” he said. “But we can be reactors at the same time. We could also be constructors. My whole premise is built on the idea that you have to learn how to deal with technology virtuously.”
Many share that sentiment, including the two most recent popes.
Church Prism
In January, the Vatican issued a note approved by Pope Francis that described how Catholics should view the relationship between artificial and human intelligences.
Titled “Antiqua et nova” (Ancient and new), the note warned about the possibility of AI creating more cost-effective solutions for tasks such as financial accounting, thus eliminating high-paying jobs and relegating “deskilled” people to manual labor.
It also identified as a “grave ethical concern” the potential for AI-controlled military weapons systems, such as drones, to operate without human control. The note recalled that Pope Francis, who died in April, had assailed this military technology as an existential risk “that could threaten the survival of entire regions or even of humanity itself.”
His successor, Pope Leo XIV, recently weighed in on AI, calling it “an exceptional product of human genius,” but still a tool, not to be confused with human intelligence.
In a July 11 message to the Second Annual Rome Conference on Artificial Intelligence, the new pope said the development of AI requires steps to safeguard “the inviolable dignity of each human person and respecting the cultural and spiritual riches and diversity of the world’s peoples.”
Thorniest Issues
To that end, schools are also exploring how to address AI’s moral and ethical issues through the prism of Church teachings on human dignity and care for God’s creation. For example, the Catholic University of America in Washington recently announced new bachelor’s and master’s degree programs in artificial intelligence.
Specialized tracks will include AI in healthcare, robotics, and ethical AI design.
Locally, Catholic colleges and universities are engaging with AI issues across their academic programs.
In 2022, Fordham University’s Department of Computer Science launched a doctoral program to explore multiple facets of artificial intelligence.
According to a Fordham press release, this program wrestles with “the thorniest issues of the field, including privacy and responsibility in fields such as artificial intelligence, data science, and cybersecurity.”
Some, like St. John’s, are joining forces with other institutions and private industry to share research and explore the safe and moral use of this rapidly growing technology.
Father Flanagan noted how, last year, the university joined the AI Alliance (founded by IBM and Meta) and the AAC&U Institute on AI, Pedagogy, and the Curriculum.
The Common Good
St. Francis College in Brooklyn has a new master’s degree program — Cybersecurity and Critical Infrastructure Protection — currently under review with the New York State Education Department.
Gale Gibson-Gayle, vice president of academic affairs for graduate education, said the program explores AI methods that address real-world threats with “technical expertise and ethical responsibility.”
She also described SFC’s partnership with Cornell University, offering certificates in AI for healthcare and cybersecurity.
John Edwards, vice president of academic affairs for undergraduates, said the college is also exploring new undergraduate and graduate offerings in AI.
“Our goal is to position SFC as a leader in ethically grounded, forward-looking AI education,” Edwards said.
Tim Cecere, SFC’s president, said these programs are based on the Franciscan principles of “integrity, compassion, and purpose.”
“At St. Francis College, we view technology as a profound gift from God to man,” Cecere said. “It is the extension of human creativity and intellect meant to serve the common good.”
Ethics & Policy
Santa Fe Ethics Board Discusses Revisions to City Ethics Code

One of the key discussions centered on a motion to dismiss a complaint for lack of legal sufficiency, underscoring the board’s commitment to ensuring that candidates adhere to ethical guidelines during their campaigns. Members stressed that candidates must remain vigilant about compliance to avoid unnecessary hearings that detract from their campaigning efforts.
The board also explored the possibility of revising the city’s ethics code to address gaps in current regulations. A member raised concerns about the potential for city councilors to interfere with city staff, suggesting that clearer rules could help delineate appropriate boundaries. Additionally, the discussion touched on the need for stronger provisions against discrimination, particularly in light of the challenges posed by the current political climate.
The board acknowledged that while the existing ethics code is a solid foundation, there is room for improvement. With upcoming changes in city leadership, members agreed that now is an opportune time to consider these revisions. The conversation underscored the board’s role as an independent body capable of addressing ethical concerns that may not be adequately resolved within the current city structure.
As the board continues to deliberate on these issues, the outcomes of their discussions could significantly impact how ethics are managed in Santa Fe, ensuring that the city remains committed to transparency and accountability in governance.
Ethics & Policy
Universities Bypass Ethics Reviews for AI Synthetic Medical Data

In the rapidly evolving field of medical research, artificial intelligence is reshaping how scientists handle sensitive data, potentially bypassing traditional ethical safeguards. A recent report highlights how several prominent universities are opting out of standard ethics reviews for studies using AI-generated medical data, arguing that such synthetic information poses no risk to real patients. This shift could accelerate innovation but raises questions about oversight in an era where AI tools are becoming indispensable.
Representatives from four major medical research centers, including institutions in the U.S. and Europe, have informed Nature that they’ve waived typical institutional review board (IRB) processes for projects involving these fabricated datasets. The rationale is straightforward: synthetic data, created by algorithms that mimic real patient records without including any identifiable or traceable information, doesn’t involve human subjects in the conventional sense. This allows researchers to train AI models on vast amounts of simulated health records, from imaging scans to genetic profiles, without the delays and paperwork associated with ethics approvals.
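The report does not detail the generators themselves, but a common approach is to fit a statistical model to a real cohort and then sample entirely artificial records from it. The Python sketch below illustrates that idea under simplified assumptions; the field names and summary statistics are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical summary statistics, as might be estimated from a real
# patient cohort (values invented for illustration).
AGE_MEAN, AGE_SD = 54.0, 12.0   # years
BP_MEAN, BP_SD = 128.0, 15.0    # systolic blood pressure, mmHg

def synthesize_patients(n: int) -> list[dict]:
    """Sample n artificial patient records from the fitted distributions.

    No record corresponds to a real person, which is the basis of the
    argument that such data falls outside human-subjects review.
    """
    ages = rng.normal(AGE_MEAN, AGE_SD, n).clip(18, 90)
    bps = rng.normal(BP_MEAN, BP_SD, n)
    return [
        {"age": int(round(a)), "systolic_bp": round(float(b), 1)}
        for a, b in zip(ages, bps)
    ]

for record in synthesize_patients(3):
    print(record)
```

Even this toy version shows why bias remains a live concern: the synthetic records can only echo the statistics of whatever real data the generator was fitted to.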
The Ethical Gray Zone in AI-Driven Research
Critics, however, warn that this approach might erode the foundational principles of medical ethics, established in the wake of historical abuses like the Tuskegee syphilis study. By sidestepping IRBs, which typically scrutinize potential harms, data privacy, and informed consent, institutions could inadvertently open the door to biases embedded in the AI systems generating the data. For instance, if the algorithms are trained on skewed real-world datasets, the synthetic outputs might perpetuate disparities in healthcare outcomes for underrepresented groups.
Proponents counter that the benefits outweigh these concerns, particularly in fields like drug discovery and personalized medicine, where data scarcity has long been a bottleneck. One researcher quoted in the Nature article emphasized that synthetic data enables rapid prototyping of AI diagnostics, potentially speeding up breakthroughs in areas such as cancer detection or rare-disease modeling. The universities cited in the report are already integrating these methods into their workflows, viewing them as a pragmatic response to regulatory hurdles that can stall projects for months.
Implications for Regulatory Frameworks
This trend is not isolated; it’s part of a broader push to adapt ethics guidelines to AI’s capabilities. In the U.S., the Food and Drug Administration has begun exploring how to regulate AI-generated data in clinical trials, while European bodies under the General Data Protection Regulation (GDPR) are debating whether synthetic datasets truly escape privacy rules. Industry insiders note that companies like Google and IBM are investing heavily in synthetic data generation, seeing it as a way to comply with strict data protection laws without compromising on innovation.
Yet, the lack of uniform standards could lead to inconsistencies. Some experts argue for a hybrid model where synthetic data undergoes a lighter review process, focusing on algorithmic transparency rather than patient rights. As one bioethicist told Nature, “We’re trading one set of risks for another—real patient data breaches for the unknown perils of AI hallucinations in medical simulations.”
Balancing Innovation and Accountability
Looking ahead, this development could transform how medical research is conducted globally. With AI tools becoming more sophisticated, the line between real and synthetic data blurs, promising faster iterations in machine learning models for epidemiology or vaccine development. However, without robust guidelines, there’s a risk of public backlash if errors in synthetic data lead to flawed research outcomes.
Institutions are responding by forming internal committees to self-regulate, but calls for international standards are growing. As the Nature report underscores, the key challenge is ensuring that this shortcut doesn’t undermine trust in science. For industry leaders, the message is clear: embrace AI’s potential, but proceed with caution to maintain the integrity of ethical oversight in an increasingly digital research environment.
Ethics & Policy
Canadian AI Ethics Report Withdrawn Over Fabricated Citations

In a striking irony that underscores the perils of artificial intelligence in academic and policy circles, a comprehensive Canadian government report advocating for ethical AI deployment in education has been exposed for citing over 15 fabricated sources. The document, produced after an 18-month effort by Quebec’s Higher Education Council, aimed to guide educators on responsibly integrating AI tools into classrooms. Instead, it has become a cautionary tale about the very technology it sought to regulate.
Experts, including AI researchers and fact-checkers, uncovered the discrepancies when scrutinizing the report’s bibliography. Many of the cited works, purportedly from reputable journals and authors, simply do not exist—hallmarks of AI-generated hallucinations, where language models invent plausible but nonexistent references. This revelation, detailed in a recent piece by Ars Technica, highlights how even well-intentioned initiatives can falter when relying on unverified AI assistance.
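Hallucinated references are often mechanical to catch: each citation can be looked up in a bibliographic registry. The Python sketch below checks a DOI against the public Crossref API; it is an illustrative approach, not necessarily the method these fact-checkers used, and the sample DOI is fabricated.

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI resolves in the public Crossref registry."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        headers={"User-Agent": "citation-checker/0.1"},
        timeout=10,
    )
    return resp.status_code == 200

# A fabricated DOI of the kind a language model might invent.
print(doi_exists("10.1234/nonexistent.2024.001"))  # expected: False
```

Many fabricated citations carry no DOI at all, so checkers also search titles and authors against databases such as Crossref or Google Scholar; either way, the check is routine enough to automate.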
The Hallucination Epidemic in Policy Making
The report’s authors, who remain unnamed in public disclosures, likely turned to AI models like ChatGPT or similar tools to expedite research and drafting. According to the Ars Technica analysis, more than a dozen citations pointed to phantom studies on topics such as AI’s impact on student equity and data privacy. Nor is this an isolated incident: a study covered by ScienceDaily warns that AI’s “black box” nature exacerbates ethical lapses, leaving decisions untraceable and potentially harmful.
Industry insiders point out that such fabrications erode trust in governmental advisories, especially in education where AI is increasingly used for grading, content creation, and personalized learning. The Quebec council has since pulled the report for revisions, but the damage raises questions about accountability in AI-augmented workflows.
Broader Implications for AI Ethics in Academia
Delving deeper, this scandal aligns with the findings of an AAUP report on artificial intelligence in higher education, which emphasizes the need for faculty oversight to mitigate risks like algorithmic bias and privacy breaches. Without stringent verification protocols, AI tools can propagate misinformation at scale, as evidenced by the Canadian case.
Moreover, a qualitative study published in Scientific Reports explores ethical issues in AI for foreign language learning, noting that unchecked use could undermine academic integrity. For policymakers and educators, the takeaway is clear: ethical guidelines must include robust human review to prevent AI from fabricating the evidence base itself.
Calls for Reform and Industry Responses
In response, tech firms are under pressure to enhance transparency in their models. A recent Ars Technica story on a Duke University study reveals that professionals who rely on AI often face reputational stigma, fearing judgment for perceived laziness or inaccuracy. This cultural shift is prompting calls for mandatory disclosure of AI involvement in official documents.
Educational bodies worldwide are now reevaluating their approaches. For instance, a report from the Education Commission of the States discusses state-level responses to AI, advocating balanced innovation with ethical safeguards. As AI permeates education, incidents like the Quebec report serve as a wake-up call, urging a hybrid model where human expertise tempers technological efficiency.
Toward a More Vigilant Future
Ultimately, this episode illustrates the double-edged sword of AI: its power to streamline complex tasks is matched by its potential for undetected errors. Industry leaders argue that investing in AI literacy training for researchers and policymakers could prevent future mishaps. With outlets such as Brussels Signal reporting a surge in ethical breaches, the path forward demands not just better tools but a fundamental rethinking of how we integrate them into critical domains like education policy.