AI Insights
TerrierGPT Provides BU Community with Free Access to Leading Chatbots | BU Today

New tool is the result of a partnership between the University’s Artificial Intelligence Development Accelerator and Information Services & Technology
TerrierGPT, a new offering from Boston University’s AI Development Accelerator and IS&T, provides BU community members with free access to leading AI chatbots, including OpenAI’s ChatGPT and Google Gemini. Photo by Bob O’Connor
As the world of artificial intelligence continues to expand, Boston University is offering its own chatbot for staff, faculty, and students: TerrierGPT.
The free generative artificial intelligence (AI) tool is the result of a partnership between BU’s Artificial Intelligence Development Accelerator (AIDA) for Academic and Administrative Excellence, an initiative tasked with exploring how AI technologies can be used in academic settings, and Information Services & Technology (IS&T).
TerrierGPT provides members of the BU community with free access to their choice of leading AI chatbots from OpenAI, Anthropic, Amazon, Meta, and Google. The University’s community members can log into TerrierGPT at terriergpt.bu.edu using their Kerberos credentials.
Why bring chatbots to BU?
First, because AI’s impact on daily life is continuing to expand in profound ways.
“It’s become very obvious that generative AI is going to transform higher education and that the future workforce will need basic AI literacy skills on their résumés, independent of discipline,” says Kenneth Lutchen, BU’s vice president and associate provost for research and AIDA interim executive director.
“Our mission at BU is to create holistic citizens that get a degree in a specific discipline, but have foundational skills and capabilities,” Lutchen says. “We’re trying to ensure they know how to use generative AI in the most constructive, productive way possible for themselves, for their careers, and for society.”
It’s also a matter of equity.
“We saw an unevenness [across BU] with respect to knowing what AI [is capable of], and having access to AI models,” says John Byers, AIDA codirector and a Faculty of Computing & Data Sciences professor of computer science. “At a high level, the main goal of TerrierGPT is to democratize access to AI and give people access to a bunch of the best models out there.”
Finally, your personal and BU-related data are safer within TerrierGPT than outside of it.
“If you went to the free version of ChatGPT, for example, and entered your queries, that data has been sent to OpenAI and they can use it for whatever they want,” says Bob Graham, AIDA interim chief AI officer and IS&T associate vice president of enterprise architecture and applications. “For TerrierGPT, we established protections that mean any data entered into it is BU’s, and companies don’t have any right to that data.”
BU Today spoke to Lutchen, Byers, and Graham to get answers to some of the most commonly asked questions about TerrierGPT.
FAQ
What can TerrierGPT do?
TerrierGPT can be used for a variety of purposes, both personal and academic. For example, students can use TerrierGPT to generate study guides or model test questions, while faculty can use the tool to help create course syllabi or lesson plans. No one is required to use TerrierGPT, however.
Learn more about TerrierGPT use cases here.
Why offer access to different models?
Different models serve different needs. For instance, some models are better at logic and solving coding problems. Offering a variety of models allows users to select their preferred model or the model most appropriate for their tasks.
How should I approach using AI in an academic setting?
BU takes a “critical embrace” approach to generative AI in research and academia: the technology should be used with sensible guardrails and with its benefits and limitations kept in mind. Overall, TerrierGPT should be used to augment, not replace, learning and instructional capabilities. Students and instructors should also be transparent about their use of AI.
Find AIDA’s generative AI guidelines for students and faculty and staff here.
Is TerrierGPT secure?
Yes. Unlike when you use non-BU versions of ChatGPT and other chatbots, the information you enter will not be used as training data.
AIDA’s website notes: “The platform complies with BU’s internal privacy and data protection policies—and none of the data entered is used to train external models. Data uploaded to the platform is only accessible by IS&T personnel and has the same strong privacy protections applicable to all BU enterprise data, such as emails and documents stored on BU’s OneDrive. However please note that TerrierGPT is not approved for use with restricted use data, including HIPAA-regulated information.”
What environmental concerns were factored into bringing this tool to BU?
Building an AI model from scratch requires a tremendous amount of energy. AIDA sought to leverage existing chatbot technology to significantly reduce the amount of resources needed to create TerrierGPT. The additional energy burden of using TerrierGPT is low.
What’s coming next for TerrierGPT and generative AI at BU?
Future expansions are planned for TerrierGPT and related products, including new features and capabilities. BU also plans to launch an online generative AI literacy course for undergraduates, after which students will earn a digital certificate. For faculty, AIDA and the Institute for Excellence in Teaching & Learning are partnering to offer a series of symposiums this fall on using AI for instructional purposes. (Attendees must register for each event.)
Look for more updates from AIDA as the academic year progresses.
Find the answers to more FAQ about TerrierGPT, including information about technical specifications, here.
AI chatbots and mental health: How to cover the topic responsibly

Artificial intelligence-powered chatbots can provide round-the-clock access to supportive “conversations,” which some people are using as a substitute for interactions with licensed mental health clinicians or friends. But users may develop dependencies on the tools and mistake these transactions for real relationships or true therapy. Recent news stories have discussed the dangers of chatbots’ relentlessly agreeable, supportive nature. In some incidents, people have developed AI-related psychosis or been encouraged in plans to die by suicide.
What is it about this technology that sucks people in? Who is at risk? How can you report on these conditions sensitively? In this webinar, hear from moderator Karen Blum and an expert panel, including psychiatrists John Torous, M.D. (Beth Israel Deaconess Medical Center) and Keith Sakata, M.D. (UC San Francisco), and Mashable senior reporter Rebecca Ruiz, to learn more.
Karen Blum
AHCJ Health Beat Leader for Health IT
Karen Blum is AHCJ’s health beat leader for health IT. She’s an independent health and science journalist, based in the Baltimore area. She has written for publications such as the Baltimore Sun, Pharmacy Practice News, Clinical Oncology News, Clinical Laboratory News, Cancer Today, CURE, AARP.org, General Surgery News and Infectious Disease Special Edition; covered numerous medical conferences for trade magazines and news services; and written many profiles and articles on medical and science research as well as trends in health care and health IT. She is a member of the American Society of Journalists and Authors (ASJA) and chairs its Virtual Education Committee; and a member of the National Association of Science Writers (NASW) and its freelance committee.

Rebecca Ruiz
Senior reporter, Mashable
Rebecca Ruiz is a Senior Reporter at Mashable. She frequently covers mental health, digital culture, and technology. Her areas of expertise include suicide prevention, screen use and mental health, parenting, youth well-being, and meditation and mindfulness. Rebecca’s experience prior to Mashable includes working as a staff writer, reporter, and editor at NBC News Digital and as a staff writer at Forbes. Rebecca has a B.A. from Sarah Lawrence College and a master’s degree from UC Berkeley’s Graduate School of Journalism.

Keith Sakata, M.D.
Psychiatry resident, UC San Francisco
Keith Sakata, M.D., is a psychiatry resident at the University of California, San Francisco, where he founded the Mental Health Innovation and Digital Hub (MINDHub) to advance AI-enabled care delivery. He provides treatment and psychotherapy across outpatient and specialty clinics, with a focus on dual diagnosis, PTSD, OCD, pain, and addiction.
Dr. Sakata previously trained in internal medicine at Stanford Health Care and co-founded Skript, a diagnostic training platform adopted by UCSF and Stanford that improved medical education outcomes during the COVID-19 pandemic. He currently serves as clinical lead at Sunflower, an addiction recovery startup. He also advises startups working to improve access in mental health, including Two Chairs and Circuit Breaker Labs, which provides a safety layer for AI tools in mental health care.
His professional interests bridge psychiatry, neuroscience, and digital innovation. Dr. Sakata holds a B.S. in Neurobiology from UC Irvine and earned his M.D. from UCSF.

John Torous, M.D., MBI
Director, Digital Psychiatry, Beth Israel Deaconess Medical Center
John Torous, M.D., MBI, is director of the digital psychiatry division in the Department of Psychiatry at Beth Israel Deaconess Medical Center (BIDMC), a Harvard Medical School-affiliated teaching hospital, where he also serves as a staff psychiatrist and associate professor. He has a background in electrical engineering and computer sciences and received an undergraduate degree in the field from UC Berkeley before attending medical school at UC San Diego. He completed his psychiatry residency, fellowship in clinical informatics and master’s degree in biomedical informatics at Harvard.
Torous is active in investigating the potential of mobile mental health technologies for psychiatry and his team supports mindapps.org as the largest database of mental health apps, the mindLAMP technology platform for scalable digital phenotyping and intervention, and the Digital Navigator program to promote digital equity and access. Torous has published over 300 peer-reviewed articles and five book chapters on the topic. He directs the Digital Psychiatry Clinic at BIDMC, which seeks to improve access to and quality of mental health care through augmenting treatment with digital innovations.
Torous serves as editor-in-chief for the journal JMIR Mental Health, web editor for JAMA Psychiatry, and a member of various American Psychiatric Association committees.
AI’s Baby Bonus? | American Enterprise Institute

It seems humanity is running out of children faster than expected. Fertility rates are collapsing around the world, often decades ahead of United Nations projections. Turkey’s fell to 1.48 last year—a level the UN thought would not arrive until 2100—while Bogotá’s is now below Tokyo’s. Even India, once assumed to prop up global demographics, has dipped under replacement. According to a new piece in The Economist, the world’s population, once projected to crest at 10.3 billion in 2084, may instead peak in the 2050s below nine billion before declining. (Among those experts mentioned, by the way, is Jesús Fernández-Villaverde, an economist at the University of Pennsylvania and visiting AEI scholar.)
From “Humanity will shrink, far sooner than you think” in the most recent issue: “At that point, the world’s population will start to shrink, something it has not done since the 14th century, when the Black Death wiped out perhaps a fifth of humanity.”
This demographic crunch has defied policymaker efforts. Child allowances, flexible work schemes, and subsidized daycare have barely budged birth rates. For its part, the UN continues to assume fertility will stabilize or rebound. But a demographer quoted by the magazine calls that “wishful thinking,” and the opinion is hardly an outlier.
See if you find the UN assumption persuasive:
It is indeed possible to imagine that fertility might recover in some countries. It has done so before, rising in the early 2000s in the United States and much of northern Europe as women who had delayed having children got round to it. But it is far from clear that the world is destined to follow this example, and anyway, birth rates in most of the places that seemed fecund are declining again. They have fallen by a fifth in Nordic countries since 2010.
John Wilmoth of the United Nations Population Division explains one rationale for the idea that fertility rates will rebound: “an expectation of continuing social progress towards gender equality and women’s empowerment”. If the harm to women’s careers and finances that comes from having children were erased, fertility might rise. But the record of women’s empowerment thus far around the world is that it leads to lower fertility rates. It is not “an air-tight case”, concedes Mr Wilmoth.
Against this bleak backdrop, technology may be the only credible source of hope. Zoom boss Eric Yuan recently joined Bill Gates, Nvidia’s Jensen Huang, and JPMorgan’s Jamie Dimon in predicting shorter workweeks as advances in artificial intelligence boost worker productivity. The optimistic scenario goes like this: As digital assistants and code-writing bots shoulder more of the office load, employees reclaim hours for home life. Robot nannies and AI tutors lighten the costs and stresses of parenting, especially for dual-income households.
History hints at what could follow. Before the Industrial Revolution, wealth and fertility went hand-in-hand. That relationship flipped when economies modernized. Education became compulsory, child labor fell out of favor, and middle- and upper-class families invested heavily in fewer children’s education and well-being.
But today, wealthier Americans are having more children, treating them as the ultimate luxury good. As AI-driven abundance spreads more broadly, perhaps resulting in the shorter workweeks those CEOs are talking about, larger families may once again be considered an attainable aspiration for regular folks rather than an elite indulgence. (Fingers crossed, given this recent analysis from JPM: “The vast sums being spent on AI suggest that investors believe these productivity gains will ultimately materialize, but we suspect many of them have not yet done so.”)
Indeed, even a modest “baby bonus” from technology would be profound. Governments are running out of levers to pull, dials to turn, and buttons to press. AI-powered productivity may not just be the best bet for growth, it could be the only realistic chance of nudging humanity away from demographic decline. This is something for governments to think hard about when deciding how to regulate this fast-evolving technology.
How to combat AI in college classrooms

Blue books are back in college classrooms! Remember those? Professors are embracing the old exam booklets again as a way to combat AI cheating.
Our guest, Clay Shirky, who studies AI and technology at NYU, argues that we may need to “go medieval” with education and return to the days of oral exams. Other ideas being floated include more use of the Socratic method, calling on students in class, and extended office hours to ensure students are absorbing the material. But are those really feasible for large universities?
Eighty percent of students use AI to help with their coursework, but they say they aren’t simply outsourcing it: they’re using chatbots as tutors, to quiz them, and to brainstorm.
This episode, do we need to completely rethink how colleges educate students? How can we inspire students to be thoughtful AI users and not lazy ones? We’ll talk about how artificial intelligence is shaping learning and how universities are grappling with it.
Guest:
Clay Shirky – Vice Provost of AI and Technology in Education at New York University