AI Insights

TerrierGPT Provides BU Community with Free Access to Leading Chatbots | BU Today

New tool is the result of a partnership between the University’s Artificial Intelligence Development Accelerator and Information Services & Technology

TerrierGPT, a new offering from Boston University’s AI Development Accelerator and IS&T, provides BU community members with free access to leading AI chatbots, including OpenAI’s ChatGPT and Google Gemini. Photo by Bob O’Connor


As the world of artificial intelligence continues to expand, Boston University is offering its own chatbot for staff, faculty, and students: TerrierGPT.

The free generative artificial intelligence (AI) tool is the result of a partnership between BU’s Artificial Intelligence Development Accelerator (AIDA) for Academic and Administrative Excellence, an initiative tasked with exploring how AI technologies can be used in academic settings, and Information Services & Technology (IS&T). 

TerrierGPT gives members of the BU community free access to their choice of leading AI chatbots from OpenAI, Anthropic, Amazon, Meta, and Google. Community members can log in at terriergpt.bu.edu using their Kerberos credentials.

Why bring chatbots to BU? 

First, because AI’s impact on daily life is continuing to expand in profound ways. 

“It’s become very obvious that generative AI is going to transform higher education and that the future workforce will need basic AI literacy skills on their résumés, independent of discipline,” says Kenneth Lutchen, BU’s vice president and associate provost for research and AIDA interim executive director. 

“Our mission at BU is to create holistic citizens that get a degree in a specific discipline, but have foundational skills and capabilities,” Lutchen says. “We’re trying to ensure they know how to use generative AI in the most constructive, productive way possible for themselves, for their careers, and for society.”

It’s also a matter of equity.

“We saw an unevenness [across BU] with respect to knowing what AI [is capable of], and having access to AI models,” says John Byers, AIDA codirector and a Faculty of Computing & Data Sciences professor of computer science. “At a high level, the main goal of TerrierGPT is to democratize access to AI and give people access to a bunch of the best models out there.”

Finally, your personal and BU-related data are safer within TerrierGPT than outside of it. 

“If you went to the free version of ChatGPT, for example, and entered your queries, that data has been sent to OpenAI and they can use it for whatever they want,” says Bob Graham, AIDA interim chief AI officer and IS&T associate vice president of enterprise architecture and applications. “For TerrierGPT, we established protections that mean any data entered into it is BU’s, and companies don’t have any right to that data.”

BU Today spoke to Lutchen, Byers, and Graham to get answers to some of the most commonly asked questions about TerrierGPT.

FAQ

What can TerrierGPT do?

TerrierGPT can be used for a variety of purposes, both personal and academic. For example, students can use TerrierGPT to generate study guides or model test questions, while faculty can use the tool to help create course syllabi or lesson plans. No one is required to use TerrierGPT, however.

Learn more about TerrierGPT use cases here.

Why offer access to different models?

Different models serve different needs. For instance, some models are better at logic and solving coding problems. Offering a variety of models allows users to select their preferred model or the model most appropriate for their tasks.

How should I approach using AI in an academic setting?

BU takes a “critical embrace” approach to generative AI in research and academia: the technology should be used with sensible guardrails, keeping its benefits and limitations in mind. Overall, TerrierGPT should augment, not replace, learning and instruction. Students and instructors should also be transparent about their use of AI.

Find AIDA’s generative AI guidelines for students and faculty and staff here.

Is TerrierGPT secure?

Yes. Unlike when you use non-BU versions of ChatGPT and other chatbots, the information you enter will not be used as training data. 

AIDA’s website notes: “The platform complies with BU’s internal privacy and data protection policies—and none of the data entered is used to train external models. Data uploaded to the platform is only accessible by IS&T personnel and has the same strong privacy protections applicable to all BU enterprise data, such as emails and documents stored on BU’s OneDrive. However please note that TerrierGPT is not approved for use with restricted use data, including HIPAA-regulated information.”

What environmental concerns were factored into bringing this tool to BU?

Building an AI model from scratch requires a tremendous amount of energy. AIDA sought to leverage existing chatbot technology to significantly reduce the amount of resources needed to create TerrierGPT. The additional energy burden of using TerrierGPT is low.

What’s coming next for TerrierGPT and generative AI at BU?

New features and capabilities are planned for TerrierGPT and related products. BU also plans to launch an online generative AI literacy course for undergraduates, after which students will earn a digital certificate. For faculty, AIDA and the Institute for Excellence in Teaching & Learning are partnering to offer a series of symposiums this fall on using AI for instruction. (Attendees must register for each event.)

Look for more updates from AIDA as the academic year progresses.

Find answers to more FAQs about TerrierGPT, including information about technical specifications, here.


Exclusive: Ex-Google DeepMinders’ algorithm-making AI company gets $5 million in seed funding

Two former Google DeepMind researchers who worked on the company’s Nobel Prize-winning AlphaFold protein structure prediction AI as well as its AlphaEvolve code generation system have launched a new company, with the mission of democratizing access to advanced algorithms.

The company, which is called Hiverge, emerged from stealth today with $5 million in seed funding, led by Flying Fish Ventures with participation from Ahren Innovation Capital and Alpha Intelligence Capital. Legendary coder and Google chief scientist Jeff Dean is also an investor in the startup.

The company has built a platform it calls “Hive” that uses AI to generate and test novel algorithms for vital business processes, from product recommendations to delivery routing, and automatically optimizes them. Large companies that can afford their own data science and machine learning teams do sometimes develop bespoke algorithms, but this capability has been out of reach for most small and medium-sized businesses. Smaller firms have often had to rely on off-the-shelf software whose pre-built algorithms may not be well suited to a particular business and its data.

The Hive system also promises to discover unusual algorithms that may produce superior results that human data scientists might never reach through intuition or trial and error, Alhussein Fawzi, the company’s cofounder and CEO, told Fortune. “The idea behind Hiverge is really to empower those companies with the best, best-in-class algorithms,” he said.

“You can apply [the Hive] to machine learning algorithms, and then you can apply it to planning algorithms,” Fawzi explained. “These are the two things that are, in terms of algorithms, quite different, yet it actually improves on both of them.”

At Google DeepMind, Fawzi had led the team that in 2022 developed its AlphaTensor AI, which discovered new ways to do matrix multiplication, a fundamental mathematical process for training and running neural networks and many other computer applications. The following year, Fawzi and the team developed FunSearch, a method that used large language models to generate new coding approaches and then used an automated evaluator to weed out erroneous solutions.
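The generate-and-evaluate pattern behind FunSearch can be illustrated with a small sketch. This is purely hypothetical and greatly simplified: the `proposals` list below stands in for code an LLM would generate, and the evaluator simply checks each candidate against known input/output pairs, discarding any that fail or raise an error.

```python
def passes_all(candidate, cases):
    """Return True only if the candidate gives the right answer on every
    test case without raising an exception."""
    try:
        return all(candidate(x) == y for x, y in cases)
    except Exception:
        return False

def funsearch_filter(proposals, cases):
    """Keep only the proposals that survive the automated evaluator."""
    return [p for p in proposals if passes_all(p, cases)]

# Target behavior: square a number.
cases = [(2, 4), (3, 9), (-1, 1)]

# Stand-ins for LLM-generated candidates: one correct, two subtly wrong.
proposals = [
    lambda x: x * x,   # correct
    lambda x: x + x,   # fails on x=3
    lambda x: x ** 3,  # fails on x=2
]

survivors = funsearch_filter(proposals, cases)
print(len(survivors))  # 1
```

In the real system the surviving programs are fed back to the language model to seed further generations; the sketch shows only a single generate-and-filter step.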

He also worked on the early stages of what became Google DeepMind’s AlphaEvolve system, which uses several LLMs working together as agents to create entire new code bases for solving complex problems. Google has credited AlphaEvolve with finding ways to optimize its LLMs. For instance, it found a way to improve on the way Gemini does matrix multiplication to deliver a 23% speed-up; it also optimized another key step in the way Transformers, the kind of AI architecture on which LLMs are based, work, boosting speeds by 32%.

Cofounding Hiverge with him are his brother Hamza Fawzi, a professor of applied mathematics at the University of Cambridge, who is serving as a technical advisor to the company, and Bernardino Romera-Paredes, who was part of the Google DeepMind team that created AlphaFold and is now Hiverge’s chief technology officer.

Hiverge has already demonstrated the utility of its Hive system by using it to win the Airbus Beluga Challenge, which asks contestants to find the optimal way to load and store aircraft parts carried by an Airbus Beluga XL. Hiverge’s solution delivered a 10,000-fold speed-up over the existing aircraft-loading algorithm. The company also showed that it could take an already optimized machine learning training algorithm and make it another three times faster, and it has found novel ways to improve computer vision algorithms.

Alhussein Fawzi said that Hiverge, based in Cambridge, England, currently has six employees but that it would use the money raised in its latest funding round to expand its team. “We will also transition from research to building out our product,” he said. 

The company plans to make its technology accessible through cloud marketplaces like AWS and Google Cloud, where customers can directly use the system on their own code. The platform analyzes which parts of code represent bottlenecks, generates improved algorithms, and provides recommendations to engineers.
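The workflow described above (find the bottleneck, propose a faster drop-in, verify the two agree before recommending the swap) can be sketched in a few lines. This is illustrative only and is not Hiverge’s actual code; the function names `slow_sum` and `fast_sum` are invented for the example.

```python
import timeit

def slow_sum(n):
    """Deliberately slow: sums 0..n-1 with a Python loop."""
    total = 0
    for i in range(n):
        total += i
    return total

def fast_sum(n):
    """Closed-form replacement: same result in constant time."""
    return n * (n - 1) // 2

def identity(n):
    """A cheap function, included so the profiler has something to compare."""
    return n

def find_bottleneck(funcs, arg):
    """Time each function on `arg` and return the name of the slowest."""
    timings = {name: timeit.timeit(lambda f=f: f(arg), number=50)
               for name, f in funcs.items()}
    return max(timings, key=timings.get)

def recommend(funcs, replacements, arg):
    """Flag the slowest function; if a known replacement agrees with it
    on this input, recommend swapping it in."""
    name = find_bottleneck(funcs, arg)
    alt = replacements.get(name)
    if alt is not None and alt(arg) == funcs[name](arg):
        return f"replace {name} with {alt.__name__}"
    return f"no recommendation for {name}"

advice = recommend(
    {"slow_sum": slow_sum, "identity": identity},
    {"slow_sum": fast_sum},
    10_000,
)
print(advice)
```

The equality check before recommending mirrors the verification step any such system needs: a faster algorithm is only useful if it provably computes the same answers.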





AI in classroom NC | Board of education considers policy for artificial intelligence in Wake County Public School district

CARY, N.C. (WTVD) — The Wake County Board of Education held the first in a series of meetings to discuss the development of the district’s AI policy.

Board members learned about a number of topics related to AI, including how it is being used now and the potential risks associated with the fast-growing technology.

WCPSS Superintendent Dr. Robert Taylor said the district wanted board members to be well-informed now before developing a district-wide policy.

“The one thing I wanted to make sure is that we didn’t create a situation where we restrict something that is going to be a part of society, that our students are going to be responsible for learning, that our teachers are going to be responsible for [teaching],” he said.

A team from Amazon Web Services, or AWS, gave the board an informational presentation on AI.

District staff say there is no timeline for the adoption of the policy right now.

AI could be a major tool for the district, with the board saying it could help with personalized learning plans.

Still, some board members raised concerns about how to teach students to use AI responsibly.

“I think the biggest concern that everyone has is academic integrity and honesty, things that can be used with AI to give false narratives, false pictures,” said Dr. Taylor.

Mikaya Thurmond, an AI expert and lecturer, says the district’s policy framework should include AI training for teachers and rules governing students’ AI use.

“If anyone believes that students are not using AI to get to conclusions and to turn in homework, at this point, they’re just not being honest about it,” she said.

For starters, she says students should credit AI when used on assignments and show their chat history with AI programs.

“That tells me you’re at least doing the critical thinking part,” said Thurmond. “And then there should be some assignments that no AI is allowed for and some where it is integrated. But I think that there has to be a mixture once educators know how to use it themselves.”

Something the superintendent and Thurmond agree on is parental involvement.

They both say parents should talk with their children now about what is appropriate to discuss with AI.

Copyright © 2025 WTVD-TV. All Rights Reserved.




UK to be first college in KY to offer Artificial Intelligence as a major

LEXINGTON, Ky. (LEX 18) — The enthusiasm coming from Dr. Brent Harrison was jumping through the screen.

“I am very excited,” he said, as we discussed a big development that recently happened on the University of Kentucky campus.

Last week, the school’s Board of Trustees approved the state’s first Artificial Intelligence major, which will offer a Bachelor of Science degree. Some hurdles remain, as approval is still needed from the Kentucky Council on Postsecondary Education and the Southern Association of Colleges and Schools.

“This is something our department chair, Zongming Fei, was in favor of,” Harrison explained. “He said, ‘We have to do this; we see the desire from our students; we see the way the job market is going.'”

Artificial Intelligence is the future, according to those who’ve grasped the technology and believe in its impact and benefits. Dr. Harrison, who says the curriculum is already in place for a potential launch in fall 2026, says it will cover all aspects of the concentration.

“Pretty much anything we’re doing with AI is having that ethics component. Dr. Judy Goldsmith, one of my colleagues here, was very adamant that no matter what we’re doing, the students have to be aware of the potential pitfalls and other issues that come up when using AI,” Dr. Harrison said.

Currently, the university offers a certificate in AI training, which is useful for those who might only need some components, but Dr. Harrison believes that offering AI as a major course of study will open doors for graduates like never before. It’s Computer Science on steroids, for lack of a better term.

“This is the kind of degree you could go out and be a software developer, but you would be more practiced in using these AI tools to make yourself more efficient. You could also go into things like data analytics. And, I’ll go ahead and say it, you could go into game design, game development,” Dr. Harrison said.

He also noted that interest is much higher than he initially expected. No one has declared (or can yet declare) AI as a major, but he anticipates many will, and he expects some students to switch majors or add AI as a second major.

“I think the interest is there, and I think we’re going to see that, but I do expect the enrollment to pick up over 2 or 3 years,” he predicted, again pending the approval of the state’s CPE and SACS.
