California State University is the nation’s largest public four-year system, with nearly half a million students. So it’s a big deal that starting this year, CSU will make OpenAI’s ChatGPT available to all those students and its faculty.
It’s also controversial.
The effort will cost the CSU nearly $17 million — even as the system faces a $2.3 billion budget gap. That gap persists despite a tuition increase and spending cuts that have meant fewer course offerings for students.
University officials say providing access to quality AI tools is an issue of equity. Across the system’s 23 campuses, some students were paying for their own ChatGPT subscriptions. CSU spokesperson Amy Bentley-Smith said the goal is for all students to have “the same access to the tools and learning opportunities that will prepare them for the future,” regardless of which campus they attend.
Those efforts point to why, in recent years, OpenAI has established business deals with universities across the country, including multiple public institutions.
At what cost? For the most part, the company and colleges involved have not made the price of these services publicly available.
Public records obtained by LAist for universities across the country show CSU’s massive scale appears to have helped it land a deal on ChatGPT subscriptions, and that CSU has invested far more aggressively in generative AI tools than other public universities, both in dollars spent and in the number of students served.
In an email earlier this year, CSU’s Bentley-Smith said the system leveraged economies of scale “to ensure that the CSU paid the lowest price possible.”
What is ChatGPT Edu? And how are students and faculty using it?
University officials say they want students at each Cal State campus to have access to the same learning tools.
OpenAI provides these schools with a product called ChatGPT Edu, a chatbot specifically designed for college settings.
ChatGPT Edu is tailored to each campus. And with it, universities provide access to a mix of OpenAI products. This can include the company’s flagship model, GPT-5, along with other tools. ChatGPT Edu also enables students and faculty to make custom AI models. At Columbia University in New York City, for instance, faculty researchers built a prediction tool to help reduce overdose fatalities. The model analyzes and synthesizes large datasets to inform interventions and, according to OpenAI, it enables the team to reduce “weeks of research work into seconds.”
ChatGPT Edu can also help with more run-of-the-mill work. Some of the tasks the company suggests include: personalized tutoring for students, helping researchers write grant applications and assisting faculty with grading.
A version of ChatGPT is available at no cost, but ChatGPT Edu allows for higher message limits than the free version. OpenAI also says student and faculty data are kept private and aren’t used to train its models.
(LAist requested an interview with Leah Belsky, OpenAI’s vice president of education, but a spokesperson said she was not available by the time of publication.)
Earlier this year, CSU launched a new AI initiative — a public-private partnership between the university system; the California governor’s office; and 10 tech giants, including Microsoft, NVIDIA and OpenAI.
The university system also put out a call for proposals, asking faculty to come up with strategies “that prepare students to thrive in a rapidly evolving digital world while promoting critical thinking, ethical AI use and integrating AI literacy into curricula.” In May, the CSU announced that it received more than 400 proposals from over 750 faculty members. Ultimately, 63 of those proposals were awarded $30,000 to $60,000 grants.
Leslie Kennedy, assistant vice chancellor of academic technology services, told LAist the $3 million in grants came from the division of academic and student affairs, which provides funding for special projects.
How CSU’s OpenAI contract compares to other public U.S. universities
The basic, free version of ChatGPT provides limited access to the chatbot’s core functions. For access to more advanced features, the average user can subscribe to ChatGPT Plus for $20 a month. (The highest level of access costs $200 a month.)
Public records show that, during the first six months of this year, CSU paid roughly $1.9 million to make ChatGPT Edu available to 40,000 users. From July 2025 to June 2026, the university system will pay another $15 million to make the product available to 500,000 users. This, said spokesperson Bentley-Smith in an email, “represents a significant savings.”
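As a back-of-the-envelope check on those figures, the implied per-user monthly cost of each contract period can be computed directly. This is a rough sketch: the contract totals come from the records cited above, and the comparison point assumes ChatGPT Plus's standard $20-a-month price.

```python
# Implied per-user monthly cost of CSU's ChatGPT Edu contracts,
# compared with an individual $20/month ChatGPT Plus subscription.
pilot_total, pilot_users, pilot_months = 1_900_000, 40_000, 6
full_total, full_users, full_months = 15_000_000, 500_000, 12

pilot_per_user_month = pilot_total / pilot_users / pilot_months  # ~$7.92
full_per_user_month = full_total / full_users / full_months      # $2.50
plus_per_user_month = 20.00

print(round(pilot_per_user_month, 2), round(full_per_user_month, 2))
```

By this arithmetic, the full-scale deal works out to roughly an eighth of the retail Plus price per user, which is consistent with the "significant savings" the system describes.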
It’s also clear that, in contracts reviewed by LAist, CSU secured a far lower cost per student than other universities.
Money flows the other direction, too. Redacted documents obtained from Texas A&M indicate that, in February 2025, OpenAI made a one-time gift of $50,000 to the Texas A&M Foundation, a nonprofit that operates independently from the university and manages private donations. The gift was earmarked for the development of generative AI literacy at the Texas A&M Engineering Experiment Station.
LAist asked other universities with OpenAI deals if they have plans to make ChatGPT available to everyone on campus; ASU, for example, enrolls nearly 200,000 students. At UMass Lowell, with more than 17,000 students, spokesperson David Joyner told LAist via email that “the university is evaluating generative AI options we may be able to offer broadly. However, we do not have current plans to license community-wide ChatGPT access.” Other universities had not responded as of publication.
LAist is still waiting on public records for other public universities in the U.S. We will update this story as we obtain that information.
Friction between CSU leaders and educators
Despite CSU’s sizable discount, some educators have criticized the system’s gambit.
“For me, it’s frightening,” said Kevin Wehr, a sociology professor at Sacramento State and chair of the California Faculty Association’s bargaining team. “I already have all sorts of problems with students engaging in plagiarism. This feels like it takes a shot of steroids and injects it in the arm of that particular beast.”
But Wehr said AI buy-in among faculty runs the gamut.
“I know faculty who are embracing [AI] with open arms. [They’re] building it into their curriculum, building it into their assignments, teaching students how to use these tools and how not to use these tools and what to be aware of and what the pitfalls and problems are,” he said. “And I want them to feel completely supported in their work.”
AI Microcredential for CSU Students
The CSU is offering free AI training for everyone in the system; the modules explain what AI is and how to use it. They also cover ethical and responsible use. According to assistant vice chancellor Kennedy, the modules were developed by San Diego State, Fresno State and Cal State Monterey Bay. Once students complete the exercises, they will earn a microcredential, which could give them an edge when they’re on the job hunt, she added.
Wehr told LAist that faculty members’ foremost concern is the technology’s capacity to reshape the workforce. Faculty are also worried about harm to the environment, given AI’s massive power demands and the growing use of fresh water to cool data centers.
Above all, Wehr expressed distaste for the expense at a time when the CSU is struggling financially. In his view, it speaks to misplaced priorities.
“We are cutting programs. We are merging campuses. We are laying off faculty. We are making it harder for students to graduate,” Wehr said. And instead of using that money to ameliorate those issues, he added, “we’re giving it to the richest technology companies in the world.”
Bentley-Smith, the CSU spokesperson, said the money for ChatGPT Edu “came from one-time funds from the Chancellor’s Office, not from university budgets.”
The university’s vision for the future
LAist brought the faculty’s concerns to assistant vice chancellor Kennedy and the CSU’s chief information officer Ed Clark, who oversees IT strategy across CSU. Clark said the move was necessary from a system perspective.
“We were seeing certain universities that had more resources saying: ‘We’re going to go sign a deal with X vendor or Y vendor,’” he said. “That was already happening in the CSU. For us at the system office, that’s not OK, because then … you’re a faculty member or a student at the wrong campus, you don’t have access. So we wanted to make sure everybody had access.”
Clark added that the deal is important for individuals, too.
“Many people had already signed up for individual licenses for ChatGPT, whether they were faculty, staff, or students, but only the people that could afford that were doing that,” he said. “That was also leaving our community behind. So, we wanted to think about it system-wide in terms of access, equity, outcomes for all, and that is why we went system-wide.”
His words for critics? “Join us,” he said. “Help us shape the future.”
Kennedy also responded to complaints from faculty who aren’t pleased with the pivot to AI.
“We still have a half a million students, even though we’re being challenged with funding in general. And we still want to prepare them for the future,” she said. “That’s why we moved ahead.”
The systemwide initiative was developed with recommendations from a CSU Generative AI committee. Members included CSU faculty senate appointees, a Cal State Student Association appointee and representatives from other CSU stakeholder groups. Three of its roughly two dozen members were faculty.
What happens when the deal expires?
Clark said the system has hired a firm to assess CSU’s return on its OpenAI investment. The success of the deal will be determined by taking a host of factors into account, including how ChatGPT is used in the classroom; student learning outcomes; administrative productivity; and environmental impact.
CSU’s contract with OpenAI is for 18 months total. It’s unclear how expensive a renewal would be, or how the system might use its leverage to negotiate with other companies.
“We knew things could change so rapidly that maybe something else would emerge and we said ‘maybe this is better now,’” Clark said. “We did want to have that flexibility, but we are also very, very happy with our partnership with OpenAI and the ChatGPT rollout at this moment.”
AI transformation (AX) is spreading across South Korea’s financial sector. Going beyond simple digital transformation (DX), the strategy is to embed AI throughout organizations and services, pursuing management efficiency, work automation and customer-experience innovation at once. Financial companies have concluded that, amid intensifying regulation and competition, they cannot survive without raising AI capabilities company-wide. At the core of AX are internal process innovation and differentiated customer service: AI can cut costs and add speed by handling, quickly and accurately, tasks that once depended on people, such as loan review, risk management, investment-product recommendations and internal support for customer-service agents.
At customer touchpoints, AI bankers, voicebots and tailored chatbots provide high-quality support around the clock, raising satisfaction with financial services. As one industry source put it: “AX is not just a matter of technology, but a structural change that determines financial companies’ competitiveness and crisis response.”
Major Korean banks and financial holding companies have begun introducing in-house AI assistants and private large language models (LLMs), standing up dedicated AI organizations and building governance systems that span all affiliates. By establishing strategy centers at the group level and rolling out collaboration tools and AI platforms company-wide, they aim to automate internal work and differentiate customer services at the same time.
KB Financial Group has drawn up a ‘KB AI strategy’ and a ‘KB AI agent roadmap’ to deploy more than 250 AI agents across 39 core business areas of the group. It also launched the ‘KB GenAI Portal’, a first in the Korean financial sector, so that all executives and employees can use and build AI without coding, changing both productivity and how they work.
Shinhan Financial Group is boosting productivity with cloud-based collaboration tools (M365 plus Copilot) while each affiliate brings AI to the front line. Shinhan Bank has stationed generative-AI bankers at teller windows in its ‘AI Branch’, and in its app ‘SOL’, the ‘AI Investment Mate’ feature delivers customized information to customers in card-news format.
Hana Bank, drawing on its foreign-exchange expertise, operates an AI system that predicts when corporate FX clients are likely to take their business elsewhere. The system analyzes 253 variables from past transaction data to estimate the likelihood that a client will stop transacting and automatically alerts branches so they can respond preemptively.
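The details of Hana's model are not public, but systems of this kind typically reduce to a probability score computed over client features. A minimal logistic-regression-style sketch, in which the feature names and weights are entirely hypothetical and chosen only for illustration:

```python
import math

def churn_probability(features: dict, weights: dict, bias: float) -> float:
    """Logistic score: p = 1 / (1 + e^-(w.x + b)).

    Feature names and weights here are hypothetical; a production system
    like the one described would use hundreds of variables (253, per the
    article) fitted on historical transaction data.
    """
    z = bias + sum(weights[name] * features[name] for name in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative call: a client with no FX activity for six months and a
# declining transaction volume scores as high-risk.
weights = {"months_since_last_fx": 0.5, "volume_trend": -2.0}
p = churn_probability(
    {"months_since_last_fx": 6.0, "volume_trend": -0.4}, weights, bias=-2.0
)
```

The output is a probability between 0 and 1, which the bank's workflow could then compare against a threshold to decide which branches to alert.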
Woori Financial Group, under Chairman Lim Jong-ryong, has established an AI strategy center within the holding company and deployed dedicated AI units at all affiliates, including its bank, card, securities and insurance businesses.
Internet-only banks, which have no branch networks, are differentiating themselves through conversational search and calculators, forgery and alteration detection, personalized recommendations and the spread of in-house AI culture, concentrating their innovation on customer touchpoints such as in-app and mobile support.
Kakao Bank has elevated its AI organization to group level and now has more than 500 dedicated staff. K-Bank’s AI-based ID-card recognition solution achieved a 100% recognition rate, and the bank has begun publishing papers to help set academic standards. Toss Bank uses AI to detect ID forgery and alteration (99.5% accuracy), automate optical character recognition (OCR) on large document volumes, transcribe support calls with speech-to-text (STT) and build its own finance-specific language model.
Insurers are raising accuracy, approval rates and processing speed by applying AI across risk assessment, underwriting and claims payment. Because insurance screening and payout processes are long and complex, the gains from AI are especially pronounced.
Samsung Fire & Marine Insurance has more than halved the share of claims requiring human review by automating the review of cancer-diagnosis and surgical benefits with its ‘AI medical review’. Its machine-learning-based long-term insurance illness-screening system raised the approval rate from 71% to 90% and has been patented.
Industry experts view this AI transformation as a paradigm shift for the financial industry, not merely the adoption of a technology. Beyond cost reduction and efficiency, AI must create new value and new customer experiences; in their assessment, financial companies will differentiate themselves only when AI and data are tied directly to resolving customer pain points.
But preparing for ethics, security and accountability is as essential as the speed of AI’s spread. Unmanaged risks, such as the influence of large language models on financial decisions, personal-data protection and algorithmic bias, can lead to a loss of trust. That makes it paramount to build up experience through small experiments and develop it into industry standards.
The first you know about it is when you find out someone has accessed one of your accounts. You’ve been careful with your details so you can’t work out what has gone wrong, but you have made one mistake – recycling part of your password.
Reusing the same word in a password – even if it is altered to include numbers or symbols – gives criminals a way in to your accounts.
Brandyn Murtagh, an ethical “white hat” hacker, says information obtained through data breaches on sites such as Dropbox and Tumblr and through cyber-attacks has been circulating on the internet for some time.
Hackers obtain passwords and test them out on other websites – a practice known as credential stuffing – to see whether they can break into accounts.
But in some cases they do not just try the exact passwords from the hacked data: alongside credential stuffing, fraudsters also attempt to access accounts with variations of the hacked password.
Research from Virgin Media O2 suggests four out of every five people use the same or nearly identical passwords across their online accounts.
Using a slightly altered password – such as Guardian1 instead of Guardian – is almost an open door for hackers to compromise online accounts, Murtagh says.
Working with Virgin Media O2, he has shown volunteers how easy it is to trace their password when they supply their email address, often getting a result within minutes.
A spokesperson for Virgin Media O2 says: “Human behaviour is quite easy to model. [Criminals] know, for example, you might use one password and then add a full stop or an exclamation mark to the end.”
What the scam looks like
The criminals use scripts – automated sets of instructions for the computer – to go through variations of the passwords in an attempt to access other accounts. This can happen on an industrial scale, says Murtagh.
“It’s very rare that you are targeted as an individual – you are [usually] in a group of thousands of people that are getting targeted. These processes scale just like they would in business,” he says.
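To see why these scripts scale so well, a few lines of code can expand one leaked password into the predictable variants people actually choose. The mutation rules below are illustrative, modeled on the habits quoted above (capitalizing the first letter, appending a digit, full stop or exclamation mark); real credential-stuffing tools use far larger rule sets.

```python
def naive_variants(base: str, max_out: int = 12) -> list:
    """Generate common human-style mutations of a leaked password.

    Illustrative only: the suffix list mirrors the habits described in
    the article (trailing digits and punctuation), not any real tool.
    """
    suffixes = ["", "1", "!", ".", "123", "2024"]
    seeds = [base, base.capitalize()]
    out = []
    for seed in seeds:
        for suffix in suffixes:
            candidate = seed + suffix
            if candidate not in out:
                out.append(candidate)
    return out[:max_out]

variants = naive_variants("guardian")
```

Against a list like this, switching from Guardian to Guardian1 buys essentially nothing: the variant sits a handful of guesses away from the leaked original.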
You might be alerted by messages saying that you have been trying to change your email address or other details connected to an account.
What to do
Change any passwords that are variations on the same word – Murtagh advises starting with the most important four sets of accounts: banks, email, work accounts and mobile.
Use a password manager – these are often integrated into web browsers. Apple has iCloud Keychain, while Android phones have Google Password Manager; both can suggest and save complicated passwords.
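Where a manager isn't to hand, a strong random password can be generated with a few lines of standard-library code. A minimal sketch – the character set here is a common but arbitrary choice:

```python
import secrets
import string

def make_password(length: int = 16) -> str:
    """Generate a random password from letters, digits and symbols.

    Uses the cryptographically secure `secrets` module rather than
    `random`, so the output is suitable for real credentials.
    """
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Unlike a human-chosen variant of an old password, a string like this shares no structure with anything in a breach dump, so the mutation scripts described above have nothing to work from.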
Put in place two-factor or multi-factor authentication (2FA or MFA), which means you need two steps to log in to a site.
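The most familiar form of 2FA – the six-digit code that changes every 30 seconds – is the TOTP algorithm standardized in RFC 6238: an HMAC over the current 30-second time window, keyed by a secret shared between the site and your authenticator app. A minimal standard-library sketch:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6, now=None) -> str:
    """RFC 6238 TOTP code: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((now if now is not None else time.time()) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by
    # the low nibble of the last byte, then keep the low `digits` digits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on a secret that never travels with your password, a criminal who guesses a password variant still cannot complete the login.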