
Tools & Platforms

Oracle and OpenAI Forge Ahead with Stargate Deal Expansion!


AI Powerhouse Collaboration Intensifies


Edited by Mackenzie Ferguson, AI Tools Researcher & Implementation Consultant

Oracle and OpenAI have announced an expansion of their existing Stargate partnership, aimed at bolstering data center infrastructure in the U.S. This move signifies a deepening commitment to enhancing computing power and AI capabilities. The expanded deal will allow for a significant increase in server capacity, supporting OpenAI’s ambitious AI models and Oracle’s growing cloud services.


Oracle and OpenAI Expand Stargate Deal

Oracle and OpenAI have recently intensified their collaboration, expanding the ambitious Stargate project to include even more U.S. data centers. This strategic partnership aims to enhance the digital infrastructure that supports OpenAI’s advanced AI capabilities. By leveraging Oracle’s extensive cloud services and expansive network, OpenAI plans to scale its operations, ensuring that its cutting-edge technologies are more robust and widely accessible. The goal is to meet the growing demand for AI solutions across various sectors, ranging from healthcare to finance. This development is poised to significantly impact the cloud computing landscape, promising to drive innovation and efficiency across industries.

As the tech titans Oracle and OpenAI collaborate to expand their foothold through additional data centers, several related events have catalyzed this decision. The increasing reliance on cloud computing and artificial intelligence has put immense pressure on existing infrastructure, necessitating new developments to accommodate burgeoning technological demands. Additionally, the strategic placement of these data centers is likely to spur local economies, create jobs, and improve network speed and reliability for both businesses and consumers. Industry experts believe that this move could set a precedent for other companies, encouraging further investments in data infrastructure and innovation within the United States.

Public reaction to the announcement of new data centers has been overwhelmingly positive, with many seeing it as a step forward in enhancing America’s tech industry capabilities. The deal, however, has sparked discussions about the environmental implications of such large-scale operations. Constructing and maintaining data centers impose significant energy demands, raising concerns about sustainability. Nevertheless, both Oracle and OpenAI have expressed commitments to integrating eco-friendly practices and renewable energy sources into their new facilities, aiming to minimize the ecological footprint. According to stakeholders, these measures reflect a future-oriented approach that balances technological advancement with environmental responsibility.

Impacts on Cloud Infrastructure

As cloud technology continues to evolve, the expansion of data infrastructure remains pivotal to supporting the growing demands for computing power and storage. The expanded Stargate deal between Oracle and OpenAI underscores a significant impact on cloud infrastructure in the United States. This partnership aims to enhance the scalability and efficiency of cloud services, addressing the increasing need for robust data processing capabilities.

Leveraging Oracle’s expertise in cloud solutions, the collaboration with OpenAI reinforces the trend of integrating advanced AI technologies with cloud infrastructure. By expanding data centers, Oracle and OpenAI can provide improved access to cutting-edge AI models, ensuring that businesses can operate more intelligently and efficiently. The Oracle-OpenAI partnership highlights how collaborations in tech can lead to advancements in infrastructure that are pivotal for sustaining innovation and competitiveness.

The impact of such expansions on cloud infrastructure extends beyond just hardware improvements. It involves a significant enhancement in data security, energy efficiency, and latency reduction, making cloud services more reliable and cost-effective. The new data centers resulting from the Oracle-OpenAI deal are expected to lead to economic growth within local communities through job creation and increased technological investments.

Moreover, the strategic location of these new data centers can significantly reduce network latencies and enhance the quality of service for end-users across the nation. By situating data centers closer to where data is generated and utilized, Oracle and OpenAI are poised to create more resilient and responsive cloud environments. The ongoing collaboration highlights a pivotal shift towards more localized cloud solutions, catering specifically to regional demands and constraints.

Stakeholder Perspectives

Understanding stakeholder perspectives is crucial in comprehending the broader implications of any strategic partnership, such as the recent expansion of the Stargate deal between Oracle and OpenAI. This expansion includes more U.S. data centers, highlighting a significant commitment to advancing technological infrastructure. Stakeholders from various sectors, including technology, business, and government, have weighed in, reflecting diverse opinions. The collaboration is seen by many as a positive step towards enhancing data processing capabilities and fostering innovation. However, some stakeholders express concerns regarding data privacy and the environmental impact of increased data center operations, sparking a debate on the balance between technological progress and ethical considerations.

The technology sector views the Oracle and OpenAI partnership as a pivotal move in strengthening U.S. leadership in AI and cloud services. This expansion is expected to enhance computational power and data storage capabilities, making it a strategic advantage in the ever-evolving competitive landscape. Insights from experts within the field indicate that this could lead to more robust AI applications and innovative solutions across various industries. Yet, there are calls for vigilance, especially around the implementation of ethical AI practices and responsibility towards consumer data protection, aligning with global standards to ensure sustainable and fair technology growth.

Public reaction has been a mixed bag, with some applauding the potential growth in jobs and economic benefits from the establishment of new data centers. These developments promise to revitalize local economies and provide significant employment opportunities, reflecting a sense of optimism about future prospects. On the other hand, there are concerns about the carbon footprint associated with data centers and calls for policies that prioritize renewable energy sources. Overall, there is a general consensus that while the immediate benefits are evident, long-term strategies need to account for environmental sustainability, ensuring that technological advancement does not come at the expense of the planet’s health.

Public Reaction to the Expansion

The expansion deal between Oracle and OpenAI to establish more data centers across the United States has elicited a range of reactions from the public. Many individuals have expressed optimism about the job opportunities that the deal is expected to create, particularly in regions traditionally lacking in technological infrastructure. This optimism is further fueled by the promise of enhancing America’s technological capabilities, positioning it as a leader in global AI development. More details on this development can be found in the full report on PoliticoPRO.

Conversely, there are those who have voiced concerns regarding the environmental impact of such technological expansions. Critics highlight the substantial energy consumption associated with operating large-scale data centers, urging companies to commit to sustainable practices. This controversial issue has sparked debates on the balance between technological advancement and environmental stewardship. Additional insights and public opinions are available in the comprehensive PoliticoPRO article.

In social media circles, discussions about the deal have been vibrant, with many internet users debating the potential societal changes that increased data center infrastructure might bring. While some anticipate positive transformations in digital communication and information accessibility, others worry about privacy concerns and the monopolization of technological power by large corporations. These concerns reflect broader public apprehensions about the future role of AI in daily life. For a deeper exploration of these issues, the original article provides an extensive analysis.

Potential Future Developments

The collaboration between Oracle and OpenAI marks a significant push towards expanding data infrastructure, particularly through the Stargate project. As reported by industry insiders, this expansion involves setting up a series of new data centers across the United States, a move designed to enhance computational capabilities and support escalating demands for AI-driven technologies. By fortifying the digital backbone, Oracle aims to solidify its position in the AI field, offering greater capacity for data processing and storage, which is essential to accommodate future technological advancements.




Tools & Platforms

Polimorphic Raises $18.6M as It Beefs Up Public-Sector AI


The latest bet on public-sector AI involves Polimorphic, which has raised $18.6 million in a Series A funding round led by General Catalyst.

The round also included M13 and Shine.

The company raised $5.6 million in a seed round in late 2023.


New York-based Polimorphic sells such products as artificial intelligence-backed chatbots and search tools, voice AI for calls, constituent relationship management (CRM) and workflow software, and permitting and licensing tech.

The new capital will go toward tripling the company’s sales and engineering staff and building more AI product features.

For instance, that includes continued development of the voice AI offering, which can now work with live data (a bonus when it comes to utility billing) and can even tell callers to animal services which pets might be up for adoption, CEO and co-founder Parth Shah told Government Technology in describing his vision for such tech.

The company also wants to bring more AI to CRM and workflow software to help catch errors on applications and other paperwork earlier than before, Shah said.

“We are more than just a chatbot,” he said.

Challenges of public-sector AI include making sure that public agencies truly understand the technology and are “not just slapping AI on what you already do,” Shah said.

As he sees it, working with governments in that way has helped Polimorphic nearly double its customer count every six months. More than 200 public-sector departments at the city, county and state levels now use its products, he said, and such growth is among the reasons the company attracted this new round of investment.

The company’s general sales pitch is increasingly familiar to public-sector tech buyers: Software and AI can help agencies deal with “repetitive, manual tasks, including answering the same questions by phone and email,” according to a statement, and help people find civic and bureaucratic information more quickly.

For instance, the company says it has helped customers reduce voicemails by up to 90 percent, with walk-in requests cut by 75 percent. Polimorphic clients include the city of Pacifica, Calif.; Tooele County, Utah; Polk County, N.C.; and the town of Palm Beach, Fla.

The fresh funding will also help the company expand in its top markets, which include Wisconsin, New Jersey, North Carolina, Texas, Florida and California.

The company’s investors are familiar to the gov tech industry. Earlier this year, for example, General Catalyst led an $80 million Series C funding round for Prepared, a public safety tech supplier focused on bringing more assistive AI capabilities to emergency dispatch.

“Polimorphic has the potential to become the next modern system of record for local and state government. Historically, it’s been difficult to drive adoption of these foundational platforms beyond traditional ERP and accounting in the public sector,” said Sreyas Misra, partner at General Catalyst, in the statement. “AI is the jet fuel that accelerates this adoption.”

Thad Rueter writes about the business of government technology. He covered local and state governments for newspapers in the Chicago area and Florida, as well as e-commerce, digital payments and related topics for various publications. He lives in Wisconsin.







Tools & Platforms

AI enters the classroom as law schools prep students for a tech-driven practice


When it comes to using artificial intelligence in legal education and beyond, the key is thoughtful integration.

“Think of it like a sandwich,” said Dyane O’Leary, professor at Suffolk University Law School. “The student must be the bread on both sides. What the student puts in, and how the output is assessed, matters more than the tool in the middle.”

Suffolk Law is taking a forward-thinking approach to integrating generative AI into legal education, starting with a required AI course for all first-year students that equips them to use, understand and critique AI as future lawyers.

O’Leary, a long-time advocate for legal technology, said there is a need to balance foundational skills with exposure to cutting-edge tools.

“Some schools are ignoring both ends of the AI sandwich,” she said. “Others don’t have the resources to do much at the upper level.”

Professor Dyane O’Leary, director of Suffolk University Law School’s Legal Innovation & Technology Center, teaches a generative AI course in which students assess the ethics of AI in the legal context and, after experimentation, assess the strengths and weaknesses of various AI tools for a range of legal tasks.

One major initiative at Suffolk Law is the partnership with Hotshot, a video-based learning platform used by top law firms, corporate lawyers and litigators.

“The Hotshot content is a series of asynchronous modules tailored for 1Ls,” O’Leary said. “The goal is not for our students to become tech experts but to understand the usage and implications of AI in the legal profession.”

The Hotshot material provides a practical introduction to large language models, explains why generative AI differs from tools students are used to, and uses real-world examples from industry professionals to build credibility and interest.

This structured introduction lays the groundwork for more interactive classroom work when students begin editing and analyzing AI-generated legal content. Students will explore where the tool succeeded, where it failed and why.

“We teach students to think critically,” O’Leary said. “There needs to be an understanding of why AI missed a counterargument or produced a junk rule paragraph.”

These exercises help students learn that AI can support brainstorming and outlining but isn’t yet reliable for final drafting or legal analysis.

Suffolk Law is one of several law schools finding creative ways to bring AI into the classroom — without losing sight of the basics. Whether it’s through required 1L courses, hands-on tools or new certificate programs, the goal is to help students think critically and stay ready for what’s next.

Proactive online learning

Case Western Reserve University School of Law has also taken a proactive step to ensure that all its students are equipped to meet the challenge. In partnership with Wickard.ai, the school recently launched a comprehensive AI training program, making it a mandatory component for the entire first-year class.

“We knew AI was going to change things in legal education and in lawyering,” said Jennifer Cupar, professor of lawyering skills and director of the school’s Legal Writing, Leadership, Experiential Learning, Advocacy, and Professionalism program. “By working with Wickard.ai, we were able to offer training to the entire 1L class and extend the opportunity to the rest of the law school community.”

The program included pre-class assignments, live instruction, guest speakers and hands-on exercises. Students practiced crafting prompts and experimenting with various AI platforms. The goal was to familiarize students with tools such as ChatGPT and encourage a thoughtful, critical approach to their use in legal settings.

Oliver Roberts, CEO and co-founder of Wickard.ai, led the sessions and emphasized the importance of responsible use.

While CWRU Law, like many law schools, has general prohibitions against AI use in drafting assignments, faculty are encouraged to allow exceptions and to guide students in exploring AI’s capabilities responsibly.

“This is a practice-readiness issue,” Cupar said. “Just like Westlaw and Lexis changed legal research, AI is going to be part of legal work going forward. Our students need to understand it now.”

Balanced approach

Starting with the Class of 2025, Washington University School of Law is embedding generative AI instruction into its first-year Legal Research curriculum. The goal is to ensure that every 1L student gains fluency in both traditional legal research methods and emerging AI tools.

Delivered as a yearlong, one-credit course, the revamped curriculum maintains a strong emphasis on core legal research fundamentals, including court hierarchy, the distinction between binding and persuasive authority, primary and secondary sources and effective strategies for researching legislative and regulatory history.

WashU Law is integrating AI as a tool to be used critically and effectively, not as a replacement for human legal reasoning.

Students receive hands-on training in legal-specific generative AI platforms and develop the skills needed to evaluate AI-generated results, detect hallucinated or inaccurate content, and compare outcomes with traditional research methods.

“WashU Law incorporates AI while maintaining the basics of legal research,” said Peter Hook, associate dean. “By teaching the basics, we teach the skills necessary to evaluate whether AI-produced legal research results are any good.”

Stefanie Lindquist, dean of WashU Law, said this balanced approach preserves the rigor and depth that legal employers value.

“The addition of AI instruction further sharpens that edge by equipping students with the ability to responsibly and strategically apply new technologies in a professional context,” Lindquist said.

Forward-thinking vision

Drake University Law School has launched a new AI Law Certificate Program for J.D. students.

The program is a response to the growing need for legal professionals who understand both the promise and complexity of AI.

Designed for completion during a student’s second and third years, the certificate program emphasizes interdisciplinary collaboration, drawing on expertise from across Drake Law School’s campus, including computer science, art and the Institute for Justice Reform & Innovation.

Students will engage with advanced topics such as machine vision and trademark law, quantum computing and cybersecurity, and the broader ethical and regulatory challenges posed by AI.

Roscoe Jones, Jr., dean of Drake Law School, said the AI Law Certificate empowers students to lead at the intersection of law and technology, whether in private practice, government, nonprofit, policymaking or academia.

“Artificial Intelligence is not just changing industries; it’s reshaping governance, ethics and the very framework of legal systems,” he said. 

Simulated, but realistic

Suffolk Law has also launched an online platform that allows students to practice negotiation skills with AI bots programmed to simulate the behavior of seasoned attorneys.

“They’re not scripted. They’re human-like,” O’Leary said. “Sometimes polite, sometimes bananas. It mimics real negotiation.”

These interactive experiences, in either text or voice mode, let students practice handling the messiness of legal dialogue, an experience that is hard to replicate with static casebooks or classroom hypotheticals.

Unlike overly accommodating AI assistants, these bots shift tactics and strategies, mirroring the adaptive nature of real-world legal negotiators.

Another tool on the platform supports oral argument prep. Created by Suffolk Law’s legal writing team in partnership with the school’s litigation lab, the AI mock judge engages students in real-time argument rehearsals, asking follow-up questions and testing their case theories.

“It’s especially helpful for students who don’t get much out of reading their outline alone,” O’Leary said. “It makes the lights go on.”

O’Leary also emphasizes the importance of academic integrity. Suffolk Law has a default policy that prohibits use of generative AI on assignments unless a professor explicitly allows it. Still, she said the policy is evolving.

“You can’t ignore the equity issues,” she said, pointing to how students often get help from lawyers in the family or paid tutors. “To prohibit [AI] entirely is starting to feel unrealistic.”







Tools & Platforms

Microsoft pushes billions at AI education for the masses • The Register


After committing more than $13 billion in strategic investments to OpenAI, Microsoft is splashing out billions more to get people using the technology.

On Wednesday, Redmond announced a $4 billion donation of cash and technology to schools and non-profits over the next five years. It’s branding this philanthropic mission as Microsoft Elevate, which is billed as “providing people and organizations with AI skills and tools to thrive in an AI-powered economy.” It will also start the AI Economy Institute (AIEI), a so-called corporate think tank stocked with academics that will be publishing research on how the workforce needs to adapt to AI tech.

The bulk of the money will go toward AI and cloud credits for K-12 schools and community colleges, and Redmond claims 20 million people will “earn an in-demand AI skilling credential” under the scheme, although Microsoft’s record on such vendor-backed certifications is hardly spotless.

“Working in close coordination with other groups across Microsoft, including LinkedIn and GitHub, Microsoft Elevate will deliver AI education and skilling at scale,” said Brad Smith, president and vice chair of Microsoft Corporation, in a blog post. “And it will work as an advocate for public policies around the world to advance AI education and training for others.”

It’s not an entirely new scheme – Redmond already had its Microsoft Philanthropies and Tech for Social Impact charitable organizations, but they are now merging into Elevate. Smith noted Microsoft has already teamed up with North Rhine-Westphalia in Germany to train students on AI, and says similar partnerships across the US education system will follow.

Microsoft is also looking to recruit teachers to the cause.

On Tuesday, Microsoft, along with Anthropic and OpenAI, said it was starting the National Academy for AI Instruction with the American Federation of Teachers to train teachers in AI skills and to pass them on to the next generation. The scheme has received $23 million in funding from the tech giants spread over five years, and aims to train 400,000 teachers at training centers across the US and online.

“AI holds tremendous promise but huge challenges—and it’s our job as educators to make sure AI serves our students and society, not the other way around,” said AFT President Randi Weingarten in a canned statement.

“The direct connection between a teacher and their kids can never be replaced by new technologies, but if we learn how to harness it, set commonsense guardrails and put teachers in the driver’s seat, teaching and learning can be enhanced.”

Meanwhile, the AIEI will sponsor and convene researchers to produce publications, including policy briefs and research reports, on applying AI skills in the workforce, leveraging a global network of academic partners.

Hopefully they can do a better job of it than Redmond’s own staff. After 9,000 layoffs from Microsoft earlier this month, largely in the Xbox division, Matt Turnbull, an executive producer at Xbox Game Studios Publishing, went viral with a spectacularly tone-deaf LinkedIn post (now removed) to former staff members offering AI prompts “to help reduce the emotional and cognitive load that comes with job loss.” ®




