Tools & Platforms
Georgia Tech to Build $20M National AI Supercomputer

Georgia Tech also hosts the PACE Hive Gateway supercomputer. Its successor, Nexus, will use AI to accelerate scientific breakthroughs.
The National Science Foundation (NSF) has awarded Georgia Tech and its partners $20 million to build a powerful new supercomputer that will use artificial intelligence (AI) to accelerate scientific breakthroughs.
“Georgia Tech is proud to be one of the nation’s leading sources of the AI talent and technologies that are powering a revolution in our economy,” said Ángel Cabrera, president of Georgia Tech. “It’s fitting we’ve been selected to host this new supercomputer, which will support a new wave of AI-centered innovation across the nation. We’re grateful to the NSF, and we are excited to get to work.”
Designed from the ground up for AI, Nexus will give researchers across the country access to advanced computing tools through a simple, user-friendly interface. It will support work in many fields, including climate science, health, aerospace, and robotics.
“The Nexus system’s novel approach combining support for persistent scientific services with more traditional high-performance computing will enable new science and AI workflows that will accelerate the time to scientific discovery,” said Katie Antypas, National Science Foundation director of the Office of Advanced Cyberinfrastructure. “We look forward to adding Nexus to NSF’s portfolio of advanced computing capabilities for the research community.”
Nexus Supercomputer — In Simple Terms
- Built for the future of science: Nexus is designed to power the most demanding AI research — from curing diseases, to understanding how the brain works, to engineering quantum materials.
- Blazing fast: Nexus can crank out over 400 quadrillion operations per second — the equivalent of everyone in the world continuously performing 50 million calculations every second.
- Massive brain plus memory: Nexus combines the power of AI and high-performance computing with 330 trillion bytes of memory to handle complex problems and giant datasets.
- Storage: Nexus will feature 10 quadrillion bytes of flash storage, equivalent to about 10 billion reams of paper. Stacked, that’s a column reaching 500,000 km high — enough to stretch from Earth to the moon and a third of the way back.
- Supercharged connections: Nexus will have lightning-fast connections to move data almost instantaneously, so researchers do not waste time waiting.
- Open to U.S. researchers: Scientists from any U.S. institution can apply to use Nexus.
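The headline figures above are internally consistent, and a quick back-of-envelope check bears them out. The world population of 8 billion, the 5 cm thickness of a 500-sheet ream, and the 384,400 km Earth-Moon distance are my assumptions, not figures from the announcement:

```python
# Sanity checks for the Nexus headline figures.
# Assumed values: world population ~8 billion, a 500-sheet ream is ~5 cm
# thick, and the mean Earth-Moon distance is ~384,400 km.
total_ops_per_sec = 400e15        # 400 quadrillion operations per second
world_population = 8e9
ops_per_person = total_ops_per_sec / world_population  # ~50 million each

reams = 10e9                      # ~10 billion reams of paper
ream_thickness_m = 0.05           # ~5 cm per ream
stack_km = reams * ream_thickness_m / 1000             # ~500,000 km

earth_moon_km = 384_400
# Fraction of the return trip covered by the leftover height (~a third)
extra_fraction = (stack_km - earth_moon_km) / earth_moon_km
```

Running the numbers confirms the article's framing: 50 million operations per person per second, a 500,000 km paper stack, and roughly a third of the way back from the moon.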
AI is rapidly changing how science is investigated. Researchers use AI to analyze massive datasets, model complex systems, and test ideas faster than ever before. But these tools require powerful computing resources that — until now — have been inaccessible to many institutions.
This is where Nexus comes in. It will make state-of-the-art AI infrastructure available to scientists all across the country, not just those at top tech hubs.
“This supercomputer will help level the playing field,” said Suresh Marru, principal investigator of the Nexus project and director of Georgia Tech’s new Center for AI in Science and Engineering (ARTISAN). “It’s designed to make powerful AI tools easier to use and available to more researchers in more places.”
Srinivas Aluru, Regents’ Professor and senior associate dean in the College of Computing, said, “With Nexus, Georgia Tech joins the league of academic supercomputing centers. This is the culmination of years of planning, including building the state-of-the-art CODA data center and Nexus’ precursor supercomputer project, HIVE.”
Like Nexus, HIVE was supported by NSF funding. Both Nexus and HIVE are supported by a partnership between Georgia Tech’s research and information technology units.
A National Collaboration
Georgia Tech is building Nexus in partnership with the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign, which runs several of the country’s top academic supercomputers. The two institutions will link their systems through a new high-speed network, creating a national research infrastructure.
“Nexus is more than a supercomputer — it’s a symbol of what’s possible when leading institutions work together to advance science,” said Charles Isbell, chancellor of the University of Illinois and former dean of Georgia Tech’s College of Computing. “I’m proud that my two academic homes have partnered on this project that will move science, and society, forward.”
Georgia Tech will begin building Nexus this year, with completion expected in spring 2026. Once Nexus is finished, researchers can apply for access through an NSF review process. Georgia Tech will manage the system, provide user support, and reserve up to 10% of its capacity for its own campus research.
“This is a big step for Georgia Tech and for the scientific community,” said Vivek Sarkar, the John P. Imlay Dean of Computing. “Nexus will help researchers make faster progress on today’s toughest problems — and open the door to discoveries we haven’t even imagined yet.”
Tools & Platforms
Larry Ellison’s $1.3 Billion Bet to Turn Oxford Into the Next Silicon Valley: Inside the Tech Giant’s Vision to Revolutionize Innovation, AI, and Global Health With the Ellison Institute of Technology

Central to this ambitious plan is the Ellison Institute of Technology (EIT), a sprawling research campus backed by a £1 billion investment and set to open by 2027.
This initiative is designed to blend advanced science, artificial intelligence, and sustainable innovation with Oxford’s academic excellence, creating an ecosystem where groundbreaking discoveries can thrive and scale.
Ellison’s vision extends beyond traditional philanthropy. By partnering closely with the University of Oxford and dedicating significant funding to joint research and scholarships, the EIT aims to foster a self-sustaining network focused on solving global challenges in healthcare, clean energy, and food security.
Ellison’s projects also include preserving the city’s culture and history. One of the most striking examples is The Eagle and Child pub, known for hosting literary legends like J.R.R. Tolkien and C.S. Lewis.
Ellison plans to restore the pub while integrating it into his broader vision for the city: it will remain a place of history and culture, but also a space where ideas, learning, and innovation meet. The investment is also expected to deliver significant economic impact, creating around 5,000 jobs, more than doubling the workforce of Bill Gates’s foundation.
What is the Ellison Institute of Technology?
At the center of Ellison’s vision is the Ellison Institute of Technology, or EIT. This is not just a lab. It’s a $1.3 billion research campus. When it opens in 2027, it will include massive labs, supercomputing facilities, and a medical clinic focused on oncology and preventive care.
The institute aims to tackle big global problems. Health, climate change, food security, and artificial intelligence are the main focus areas. Ellison wants top scientists and researchers to work there. He also plans to fund major collaborations with the University of Oxford. One of the standout projects is a vaccine research program using artificial intelligence. This initiative aims to speed up vaccine development and make treatments more effective, especially for diseases that are difficult to prevent.
The EIT is also designed to be visually striking. It is being built with modern architecture that complements Oxford’s historic cityscape. The campus reflects Ellison’s goal: combine cutting-edge innovation with traditional prestige.
Why is Ellison buying a historic pub?
If building a tech campus wasn’t enough, Ellison is also buying historic sites. One notable example is The Eagle and Child pub. This isn’t just any pub. It’s famous as the meeting place of J.R.R. Tolkien and C.S. Lewis, two of the world’s most beloved authors.
Ellison purchased the pub for a large sum and plans a major renovation. The goal is to preserve the literary history while giving it a new purpose. After the refurbishment, it will serve as a hub for scholars and innovators, blending the old charm of Oxford with a space for modern collaboration.
This move shows that Ellison’s vision is not only about money or technology. It’s about culture, legacy, and creating a city where history and innovation coexist.
Who is Larry Ellison?
Larry Ellison is the co-founder of Oracle Corporation, a global leader in database software and cloud computing. He started the company in 1977 with just $2,000, transforming it from a small startup into one of the world’s largest software firms.
Ellison served as Oracle’s CEO until 2014 and now holds the positions of chairman and chief technology officer. His vision and leadership have been key to Oracle’s success, including significant acquisitions such as Sun Microsystems that expanded the company’s footprint in the tech industry.
Oracle’s database technology revolutionized how businesses manage data, and under Ellison’s guidance, it evolved into a dominant player in enterprise software and cloud infrastructure.
In 2025, Larry Ellison’s fortune surged dramatically, propelled by a remarkable rise in Oracle’s stock price. This was triggered by soaring demand for Oracle’s cloud computing and artificial intelligence services. A landmark $300 billion cloud deal with OpenAI boosted Oracle’s revenue outlook and sent shares up over 40% in a single day.
This spike added more than $100 billion to Ellison’s net worth, briefly making him the world’s richest person.
FAQs:
Q1: What is Larry Ellison building in Oxford?
A: A $1.3 billion research campus called the Ellison Institute of Technology.
Q2: Why is Ellison buying historic sites like The Eagle and Child pub?
A: To preserve Oxford’s cultural heritage while integrating it into his innovation-focused vision.
Tools & Platforms
Why Micron Technology (MU) Is Up 19.7% After AI-Driven Demand Boosts Analyst Optimism and Data Center Revenue

- In the past week, Micron Technology attracted widespread analyst upgrades and sector optimism due to robust demand for advanced memory chips powering artificial intelligence applications and data centers. Analysts highlighted Micron’s rapidly rising data center revenue and its strengthened position as an essential supplier for AI infrastructure solutions.
- A unique aspect is that Micron’s momentum has been reinforced by major enterprise customers’ commentary, especially Oracle’s, reflecting industry-wide confidence in continued AI-driven demand for memory products through at least 2026.
- We’ll explore how these positive demand signals from large AI customers impact Micron’s investment narrative and growth outlook.
Micron Technology Investment Narrative Recap
To be a Micron Technology shareholder, you need to believe in ongoing strength in AI-driven data center demand that can offset the inherent volatility and competition of the memory chip industry. The latest surge in analyst upgrades and sector optimism has sharpened focus on Micron’s position in the AI supply chain, but it does not eliminate the cyclical risks still present in both DRAM and NAND markets that could impact earnings momentum if demand trends shift unexpectedly.
Among recent announcements, Micron’s raised Q4 2025 earnings guidance stands out as closely linked to the surge in AI-fueled memory demand, reinforcing confidence behind current analyst enthusiasm. The updated outlook, with expected revenue of US$11.2 billion and EPS of US$2.64, reflects tangible benefits from AI, making near-term results a primary market catalyst in the coming weeks.
Yet, despite this tailwind, investors should also consider how quickly competition from other memory giants could…
Micron Technology’s narrative projects $53.6 billion in revenue and $13.6 billion in earnings by 2028. This requires 16.6% yearly revenue growth and a $7.4 billion earnings increase from $6.2 billion today.
The narrative’s forecasts yield a fair value of $150.57, a 4% downside to the current price.
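The projections above can be sanity-checked with simple arithmetic. A brief sketch follows; the three-year 2025-to-2028 compounding horizon is my assumption, not stated in the article:

```python
# Sanity checks on the Micron narrative's figures (all dollar amounts
# from the article; the 3-year horizon is an illustrative assumption).
rev_2028 = 53.6e9
growth = 0.166
# Revenue base implied by compounding 16.6% yearly for 3 years (~$33.8B)
implied_rev_today = rev_2028 / (1 + growth) ** 3

earnings_today = 6.2e9
earnings_increase = 7.4e9
earnings_2028 = earnings_today + earnings_increase  # $13.6B, as stated

fair_value = 150.57
downside = 0.04
# Current price implied by a 4% downside to fair value (~$156.8)
implied_price = fair_value / (1 - downside)
```

The earnings figures reconcile exactly ($6.2B + $7.4B = $13.6B), and the implied current price of roughly $157 is consistent with the community's upper estimate of $195.67 being about 24% above it.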
Exploring Other Perspectives
Fifty members of the Simply Wall St Community estimate Micron’s fair value between US$71.48 and US$195.67 per share. However, continued robust demand for advanced DRAM and HBM in AI data centers could prove pivotal for future revenue and margin strength, so consider a range of market outlooks.
This article by Simply Wall St is general in nature. It provides commentary based on historical data and analyst forecasts using an unbiased methodology, and it is not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock and does not take account of your objectives or financial situation. The analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.
Tools & Platforms
California Finalizes 2025 CCPA Rules on Data & AI Oversight

If you’ve ever been rejected for a job by an algorithm, denied an apartment by a software program, or had your health coverage questioned by an automated system, California just voted to change the rules of the game. On July 24, 2025, the California Privacy Protection Agency (CPPA) voted to finalize one of the most consequential privacy rulemakings in U.S. history. The new regulations—covering cybersecurity audits, risk assessments, and automated decision-making technology (ADMT)—are the product of nearly a year of public comment, political pressure, and industry lobbying.
They represent the most ambitious expansion of U.S. privacy regulation since voters approved the California Privacy Rights Act (CPRA) in 2020 and its provisions took effect in 2023, adding for the first time binding obligations around automated decision-making, cybersecurity audits, and ongoing risk assessments.
How We Got Here: A Contentious Rulemaking
The CPPA formally launched the rulemaking process in November 2024. At stake was how California would regulate technologies often grouped under the “AI” umbrella-term. The CPPA opted to focus narrowly on automated decision-making technology (ADMT), rather than attempting to define AI in general. This move generated both relief and frustration among stakeholders. The groups weighing in ranged from Silicon Valley giants to labor unions and gig workers, reflecting the numerous corners of the economy that automated decision-making touches.
Early drafts had explicitly mentioned “artificial intelligence” and “behavioral advertising.” By the time the final rules were adopted, those references were stripped out. Regulators said they wanted to avoid ambiguity and an overly broad sweep of technologies; critics said the changes weakened the rules.
The comment period drew over 575 pages of submissions from more than 70 organizations and individuals, including tech companies, civil society groups, labor advocates, and government officials. Gig workers described being arbitrarily deactivated by opaque algorithms. Labor unions argued the rules should have gone further to protect employees from automated monitoring. On the other side, banks, insurers, and tech firms warned that the regulations created duplicative obligations and legal uncertainty.
The CPPA staff defended the final draft as one that “strikes an appropriate balance,” while acknowledging the need to revisit these rules as technology and business practices evolve. After the July 24 vote, the agency formally submitted the package to the Office of Administrative Law, which has 30 business days to review it for procedural compliance before the rules take effect.
At today’s meeting, the CPPA Board unanimously voted to adopt a proposed rulemaking package on ADMT, cybersecurity audits, risk assessments, insurance, and CCPA updates. Now, the proposed regulations will be filed with the Office of Administrative Law.
— California Privacy Protection Agency (@CalPrivacy) July 24, 2025
Automated Decision-Making Technology (ADMT): Redefining AI Oversight
The centerpiece of the regulations is the framework for ADMT. The rules define ADMT as “any technology that processes personal information and uses computation to replace human decisionmaking, or substantially replace human decisionmaking.”
The CPPA applies these standards to what it calls “significant decisions”: choices that determine whether someone gets a job or contract, qualifies for a loan, secures housing, is admitted to a school, or receives healthcare. In practice, that means résumé-screening algorithms, tenant-screening apps, loan approval software, and healthcare eligibility tools all fall within the law’s scope.
Companies deploying ADMT for significant decisions will face several new obligations. They must provide plain-language pre-use notices so consumers understand when and how automated systems are being applied. Individuals must also be given the right to opt out or, at minimum, appeal outcomes to a qualified human reviewer with real authority to reverse the decision. Businesses are further required to conduct detailed risk assessments, documenting the data inputs, system logic, safeguards, and potential impacts. In short, if an algorithm decides whether you get hired, approved for a loan, or accepted into housing, the company has to tell you up front, offer a meaningful appeal, and prove that the system isn’t doing more harm than good. Liability also cannot be outsourced: responsibility stays with the business itself, and firms remain accountable even when they rely on third-party vendors.
Some tools are excluded—like firewalls, anti-malware, calculators, and spreadsheets—unless they are actually used to make the decision. Additionally, the CPPA tightened what counts as “meaningful human review.” Reviewers must be able to interpret the system’s output, weigh other relevant information, and have genuine authority to overturn the result.
Compliance begins on January 1, 2027.
Cybersecurity Audits: Scaling Expectations
Another pillar of the new rules is the requirement for annual cybersecurity audits. For the first time under state law, companies must undergo independent assessments of their security controls.
The audit requirement applies broadly to larger data-driven businesses. It covers companies with annual gross revenue exceeding $26.6 million that process the personal information of more than 250,000 Californians, as well as firms that derive half or more of their revenue from selling or sharing personal data.
Audits must be conducted by independent professionals who cannot report to a Chief Information Security Officer (CISO) or other executives directly responsible for cybersecurity to ensure objectivity.
The audits cover a comprehensive list of controls, from encryption and multifactor authentication to patch management and employee training, and must be certified annually to the CPPA or Attorney General if requested.
Deadlines are staggered:
- April 1, 2028: $100M+ businesses
- April 1, 2029: $50–100M businesses
- April 1, 2030: <$50M businesses
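The applicability thresholds and staggered deadlines above can be sketched as a small helper. The function name, signature, and return shape are my own illustration, not drawn from the regulation text; the dollar thresholds, consumer count, and tier dates are as described in this article:

```python
# Hypothetical helper sketching the audit-applicability logic described
# above. Thresholds and tier dates are from the article's summary of the
# rules; the function itself is illustrative only.
def audit_deadline(annual_revenue_usd, ca_consumers, data_revenue_share):
    """Return the first audit deadline for a covered business, or None."""
    covered = (annual_revenue_usd > 26.6e6 and ca_consumers > 250_000) \
              or data_revenue_share >= 0.5
    if not covered:
        return None
    if annual_revenue_usd >= 100e6:
        return "2028-04-01"
    if annual_revenue_usd >= 50e6:
        return "2029-04-01"
    return "2030-04-01"
```

For example, under this sketch a $200M-revenue firm processing data on 300,000 Californians would face the April 2028 deadline, while a small data broker earning most of its revenue from selling personal data would still be covered despite falling under the revenue-and-consumer test.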
By codifying these requirements, California is effectively setting a de facto national cybersecurity baseline, one that may exceed federal NIST standards. For businesses, the audits won’t just be about checking boxes: they could become the cost of entry for doing business in California. And because companies can’t wall off California users from the rest of their customer base, the standards are likely to spread nationally through vendor contracts and compliance frameworks.
Privacy Risk Assessments: Accountability in High-Risk Processing
The regulations also introduce mandatory privacy risk assessments, required annually for companies engaged in high-risk processing.
Triggering activities include:
- Selling or sharing personal information
- Processing sensitive personal data (including neural data, newly classified as sensitive)
- Deploying ADMT for significant decisions
- Profiling workers or students
- Training ADMT on personal data
Each assessment must document the categories of personal information processed, explain the purpose and benefits, identify potential harms and safeguards, and be submitted annually to the CPPA starting April 21, 2028, with executive attestations under penalty of perjury. That clause is designed to prevent “paper compliance”: unlike voluntary risk assessments, California’s system ties accountability directly to the personal liability of the signatories, so leaders answer personally if their systems mishandle sensitive data.
Other Notable Provisions
Beyond these headline rules, the CPPA also addressed sector-specific issues and tied in earlier reforms. For the insurance industry, the regulations clarify how the CCPA applies to companies that routinely handle sensitive personal and health data—an area where compliance expectations were often unclear. The rules also fold in California’s Delete Act, which takes effect on August 1, 2026. That law will give consumers a single, one-step mechanism to request deletion of their personal information across all registered data brokers, closing a major loophole in the data marketplace and complementing the broader CCPA framework. Together, these measures reinforce California’s role as a privacy trendsetter, creating tools that other states are likely to copy as consumers demand similar rights.
Implications for California
California has long served as the nation’s privacy laboratory, pioneering protections that often ripple across the country. This framework places California among the first U.S. jurisdictions to regulate algorithmic governance. With these rules, the state positions itself alongside the EU AI Act and the Colorado AI Act, creating one of the world’s most demanding compliance regimes.
However, the rules also set up potential conflict with the federal government. The America’s AI Action Plan, issued earlier this year, emphasizes innovation over regulation and warns that restrictive state-level rules could jeopardize federal AI funding decisions. This tension may play out in future policy disputes.
For California businesses, the impact is immediate. Companies must begin preparing governance frameworks, reviewing vendor contracts, and updating consumer-facing disclosures now. These compliance efforts build on earlier developments in California privacy law, including the creation of a dedicated Privacy Law Specialization for attorneys. This specialization will certify legal experts equipped to navigate the state’s intricate web of statutes and regulations, from ADMT disclosures to phased cybersecurity audits. Compliance will be expensive, but it will also drive demand for new privacy officers, auditors, and legal specialists. Mid-sized firms may struggle, while larger companies may gain an edge by showing early compliance. For businesses outside California, the ripple effects may be unavoidable because national companies will have to standardize around the state’s higher bar.
The CPPA’s finalized regulations mark a structural turning point in U.S. privacy and AI governance. Obligations begin as early as 2026 and accelerate through 2027–2030, giving businesses a narrow window to adapt. For consumers, the rules promise greater transparency and the right to challenge opaque algorithms. For businesses, they establish California as the toughest compliance environment in the country, forcing firms to rethink how they handle sensitive data, automate decisions, and manage cybersecurity. California is once again setting the tone for global debates on privacy, cybersecurity, and AI. Companies that fail to keep pace will not only face regulatory risk but could also lose consumer trust in the world’s fifth-largest economy. Just as California’s auto emissions standards reshaped national car design, its privacy rules are likely to shape national policy on data and AI. Other states will borrow from California, and Washington will eventually have to decide whether to match it or rein it in.
What starts in Sacramento rarely stays there. From Los Angeles to Silicon Valley, California just set the blueprint for America’s data and AI future.