
Three principles for growing an AI ecosystem that works for people and planet



By now, many investors, organizations, and entrepreneurs are deeply committed to building an Artificial Intelligence (AI) ecosystem that prioritizes agency, equity, and sustainability for people and the planet. Yet current investment options remain limited. Governments and funders looking to support public-benefit AI face an unsatisfying choice: invest in costly “sovereign AI” infrastructure (high-end compute, foundation models, energy) with no clear path to strategic autonomy or to matching the frontier capabilities and applications of U.S. or Chinese hyperscalers, or fund a scattered portfolio of downstream “AI for good” applications, many of which feel like solutions in search of problems. Neither option helps communities generate the context-rich data needed to tackle shared challenges, and both fail to address the concentration of power that persists in a handful of private companies.

An alternative approach to building AI systems, grounded in the science of collective intelligence (CI), can address these shortcomings at once.

As we explored together recently in discussions with entrepreneurs and investors at Human+Tech Week, it is now technically feasible (with advances in privacy-preserving modelling techniques) and inexpensive (due to ever-decreasing costs of computation and software) to shift from building centralized compute clusters and large language models (LLMs) to building smaller, decentralized local language models (or “LocalMs”) that capture and amplify—rather than extract—the intelligence of individuals, teams, and communities. The ultimate vision for this approach would be complementing efforts to achieve monolithic Artificial General Intelligence with a bottom-up movement to grow ecosystems of intelligence: thousands or even millions of intelligent communities with sovereignty over their data and culture, meaningful equity in AI infrastructure and applications, and agency to use these systems to self-organize around shared priorities.

Elements of this vision already exist in technical prototypes, policy proposals, and committed communities of technology entrepreneurs, scientists, and investors. What is needed is a concerted effort across these actors to stitch together a more coherent field capable of innovating impactful use cases that cut through the noise. To this end, we distilled three guiding principles for framing, designing, and coordinating an AI ecosystem that works for people and planet.

1. Frame AI as a social technology

How we talk about AI is important. AI that works for people must recognize human contributions to AI: LLMs are pre-trained on vast troves of human-created and human-tagged internet content, refined through reinforcement learning from human feedback, and further tuned through human usage patterns. AI consistently performs best as a hybrid system combining machine speed and scale with collective human expertise and know-how.

And yet, these human contributions to AI are absent in mainstream conversations. New narratives are needed to shift AI policy debates from a paradigm of protecting humans from AI to ensuring that human contributions to hybrid human-AI systems—data, deliberation, judgement, intuition, and social context—are protected and fairly valued. As some of us (TK, AP, MR) have recently proposed, there is an opportunity to reframe generative AI as generative collective intelligence or “GenCI”: a social technology that combines algorithmic capacity with human expertise to address complex, real-world challenges that humans or machines could not address alone.

2. Design AI as loyal to human agency

Humans are agents and human agency should be the central concern in AI development—and yet, amid today’s enthusiasm for autonomous AI systems or “agentic AI,” these fundamental truisms require explicit defence. Investors, policymakers, and end-users should insist on AI algorithms, architectures, and approaches that amplify rather than extract human agency and social intelligence.

It is possible to build algorithms that capture and elevate shared beliefs, purpose, and action potential in groups and organizations, as platforms like Common Good AI are demonstrating. In a similar vein, approaches like Pol.is or Deliberation.io use summarisation models and adaptive polling to scale inclusive, grounded dialogue while preserving nuance and diversity of voices. Approaches to human-AI teaming, such as vibe teaming, can position AI tools to support the creativity and quality of human-to-human problem-solving.

Emerging AI agents, meanwhile, can and should be “loyal by design”—treated as fiduciaries for human individuals, teams, or communities (rather than for companies alone)—curating data and training LocalMs on their behalf. Innovative data governance (following models like the Human Genome Project) and privacy-preserving machine learning techniques can help aggregate LocalMs into larger, community-governed ensembles and enterprises like trusts or cooperatives.
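To make the aggregation step concrete, here is a minimal sketch of federated averaging, one widely used privacy-preserving technique for combining locally trained models without pooling the underlying data. The function and variable names are illustrative assumptions rather than the method of any platform named above; a real deployment would add secure aggregation, differential privacy, and community-set governance rules.

# Minimal sketch (illustrative names, Python/NumPy): federated averaging of
# locally trained model weights. Raw community data never leaves its home;
# only weight updates are shared and combined.
import numpy as np

def federated_average(local_updates):
    # local_updates: list of (weights, num_examples) pairs, where weights maps
    # parameter names to numpy arrays trained on each community's own data.
    total = sum(n for _, n in local_updates)
    return {
        name: sum(w[name] * (n / total) for w, n in local_updates)
        for name in local_updates[0][0]
    }

# Example: three communities of different sizes contribute local updates.
rng = np.random.default_rng(0)
updates = [
    ({"w": rng.normal(size=(4, 2)), "b": rng.normal(size=2)}, 120),
    ({"w": rng.normal(size=(4, 2)), "b": rng.normal(size=2)}, 80),
    ({"w": rng.normal(size=(4, 2)), "b": rng.normal(size=2)}, 200),
]
community_model = federated_average(updates)
print({name: arr.shape for name, arr in community_model.items()})

In this sketch, weighting by example count is a technical choice; who may contribute updates and how they are audited would be set by the community-governed trust or cooperative described above.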

3. Coordinate around big-bet applications

Innovative applications can demonstrate why growing an alternative AI ecosystem matters. AI systems grounded in CI science and design principles will have a natural competitive advantage in addressing challenges that no single actor can solve alone: a regional or global green-energy trading platform that uses LocalMs to transparently validate and exchange carbon-intensity data from the ground up; trusted AI-driven public services that use LocalMs to federate sensitive personal and government datasets; or the scale-up of pioneering prototypes like Interspecies Money—which uses CI design principles to build AI that represents and values the agency of non-human life. To mobilize the scale of infrastructure (high-end compute, data, and talent) required to develop world-leading use cases, an “Airbus for AI” model—a public–private consortium of middle powers’ national AI labs—could collaborate to take these ideas to market as public utilities (see initial proposals for Asia and Europe).




AI Already Surpasses Average Human Ability In Many Domains: Google DeepMind Scientist



Artificial intelligence has already exceeded human abilities in certain areas, according to Google DeepMind’s chief scientist, Jeff Dean. He observes that many of today’s leading AI models are capable of handling a wide variety of “non-physical tasks” better than an average person can.

During his appearance on the Moonshot Podcast, Dean highlighted that most individuals find it quite challenging when faced with tasks they are not used to doing. In contrast, he pointed out that modern AI systems are often able to tackle these unfamiliar problems with a fair degree of success.

“Most people are not that good at a random task that they’ve never done before, and some of the models we have today are actually pretty reasonable at most things,” he said. 

Dean made it clear that there is a significant difference between outperforming everyday individuals and reaching the standards set by leading experts. He emphasised that although AI can manage a broad spectrum of cognitive tasks with competence, these systems still have their shortcomings and should not be considered flawless.

“They will fail at a lot of things; they’re not human-expert level in some things,” he said.

Dean also noted AI’s remarkable capacity to apply its knowledge across a variety of fields, something many people find difficult to do as effectively, according to the seasoned software engineer.

When questioned about whether computers might soon outpace humans in generating scientific or engineering breakthroughs, he indicated that such a transition is already underway in certain specialised areas. “We’re actually probably already close to that in some domains,” he said.

Another aspect he touched upon in the podcast is his hesitation to discuss artificial general intelligence. “The reason I tend to steer away from AGI conversations is lots of people have very different definitions of it, and the difficulty of the problem varies by factors of a trillion,” the top scientist highlighted.

Demis Hassabis, CEO of Google DeepMind and Dean’s boss, holds a more hopeful view on AGI. In a recent interview with WIRED, he expressed confidence that a major breakthrough in AGI could be realised within the next five to ten years, signalling a significant advance in AI capabilities.

Google DeepMind is the tech giant’s primary AI research lab. It was formed in 2023 through the merger of DeepMind and Google Brain, the company’s in-house AI research team.




Is AI making human research obsolete? This library lecture has answers.



SAGINAW, MI — A thought-provoking examination of artificial intelligence’s influence on reading, writing, and research will be presented from 6:30 p.m. to 7:30 p.m. on Wednesday, Sept. 24, at Saginaw-based Hoyt Public Library.

Erik Trump, a professor of political science at Saginaw Valley State University, will deliver a lecture, titled “AI Reading, Writing, and Research: What’s Left Behind for Us?”

The presentation will explore whether AI represents a pivotal moment for public knowledge, literacy, and libraries, organizers said.

Trump, who serves as director of the Center for Excellence in Teaching and Learning at SVSU, will demonstrate AI tools during the presentation while examining how generative AI is reshaping fundamental aspects of learning and research.

The lecture will address both contemporary concerns and long-standing cultural anxieties about artificial intelligence.

Trump brings extensive expertise to the topic, with research interests spanning politics, culture, and technology. His recent work has focused on AI’s transformation of education and everyday life.

He has authored several books, including “The Architecture of Survival: Setting and Politics in Apocalypse Films,” co-written with a former student.

The event at the library, 505 Janes in Saginaw, is free and open to the public, but registration is required.

Interested attendees can register through the day of the event at saginawlibrary.org/events.

Generative AI was used to organize and structure this story, based on data provided by Hoyt Library organizers. It was reviewed and edited by MLive staff.





Big Tech’s $4 Trillion Artificial Intelligence (AI) Spending Spree Could Make These 3 Chip Stocks Huge Winners



Big tech companies continue to race to build out AI capacity.

Nvidia (NVDA -3.38%) recently said it expects artificial intelligence (AI) infrastructure spending to jump to between $3 trillion and $4 trillion by the end of the decade. That’s just a massive number. Cloud computing and other big technology companies continue to race to build out AI capacity, which puts chipmakers in an enviable position.

Nvidia has been the big winner so far, but it’s not the only one. Let’s look at the three chipmakers set to benefit most from this $3 trillion to $4 trillion opportunity.

Nvidia

Nvidia is working at the center of AI. Its graphics processing units (GPUs) went from powering video games to becoming the standard for training large language models, and the company managed to turn that into a wide moat. Its CUDA software platform was the key to this happening. By making it free and getting it into research labs and universities early on, Nvidia ensured that developers learned to program GPUs on CUDA. Once that happened, companies were locked into its software ecosystem.

Nvidia has been just as smart on the networking side. Its proprietary NVLink connection allows GPUs to work together as a single unit, a huge benefit for AI workloads. Meanwhile, its acquisition of Mellanox gave it even more strength in networking, ensuring its chips could support increasingly massive AI clusters. The strength of its networking portfolio showed up last quarter, with networking data center revenue nearly doubling to $7.3 billion.

With both software and networking advantages, Nvidia is positioned to continue to be the leader in the AI infrastructure buildout. While it may not hold its 90% GPU market share, it’s still the company to beat.

Advanced Micro Devices

Advanced Micro Devices (AMD -3.52%) lives in Nvidia’s shadow, but the market is shifting in a way that plays to AMD’s strengths. Training dominated the first wave of AI, and Nvidia’s CUDA gave it the edge. However, inference is now where demand is growing fastest, and AMD has already won some important business here. Several of the largest AI companies are using its GPUs for inference, and it counts seven of the top 10 AI players as customers.

AMD is also part of the UALink Consortium, which is trying to build an open interconnect standard to rival Nvidia’s NVLink. If that happens, it could give data centers more options for building out clusters and cut into one of Nvidia’s advantages. That’s still getting started, but it shows how AMD is working with others to narrow Nvidia’s moat.

The company isn’t just about GPUs, either. Its EPYC central processing units (CPUs) are gaining share in data centers, and it still has a strong PC and gaming chip business. AMD doesn’t need to overtake Nvidia to win. Just getting a bigger slice of inference demand, while keeping its CPU business growing, can make it one of the bigger long-term beneficiaries of the AI buildout.


Broadcom

Broadcom (AVGO -3.65%) has been taking a different approach when it comes to the AI buildout, but it has also seen explosive data center growth. While Nvidia and AMD battle over GPUs, Broadcom built a strong position in the data center networking space. Its Ethernet switches, optical interconnects, and digital signal processors are critical in moving massive amounts of data, and as AI clusters grow in size, networking needs grow right alongside them. That’s why its AI networking revenue jumped 70% last quarter.

However, it may have an even bigger opportunity with custom AI chips. Broadcom has long been a leader in the design of application-specific integrated circuits, and it has recently begun working with hyperscalers (companies that operate massive data centers) that want to boost performance and lower costs by developing chips tailored for their AI workloads.

It helped Alphabet create its tensor processing units, and now it’s working with several other large customers on new designs. Management says the three customers furthest along could each deploy AI chip clusters of 1 million units by its fiscal 2027, which would represent a $60 billion to $90 billion opportunity. That number doesn’t even include newer relationships, such as the one it’s established with Apple.

On top of all that, Broadcom has VMware. The unit is shifting to subscriptions and helping enterprises run AI across hybrid and multicloud environments, providing Broadcom with another avenue of growth. Put everything together, and you have another chip company set to benefit enormously as AI infrastructure spending ramps up.

Geoffrey Seiler has positions in Alphabet. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Apple, and Nvidia. The Motley Fool recommends Broadcom. The Motley Fool has a disclosure policy.


