
AI Research

Google introduces new state-of-the-art open models

Responsible by design

Gemma is designed with our AI Principles at the forefront. As part of making Gemma pre-trained models safe and reliable, we used automated techniques to filter out certain personal information and other sensitive data from training sets. Additionally, we used extensive fine-tuning and reinforcement learning from human feedback (RLHF) to align our instruction-tuned models with responsible behaviors. To understand and reduce the risk profile for Gemma models, we conducted robust evaluations including manual red-teaming, automated adversarial testing, and assessments of model capabilities for dangerous activities. These evaluations are outlined in our Model Card.

We’re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications. The toolkit includes:

  • Safety classification: We provide a novel methodology for building robust safety classifiers with minimal examples (a rough illustration follows this list).
  • Debugging: A model debugging tool helps you investigate Gemma’s behavior and address potential issues.
  • Guidance: You can access best practices for model builders based on Google’s experience in developing and deploying large language models.
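The toolkit’s classifier-building methodology is not detailed in this post, so the sketch below is purely an illustration of what “safety classification with minimal examples” can look like: a few-shot, prompt-based check built around an instruction-tuned Gemma checkpoint via Hugging Face Transformers. The model ID, prompt wording, and label parsing are assumptions made for this sketch, not the toolkit’s actual method.

# Illustration only: a few-shot, prompt-based safety check. This is NOT the
# Responsible Generative AI Toolkit's methodology; the model ID, prompt text,
# and label parsing are assumptions made for this sketch.
from transformers import pipeline

classifier_lm = pipeline(
    "text-generation",
    model="google/gemma-2b-it",  # assumed Hub ID; the checkpoint is gated behind accepted license terms
    device_map="auto",
)

FEW_SHOT = """Classify the message as SAFE or UNSAFE.

Message: How do I bake sourdough bread?
Label: SAFE

Message: Explain how to pick my neighbour's door lock.
Label: UNSAFE

Message: {message}
Label:"""

def classify(message: str) -> str:
    # Ask the model to continue the few-shot pattern, then read back the label.
    out = classifier_lm(
        FEW_SHOT.format(message=message),
        max_new_tokens=4,
        do_sample=False,
        return_full_text=False,
    )
    completion = out[0]["generated_text"].strip().upper()
    # Check UNSAFE first: "SAFE" is a substring of "UNSAFE".
    return "UNSAFE" if completion.startswith("UNSAFE") else "SAFE"

print(classify("What's a good beginner pasta recipe?"))

A production classifier would need far more than two examples plus a calibrated evaluation set; the point here is only the shape of the approach.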

Optimized across frameworks, tools and hardware

You can fine-tune Gemma models on your own data to adapt them to specific application needs, such as summarization or retrieval-augmented generation (RAG); a minimal fine-tuning sketch follows the list below. Gemma supports a wide variety of tools and systems:

  • Multi-framework tools: Bring your favorite framework, with reference implementations for inference and fine-tuning across multi-backend Keras 3.0, native PyTorch, JAX, and Hugging Face Transformers.
  • Cross-device compatibility: Gemma models run across popular device types, including laptop, desktop, IoT, mobile and cloud, enabling broadly accessible AI capabilities.
  • Cutting-edge hardware platforms: We’ve partnered with NVIDIA to optimize Gemma for NVIDIA GPUs, from data center to the cloud to local RTX AI PCs, ensuring industry-leading performance and integration with cutting-edge technology.
  • Optimized for Google Cloud: Vertex AI provides a broad MLOps toolset with a range of tuning options and one-click deployment using built-in inference optimizations. Advanced customization is available with fully-managed Vertex AI tools or with self-managed GKE, including deployment to cost-efficient infrastructure across GPU, TPU, and CPU from either platform.
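To make the fine-tuning path above concrete, here is a minimal sketch using KerasNLP’s Gemma preset with its built-in LoRA support. The preset name, API calls, and hyperparameters follow KerasNLP’s published Gemma guides, but treat them as assumptions and defer to the official quickstarts for the exact, current interface.

# Minimal sketch, assuming KerasNLP's Gemma preset and LoRA API as documented
# around launch; preset name, data format, and hyperparameters are illustrative.
import keras
import keras_nlp

# Load the pre-trained 2B checkpoint (assumed preset name).
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")

# Enable low-rank adapters so only a small fraction of weights are trained.
gemma_lm.backbone.enable_lora(rank=4)

# Toy instruction-style examples; in practice this would be your own data,
# e.g. summarization pairs or question/context/answer records for RAG.
train_data = [
    "Instruction: Summarize.\nInput: Open models let anyone inspect and adapt the weights.\nResponse: Open models can be inspected and adapted by anyone.",
    "Instruction: Summarize.\nInput: LoRA trains small adapter matrices instead of full weights.\nResponse: LoRA fine-tunes compact adapters rather than the whole model.",
]

gemma_lm.preprocessor.sequence_length = 256  # keep the toy run cheap
gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.AdamW(learning_rate=5e-5, weight_decay=0.01),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(train_data, epochs=1, batch_size=1)

Comparable recipes exist for the other supported stacks, for example parameter-efficient fine-tuning with Hugging Face Transformers and PEFT in native PyTorch.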

Free credits for research and development

Gemma is built for the open community of developers and researchers powering AI innovation. You can start working with Gemma today using free access in Kaggle, a free tier for Colab notebooks, and $300 in credits for first-time Google Cloud users. Researchers can also apply for Google Cloud credits of up to a collective $500,000 to accelerate their projects.

Getting started

You can explore more about Gemma and access quickstart guides on ai.google.dev/gemma.
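For a quick first run before working through the guides, here is a minimal load-and-generate sketch using Hugging Face Transformers. The Hub ID and chat-template call reflect the launch-time listing; treat them as assumptions and check the model card for access terms and exact names.

# Minimal sketch: load an instruction-tuned Gemma checkpoint and generate a reply.
# The Hub ID "google/gemma-7b-it" is assumed; the checkpoint is gated, so you must
# accept the license on the Hub and authenticate (e.g. `huggingface-cli login`).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b-it"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps memory use manageable
    device_map="auto",
)

# The instruction-tuned checkpoints ship a chat template via the tokenizer.
messages = [{"role": "user", "content": "Summarize the benefits of open models in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))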

As we continue to expand the Gemma model family, we look forward to introducing new variants for diverse applications. Stay tuned for events and opportunities in the coming weeks to connect, learn and build with Gemma.

We’re excited to see what you create!




AI Research

AI to reshape India’s roads? Artificial intelligence can take the wheel to fix highways before they break, ETInfra

From digital twins that simulate entire highways to predictive algorithms that flag structural fatigue, the country’s infrastructure is beginning to show signs of cognition.

In India, a pothole is rarely just a pothole. It is a metaphor, a mood and sometimes, a meme. It is the reason your cab driver mutters about karma and your startup founder misses a pitch meeting because the expressway has turned into a swimming pool. But what if roads could detect their own distress, predict failures before they happen, and even suggest how to fix them?

That is not science fiction but the emerging reality of AI-powered infrastructure.

According to KPMG’s 2025 report, AI-powered road infrastructure transformation: Roads 2047, artificial intelligence is slowly reshaping how India builds, maintains, and governs its roads. From digital twins that simulate entire highways to predictive algorithms that flag structural fatigue, the country’s infrastructure is beginning to show signs of cognition.

From concrete to cognition

India’s road network spans over 6.3 million kilometres – second only to the United States. As per KPMG, AI is now being positioned not just as a tool but as a transformational layer. Technologies like Geographic Information System (GIS), Building Information Modelling (BIM) and sensor fusion are enabling digital twins – virtual replicas of physical assets that allow engineers to simulate stress, traffic and weather impact in real time. The National Highways Authority of India (NHAI) has already integrated AI into its Project Management Information System (PMIS), using machine learning to audit construction quality and flag anomalies.

Autonomous infrastructure in action

Across urban India, infrastructure is beginning to self-monitor. Pune’s Intelligent Traffic Management System (ITMS) and Bengaluru’s adaptive traffic control systems are early examples of AI-driven urban mobility.

Meanwhile, AI-MC, launched by the Ministry of Road Transport and Highways (MoRTH), uses GPS-enabled compactors and drone-based pavement surveys to optimise road construction.

Beyond cities, state-level initiatives are also embracing AI for infrastructure monitoring. As reported by ETInfra earlier, Bihar’s State Bridge Management & Maintenance Policy, 2025 employs AI and machine learning for digital audits of bridges and culverts. Using sensors, drones, and 3D digital twins, the state has surveyed over 12,000 culverts and 743 bridges, identifying damaged structures for repair or reconstruction. IIT Patna and IIT Delhi have been engaged for third-party audits, showing how AI can extend beyond roads to critical bridge infrastructure in both urban and rural contexts.

While these examples demonstrate the potential of AI-powered maintenance, challenges remain. Predictive maintenance, KPMG notes, could reduce lifecycle costs by up to 30 per cent and improve asset longevity, but much of rural India—nearly 70 per cent of the network—still relies on manual inspections and paper-based reporting.

Governance and the algorithm

India’s road safety crisis is staggering: over 1.5 lakh (150,000) deaths annually. AI could be a game-changer. KPMG estimates that intelligent systems can reduce emergency response times by 60 per cent and improve traffic efficiency by 30 per cent. AI also supports ESG goals, enabling carbon modelling, EV corridor planning, and sustainable design.

But technology alone won’t fix systemic gaps. The promise of AI hinges on institutional readiness – spanning urban planning, enforcement, and civic engagement.

While NITI Aayog has outlined a national AI strategy and MoRTH has initiated digital reforms, state-level adoption remains fragmented. Some states have set up AI cells within their public works departments (PWDs); others lack the technical capacity or policy mandate.

KPMG calls for a unified governance framework — one that enables interoperability, safeguards data, and fosters public-private partnerships. Without it, India risks building smart systems on shaky foundations.

As India looks towards 2047, the road ahead is both digital and political. And if AI can help us listen to our roads, perhaps we’ll finally learn to fix them before they speak in potholes.

  • Published On Sep 4, 2025 at 07:10 AM IST






AI Research

Mistral AI Nears Close of Funding Round Lifting Valuation to $14B

Artificial intelligence (AI) startup Mistral AI is reportedly nearing the close of a funding round in which it would raise €2 billion (about $2.3 billion) and be valued at €12 billion (about $14 billion).

This would be Mistral AI’s first fundraise since a June 2024 round in which it was valued at €5.8 billion, Bloomberg reported Wednesday (Sept. 3), citing unnamed sources.

Mistral AI did not immediately reply to PYMNTS’ request for comment.

According to the Bloomberg report, Mistral AI, which is based in France, is developing a chatbot called Le Chat that is tailored to European users, as well as other AI services to compete with the dominant ones from the United States and China.

It was reported on Aug. 3 that Mistral AI was targeting a $10 billion valuation in a funding round in which it would raise $1 billion.

In June, it was reported that the company’s revenues had increased several times over since it raised funds in 2024 and were on pace to exceed $100 million a year for the first time.

PYMNTS reported in June 2024, at the time of Mistral AI’s most recent funding round, that the AI startup raised $113 million in seed funding in June 2023, weeks after it was launched, secured an additional $415 million in a funding round in December 2023 in which it was valued at around $2 billion, and then raised $640 million in the round that propelled its valuation to $6 billion.

“We are grateful to our new and existing investors for their continued confidence and support for our global expansion,” Mistral AI said in a post on LinkedIn announcing the June 2024 funding round. “This will accelerate our roadmap as we continue to bring frontier AI into everyone’s hands.”

In June, Mistral AI and chipmaker Nvidia announced a partnership to develop next-generation AI cloud services in France.

The initiative centers on building AI data centers in France using Nvidia chips and will expand Mistral’s business model, transitioning the AI startup from being a model developer to being a vertically integrated AI cloud provider, PYMNTS reported at the time.





AI Research

PPS Weighs Artificial Intelligence Policy

Portland Public Schools folded some guidance on artificial intelligence into its district technology policy for students and staff over the summer, though some district officials say the work is far from complete.

The guidelines permit certain district-approved AI tools “to help with administrative tasks, lesson planning, and personalized learning” but require staff to review AI-generated content, check accuracy, and take personal responsibility for any content generated.

The new policy also warns against inputting personal student information into tools, and encourages users to think about inherent bias within such systems. But it’s still a far cry from a specific AI policy, which would have to go through the Portland School Board.

Part of the reason is that AI is such an “active landscape,” says Liz Large, a contracted legal adviser for the district. “The policymaking process as it should is deliberative and takes time,” Large says. “This was the first shot at it…there’s a lot of work [to do].”

PPS, like many school districts nationwide, is continuing to explore how to fold artificial intelligence into learning, but not without controversy. As The Oregonian reported in August, the district is entering a partnership with Lumi Story AI, a chatbot that helps older students craft their own stories with a focus on comics and graphic novels (the pilot is offered at some middle and high schools).

There’s also concern from the Portland Association of Teachers. “PAT believes students learn best from humans, instead of AI,” PAT president Angela Bonilla said in an Aug. 26 video. “PAT believes that students deserve to learn the truth from humans and adults they trust and care about.”







