Tools & Platforms
Google and California Community Colleges launch largest higher education AI partnership in the US, equipping millions of students with access to free training

In the largest higher education deal of its kind in the US, Google is investing in workforce development for the future, putting California’s community college students at the forefront of the AI-driven economy.
“This collaboration with Google is a monumental step forward for the California Community Colleges,” explains Don Daves-Rougeaux, Senior Advisor to the Chancellor of the California Community Colleges on Workforce Development, Strategic Partnerships, and GenAI.
“Providing our students with access to world-class AI training and professional certificates ensures they have the skills necessary to thrive in high-growth industries and contribute to California’s economic prosperity. This partnership directly supports our Vision 2030 commitment to student success and workforce readiness. Additionally, offering access to AI tools with data protections and advanced functionality for free ensures that all learners have equitable access to the tools they need to leverage the skills they’re learning, and saves California’s community colleges millions of dollars in potential tool costs.”
All students, faculty, staff and classified professionals at the colleges will be able to access Gemini, Google’s generative AI tool, with data protections that allow them to use it safely.
All students and faculty will also receive free access to Google Career Certificates, Google AI Essentials, and Prompting Essentials, providing practical training for in-demand jobs.
“Technology skills, especially in areas like artificial intelligence, are critical for the future workforce,” adds Bryan Lee, Vice President of Google for Education Go-to-Market. “We are thrilled to partner with the California Community Colleges, the nation’s largest higher education system, to bring valuable training and tools like Google Career Certificates, AI Essentials, and Gemini to millions of students. This collaboration underscores our commitment to creating economic opportunity for everyone.”
The ETIH Innovation Awards 2026
The EdTech Innovation Hub Awards celebrate excellence in global education technology, with a particular focus on workforce development, AI integration, and innovative learning solutions across all stages of education.
Now open for entries, the ETIH Innovation Awards 2026 recognize the companies, platforms, and individuals driving transformation in the sector, from AI-driven assessment tools and personalized learning systems, to upskilling solutions and digital platforms that connect learners with real-world outcomes.
Submissions are open to organizations across the UK, the Americas, and internationally. Entries should highlight measurable impact, whether in K–12 classrooms, higher education institutions, or lifelong learning settings.
Winners will be announced on 14 January 2026 as part of an online showcase featuring expert commentary on emerging trends and standout innovation. All winners and finalists will also be featured in our first print magazine, to be distributed at BETT 2026.
Common Pitfalls That Keep Projects From Taking Off

The promise of AI in the world of tax is compelling: streamlined compliance, predictive insights, and newfound efficiency. Yet for all the enthusiasm, many tax departments find their ambitious AI projects grounded before they ever reach cruising altitude. The reasons for this often have less to do with the technology itself and more to do with the realities of data, people, and processes.
Starting Smart, Not Big
The journey from understanding AI concepts to actually implementing them is where the first stumbles often occur. A common misstep is starting too big. Tax leaders sometimes try to redesign entire processes at once, hoping to deliver an end-to-end transformation right out of the gate. The result is usually the opposite: projects drag on, resources are stretched thin, and momentum is lost.
Another common trap is picking the wrong first project, jumping straight into high-stakes initiatives that require heavy integrations, while ignoring smaller wins like data extraction. The safer bet is to start with a narrow, low-risk pilot like automating some spreadsheet workflows. It’s the kind of pilot you can complete in a month or two, and if it doesn’t work out, nothing’s lost and you simply fall back on your manual process.
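A narrow spreadsheet pilot of the kind described above can be sketched in a few lines. This is a minimal, illustrative example, not a real tax workflow: the file names, column names, and the "missing tax ID" rule are all hypothetical, and it uses only the standard library so it can be tried and discarded at no cost.

```python
# Consolidate per-entity invoice exports (CSV) and flag rows with a missing
# tax ID for manual review -- the sort of narrow, reversible pilot where a
# failure simply means falling back to the manual process.
import csv
import io

# Stand-ins for two exported spreadsheets (in practice, read from files).
exports = {
    "entity_a.csv": "invoice_id,amount,tax_id\nA-001,100.00,DE123\nA-002,250.00,\n",
    "entity_b.csv": "invoice_id,amount,tax_id\nB-001,75.50,FR456\n",
}

def consolidate(files):
    """Merge rows from all exports; split out rows needing manual review."""
    clean, needs_review = [], []
    for name, text in files.items():
        for row in csv.DictReader(io.StringIO(text)):
            row["source"] = name  # keep provenance for the reviewer
            (needs_review if not row["tax_id"].strip() else clean).append(row)
    return clean, needs_review

clean, review = consolidate(exports)
print(len(clean), "clean rows,", len(review), "flagged for review")
```

Because the pilot only reorganizes data the team already has, abandoning it costs nothing, which is exactly what makes it a safe first project.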
There’s also a tendency to focus on the tool instead of the outcome. AI gets a lot of attention, and some teams feel pressure to use it even when a simpler automation approach would do the job. The label “AI-powered” shouldn’t matter as much as whether the solution solves the problem effectively.
In short, the common mistakes are clear: trying to boil the ocean, chasing perfection too soon, or letting the hype around AI dictate decisions. The smarter path is to start small and scale thoughtfully from there.
Too Many Projects, Not Enough Progress
With all the buzz around generative AI, many tax teams fall into the trap of running pilot after pilot. For example, a tax team might launch pilots for AI-driven invoice scanning, chatbot support for tax queries, and predictive analytics for audit risks. Each pilot sounds promising, but with limited staff and budget, none of them gets the attention needed to succeed. Six months later, the team has three unfinished projects, no live solution, and a frustrated leadership asking why AI hasn’t delivered. This flurry of activity creates the illusion of progress but results in a trail of half-finished experiments.
This “pilot fatigue” often comes from top-down pressure to be seen as innovating with AI. Leaders want momentum, but without focus, the energy gets diluted. Instead of proving value, the department ends up with scattered efforts and no clear win to point to.
The way forward is prioritization. Not every idea deserves a pilot, and not every pilot should move ahead at the same time. The most successful teams pick a few feasible projects, give them proper resources, and see them through beyond the prototype stage. In the end, it’s better to have one working solution in production than a stack of unfinished experiments.
From Prototype to Production
A common stumbling block for tax teams is underestimating the leap from prototype to production. Some estimates place the AI project failure rate as high as 80%, almost double the rate of corporate IT project failures. Building a proof of concept in a few weeks is one thing, but turning it into a tool people rely on every day is something else entirely. This is where many AI projects stall and why so many never make it beyond the pilot stage.
The problem usually isn’t the technology itself. It’s the messy reality of moving from a controlled demo into a live environment. A prototype might run smoothly on a clean sample dataset, but in production the AI has to handle the company’s actual data that may be incomplete, inconsistent, or scattered across systems. Cleaning, organizing, and integrating that information is often most of the work, yet it’s rarely factored into early pilots.
Integration poses another challenge. A model that runs neatly in a Jupyter notebook isn’t enough. To be production-ready, it must plug into existing workflows, interact with legacy systems, and be supported with monitoring and error handling. That typically requires a broader team of engineers, operations specialists, even designers. These are roles many tax departments don’t have readily available. Without them, promising pilots get stuck in limbo.
The lesson is simple: tax teams need to plan from day one for data readiness, system integration, and long-term ownership. Without that preparation, pilots risk becoming one-off experiments that never make it past the demo stage.
Building on a Shaky Data Foundation
AI projects succeed or fail on the quality of their data. For tax teams, that’s often the first and toughest hurdle. Information is spread across different systems, stored in inconsistent formats, and sometimes incomplete. In many cases, key details are still buried in PDFs or email threads instead of structured databases. When an AI model has to work with that kind of patchy input, the results are bound to be flawed.
The unglamorous but essential part of AI is cleaning data and building reliable pipelines to feed information into the system. It’s rarely the exciting part, but it’s the foundation: without it, no model will perform consistently in production. The challenge is that, in the middle of all the AI hype, executives are often more willing to fund the “flashy” AI projects than the “boring” data cleanup work that actually makes them possible.
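The cleanup work described above is mostly mundane normalization. The sketch below shows the flavor of it, standardizing dates and amounts that arrive in inconsistent formats; the field names and formats are illustrative assumptions, and a real pipeline would add logging, validation reports, and a route for records that fail to parse.

```python
# Normalize inconsistent date and amount formats so downstream tools see
# one canonical representation -- the "boring" foundation of an AI pipeline.
from datetime import datetime

RAW = [
    {"date": "2024-03-01", "amount": "1,200.50"},
    {"date": "01/03/2024", "amount": "$300"},
    {"date": "1 Mar 2024", "amount": "450.00 "},
]

DATE_FORMATS = ("%Y-%m-%d", "%d/%m/%Y", "%d %b %Y")

def parse_date(s):
    """Try each known format; fail loudly on anything unrecognized."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(s, fmt).date()
        except ValueError:
            pass
    raise ValueError(f"unrecognized date: {s!r}")

def parse_amount(s):
    """Strip currency symbols, whitespace, and thousands separators."""
    return float(s.strip().lstrip("$").replace(",", ""))

cleaned = [{"date": parse_date(r["date"]), "amount": parse_amount(r["amount"])}
           for r in RAW]
print(cleaned)
```

Failing loudly on unrecognized inputs, rather than guessing, is deliberate: silent coercion is how bad records slip into production unnoticed.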
The takeaway is simple: treat data readiness as a core step in your AI journey, not an afterthought. A few weeks spent getting the data right can save months of wasted effort later.
Automating a Broken Process
A common pitfall for tax teams is dropping AI into processes that are already complex or inefficient. Automating a clunky workflow doesn’t fix the problems; it just makes them harder to manage.
AI adoption isn’t about layering a shiny new tool on top of old habits. It’s about rethinking the process as a whole. If AI takes over Task A, then Tasks B and C may need to change too. Reviewing the process upfront makes it easier to spot redundancies and cut steps that no longer add value.
The takeaway is simple: don’t just automate what you already do. Use AI as a chance to simplify and modernize. Otherwise, you risk hard-wiring inefficiency into the future of your operations.
The Trap of 100% Accuracy
Tax professionals are trained to value precision, so it’s no surprise many are reluctant to trust an AI tool unless it delivers flawless answers. The problem is, that bar is unrealistic with generative AI. These systems don’t “know” facts the way a database does. They predict words that are statistically likely to follow each other, which makes them great at generating fluent text but prone to confident-sounding mistakes, often called hallucinations.
Tax leaders need to understand this isn’t a bug that will soon be patched. It’s the nature of how these models work today. That doesn’t mean they’re unusable, but it does mean the goal shouldn’t be perfection. Instead, the focus should be on managing the risks and setting up safeguards that make AI outputs reliable enough for practical use.
On the technical side, tools like retrieval-augmented generation (RAG) can help by grounding AI answers in trusted documents instead of letting the model make things up. On the process side, though, there’s no way around human review. If the output involves regulations, case law, or financial figures, a qualified professional still needs to check it.
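The retrieval step of RAG can be sketched without any libraries. This is a toy illustration of the idea, not a production design: the documents are invented, the word-overlap scoring stands in for the embedding search a real system would use, and the actual generation call to an LLM API is deliberately omitted.

```python
# Minimal sketch of retrieval-augmented generation: pick the most relevant
# trusted document for a query, then build a prompt that constrains the model
# to answer only from that source instead of inventing an answer.
DOCUMENTS = {
    "vat_guidance.txt": "Standard VAT rate is 20 percent for most goods.",
    "filing_deadlines.txt": "Corporate returns are due nine months after year end.",
}

def retrieve(query, docs):
    """Score docs by word overlap with the query (real systems use embeddings)."""
    q = set(query.lower().split())
    return max(docs, key=lambda name: len(q & set(docs[name].lower().split())))

def grounded_prompt(query, docs):
    """Compose a prompt that grounds the model in the retrieved source."""
    name = retrieve(query, docs)
    return (f"Answer using ONLY the source below. If the answer is not in it, "
            f"say so.\n\nSource ({name}): {docs[name]}\n\nQuestion: {query}")

print(grounded_prompt("What is the standard VAT rate?", DOCUMENTS))
```

The instruction to refuse when the source lacks the answer is the point of the pattern: it narrows the model's room to hallucinate, though human review of regulatory or numerical output is still required, as noted above.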
The real shift is in how we think about AI. Waiting for a system that’s 100% accurate isn’t realistic. The smarter approach is to design workflows where AI handles the heavy lifting and humans handle the judgment calls. When you set it up that way, AI doesn’t have to be perfect, just reliable enough to speed things up without taking control out of human hands.
The Human Side of AI
For all the talk about data and algorithms, one of the biggest obstacles to AI adoption in tax departments may be people. Employees often view new technology as a threat, either to their jobs or to the way they’ve always worked. Fear of being replaced, or simple distrust in an unfamiliar tool, can stall an AI initiative before it even begins.
AI projects are often pitched as a way to save time and reclaim capacity by shifting people from repetitive, low-value tasks to higher-impact “strategic” work. In theory, that sounds ideal. But here’s the reality: not everyone naturally transitions from manual tasks to strategic ones. Can every compliance specialist suddenly become an advisor? Does the company actually need five more people in strategic roles instead of five handling tax filings?
When a department frees up dozens of hours of compliance work, there has to be a clear plan for how that capacity will be redeployed. Without one, employees are more likely to see AI as a threat than as a tool that supports them. For adoption to succeed, teams need to believe the technology will make their work more valuable and not make their roles redundant.
Pragmatism Over Hype
The promise of AI in tax is real, but so are the pitfalls. Projects rarely stumble because the technology is broken. They stumble because of human, process, and data challenges that get overlooked.
Starting too big. Spreading resources across too many pilots. Ignoring data quality. Clinging to inefficient processes. Chasing perfection. Failing to bring people along. Any one of these can stall progress.
The way forward isn’t about shiny labels but about small wins that build trust and momentum. And it’s about shifting expectations. For tax departments, success won’t come from doing everything at once. It will come from doing the right things, in the right order, with the right support.
The opinions expressed in this article are those of the author and do not necessarily reflect the views of any organizations with which the author is affiliated.
SMARTSHOOTER Wins Innovation Award for AI-Driven Precision Fire Control Solutions
SMARTSHOOTER won the Innovation Award in the Army Technology Excellence Awards 2025 for its significant advancements in enhancing small arms accuracy and operational effectiveness through the integration of artificial intelligence and modular technology.
The Army Technology Excellence Awards honor the most significant achievements and innovations in the defense industry. Powered by GlobalData’s business intelligence, the Awards recognize the people and companies leading positive change and shaping the industry’s future.
SMARTSHOOTER’s SMASH fire control technology has been recognized in the Precision Fire Control category, reflecting the company’s approach to integrating artificial intelligence (AI), computer vision, and advanced algorithms into compact, scalable fire control systems that address evolving operational challenges for ground forces.
AI-enabled precision enhances small arms accuracy
Hitting moving or distant targets has traditionally relied on a soldier’s skill and experience. SMARTSHOOTER’s SMASH system changes that equation by using real-time image processing and AI-driven tracking. For instance, when troops face fast-moving evasive threats such as small drones (sUAS), SMASH can automatically lock onto the target, calculate ballistic trajectories, and release the shot only when a hit is assured. This improves hit accuracy during intense battle situations and reduces collateral damage.

The technology has proven valuable against aerial threats that are difficult to engage with conventional optics or unaided marksmanship. Field reports from the Israel Defense Forces (IDF) and U.S. military units show that SMASH-equipped rifles have been effective in neutralizing drones that might otherwise evade traditional countermeasures. By transforming standard infantry weapons into precision platforms, SMARTSHOOTER has addressed a critical gap in dismounted force protection.
Modular and scalable solutions for different missions
The SMASH product family is built to fit a variety of operational needs and designed for seamless integration with existing force structures. The SMASH 2000L and 3000 models mount directly onto standard rifles without adding much weight or bulk, making them practical for soldiers on patrol. For situations where longer range or better situational awareness is needed, the SMASH X4 adds a four-times magnifying optic and a laser rangefinder to its AI targeting features.
SMARTSHOOTER’s SMASH family of fire control systems also includes the SMASH Hopper, a remote weapon station that can be mounted on vehicles, unmanned platforms, or static defensive positions. It connects with external sensors or C4I (Command, Control, Communications, Computers & Intelligence) systems and can be operated via wired or wireless links. This flexibility means units can use SMASH technology in everything from urban patrols to border security while staying connected with larger command networks.

Operational validation and international adoption
SMARTSHOOTER’s technology is deployed across multiple armed forces, and its performance has been demonstrated in live operational environments. The IDF has deployed SMASH systems across all infantry brigades to counter drone and ground threats along sensitive borders and in battle. Similarly, U.S. Army and Marine Corps units have added SMASH to their counter-drone arsenals following rigorous evaluation by organizations such as the Joint Counter-sUAS Office (JCO) and Irregular Warfare Technical Support Directorate (IWTSD).
These deployments are not limited to controlled trials; they reflect ongoing use in active conflict zones where reliability is crucial. Users have reported higher engagement success rates against both aerial and ground targets, even when facing complex threats like drone swarms or armed quadcopters. The awarding of multi-million-dollar contracts by defense agencies further demonstrates confidence in the system’s capabilities.

Beyond Israel and the United States, SMARTSHOOTER’s solutions have been adopted by NATO partners in Europe, including Germany, the UK, and the Netherlands, as well as by security forces in Asia-Pacific. This broad uptake shows that militaries worldwide see value in this approach to modern battlefield challenges.
Human-in-the-Loop targeting supports ethical use of AI
A distinguishing aspect that sets SMARTSHOOTER apart is its focus on keeping humans in control of engagement decisions. While automation helps with tracking and aiming, operators always make the final call. The technology provides visual cues, such as target locks or shot timing indicators, but never fires autonomously.
This approach aligns with evolving international norms regarding responsible use of AI in defense applications. It ensures that accountability remains with trained personnel rather than algorithms alone, a consideration increasingly scrutinized by policymakers and military leaders alike. By embedding these safeguards into its products from inception, SMARTSHOOTER has addressed both operational needs and ethical concerns associated with next-generation fire control systems.

“We are honored to receive this recognition. This achievement reflects the proven value of our SMASH fire control systems and their ability to transform conventional small arms into precision tools against modern threats, including drones. Deployed by leading military forces worldwide, SMASH continues to enhance operational effectiveness at the squad level, and we remain committed to driving innovation that meets the evolving needs of today’s battlefield.”
– Michal Mor, CEO of SMARTSHOOTER
Company Profile
SMARTSHOOTER is a world-class designer, developer, and manufacturer of innovative fire control systems that significantly increase hit accuracy. With a rich record in designing unique solutions for the warfighter, SMARTSHOOTER technology enhances mission effectiveness through the ability to accurately engage and eliminate ground, aerial, static, or moving targets, including drones, during both day and night operations.
Designed to help military and law enforcement professionals swiftly and accurately neutralize their targets, the company’s combat-proven SMASH Family of Fire Control Systems increases assault rifle lethality while keeping friendly forces safe and reducing collateral damage. The company’s experienced engineers combine electro-optics, computer vision technologies, real-time embedded software, ergonomics, and system engineering to provide cost-effective, easy-to-use solutions for modern conflicts.
Fielded and operational by forces in the US, UK, Israel, NATO countries, and others, SMARTSHOOTER’s SMASH family of solutions provides end-users with a precise hit capability across multiple mission areas, creating a significant advantage for the infantry soldier and ultimately revolutionizing the world of small arms and optics.
SMARTSHOOTER’s headquarters are based in Yagur, Israel. The company has subsidiary companies in Europe, the US, and Australia.
Contact Details
E-mail: info@smart-shooter.com
Links
Website: www.SMART-SHOOTER.com
Augustana University announces AI expert, bestselling author as Critical Inquiry & Citizenship Colloquium speaker

Sept. 16, 2025
This piece is sponsored by Augustana University.
Augustana University’s third annual Critical Inquiry & Citizenship Colloquium will culminate with Dr. Joy Buolamwini as the featured speaker.
Buolamwini, bestselling author, MIT researcher and founder of the Algorithmic Justice League, will give a keynote presentation to the Augustana community, alumni and friends at 4 p.m. Oct. 25 in the Elmen Center, with a book signing to follow.
Generously supported by Rosemarie and Dean Buntrock and in partnership with Augustana’s Center for Western Studies, the Critical Inquiry & Citizenship Colloquium was established in 2023. The colloquium is designed to promote civil discourse and deep reflection with the goal of enhancing students’ skills to think critically and communicate persuasively as citizens of a pluralistic society.
“In an era of unprecedented technological advancement, Dr. Buolamwini’s insights urge us to consider not only the capabilities of artificial intelligence but its ethical implications. Her participation in this year’s colloquium invites meaningful dialogue around integrity, responsibility and the human experience,” Augustana President Stephanie Herseth Sandlin said.
In addition to being a researcher, model and artist, Buolamwini is the author of the U.S. bestseller “Unmasking AI: My Mission To Protect What Is Human in a World of Machines.”
Buolamwini’s research on facial recognition technologies transformed the field of AI auditing. She advises world leaders on preventing AI harm and lends her expertise to congressional hearings and government agencies seeking to enact equitable and accountable AI policy.
Buolamwini’s TEDx Talk on algorithmic bias has almost 1.9 million views, and her TED AI Talk on protecting human rights in an age of AI pushes the boundaries of the format.
As the “Poet of Code,” she also creates art to illuminate the impact of AI on society, with her work featured in publications such as Time, The New York Times, Harvard Business Review, Rolling Stone and The Atlantic. Her work as a spokesmodel also has been featured in Vogue, Allure, Harper’s Bazaar and People. She is the protagonist of the Emmy-nominated documentary “Coded Bias.”
Buolamwini is the recipient of notable awards, including the Rhodes Scholarship, Fulbright Fellowship, Morals & Machines Prize, as well as the Technological Innovation Award from the King Center. She was selected as a 2022 Young Global Leader, one of the world’s most promising leaders younger than 40 as determined by The World Economic Forum, and Fortune named her the “conscience of the AI revolution.”
“Many associate AI with advancement and intrigue. Dr. Buolamwini invites cognitive dissonance by demonstrating the potentials for harm caused by the unexamined use of AI,” said Dr. Shannon Proksch, assistant professor of psychology and neuroscience at Augustana.
“Dr. Buolamwini’s visit will invite the Augustana community to engage in critical thinking and deep reflection around how algorithmic technology intersects with our lives and society as a whole. Her work embodies the goals of the Critical Inquiry & Citizenship Colloquium by challenging us to acknowledge the human impact of AI and remain vigilant about the role that we play in ensuring that these technologies do more to benefit and strengthen our communities than to harm them.”
“AI is the most powerful and disruptive technology of our time, so we’re very excited to bring Dr. Buolamwini to Sioux Falls. She’s an engaging and dynamic speaker whose research and life experience have given her deep insight into how we can ensure that AI is used to promote the flourishing of all,” said Dr. Stephen Minister, Stanley L. Olsen Chair of Moral Values and professor of philosophy at Augustana.
Tickets for the 2025 Critical Inquiry & Citizenship Colloquium are free and available to the public at augie.edu/CICCTickets.
About the Critical Inquiry & Citizenship Colloquium
In partnership with the Center for Western Studies and supported by Rosemarie and Dean Buntrock, this annual one- or two-day colloquium is intended to feature faculty scholars and students, as well as industry, research and policy experts who inspire and facilitate critical thinking, persuasive reasoning and thoughtful discussion around timely and engaging topics in areas ranging from religion, science and politics to history, technology and business. The colloquium kicks off or culminates in a keynote given by thought leaders of national or global prominence.