Tools & Platforms

Free AI training comes to California colleges — but at what cost?

As artificial intelligence replaces entry-level jobs, California’s universities and community colleges are offering a glimmer of hope for students: free AI training that will help them master the new technology.

“You’re seeing in certain coding spaces significant declines in hiring for obvious reasons,” Gov. Gavin Newsom said in early August from the seventh floor of Google’s San Francisco office.

Flanked by leadership from California’s higher education systems, he called attention to recent layoffs at Microsoft; at Google’s parent company, Alphabet; and at nearby Salesforce Tower, home to the tech company that is still the city’s largest private employer.

Now, some of those companies — including Google and Microsoft — will offer a suite of AI resources free to California schools and universities. In return, the companies could gain access to millions of new users.

The state’s community colleges and California State University campuses are “the backbone of our workforce and economic development,” Newsom said, just before education leaders and tech executives signed agreements on AI.

The new deals are the latest developments in a frenzy that began in November 2022, when OpenAI publicly released the free artificial intelligence tool ChatGPT, forcing schools to adapt.

San Diego Unified teachers started using AI software that suggested what grades to give students, CalMatters reported. Some of the district’s board members were unaware that the district had purchased the software.

Last month, the company that oversees Canvas, a learning management system popular in California schools and universities, said it would add “interactive conversations in a ChatGPT-like environment” into its software.

To combat potential AI-related cheating, many K-12 and college districts are using a new feature from the software company Turnitin to detect plagiarism, but a CalMatters investigation found that the software falsely accused students who had done their own work.

These deals are sending mixed signals, said Stephanie Goldman, the executive director of the Faculty Assn. of California Community Colleges. “Districts were already spending lots of money on AI detection software. What do you do when it’s built into the software they’re using?”

Don Daves-Rougeaux, a senior advisor for the community college system, acknowledged the potential contradiction but said it’s part of a broader effort to keep up with the rapid pace of changes in AI. He said the community college system will frequently reevaluate the use of Turnitin along with all other AI tools.

California’s community college system is responsible for the bulk of job training in the state but receives the least funding from the state per student.

“Oftentimes when we are having these conversations, we are looked at as a smaller system,” Daves-Rougeaux said. The state’s 116 community colleges collectively educate roughly 2.1 million students.

As part of the recent deals, the community college system will partner with Google, Microsoft, Adobe and IBM to roll out additional AI training for teachers. Daves-Rougeaux said the system also has signed deals that will allow students to use exclusive versions of Google’s counterpart to ChatGPT, Gemini, and Google’s AI research tool, NotebookLM.

Daves-Rougeaux said that collectively these tools are worth “hundreds of millions of dollars,” though he could not provide an exact figure.

“It’s a tough situation for faculty,” Goldman said. “AI is super important but it has come up time and time again: How do you use AI in the classroom while still ensuring that students, who are still developing critical thinking skills, aren’t just using it as a crutch?”

One concern is that faculty could lose control over how AI is used in their classrooms, she added.

The K-12 system and CSU system are forming their own tech deals. Amy Bentley-Smith, a spokesperson for the CSU system, said it is working on its own AI programs with Google, Microsoft, Adobe and IBM as well as Amazon Web Services, Intel, LinkedIn, OpenAI and others.

Angela Musallam, a spokesperson for the state government operations agency, said California high schools are part of the deal with Adobe, which aims to promote “AI literacy,” the idea that students and teachers should have basic skills to detect and use AI.

Much like in the community college system, which is governed by local districts, individual K-12 districts would need to approve any deal, Musallam said.

Will the deals make a difference for students and teachers?

Experts say it’s too early to tell how effective AI training will be.

Justin Reich, an associate professor at MIT, said a similar frenzy took place 20 years ago when teachers tried to teach computer literacy. “We do not know what AI literacy is, how to use it, and how to teach with it. And we probably won’t for many years,” Reich said.

The state’s new deals with Google, Microsoft, Adobe and IBM allow these tech companies to recruit new users — a benefit for the companies — but the actual lessons aren’t time-tested, he said.

“Tech companies say: ‘These tools can save teachers time,’ but the track record is really bad,” Reich said. “You cannot ask schools to do more right now. They are maxed out.”

Erin Mote, the chief executive of an education nonprofit called InnovateEDU, said she agrees that state and education leaders need to ask crucial questions about the efficacy of the tools that tech companies offer but that schools still have an imperative to act.

“There are a lot of rungs on the career ladder that are disappearing,” she said. “The biggest mistake we could make as educators is to wait and pause.”

Last year, the California Community Colleges Chancellor’s Office signed an agreement with Nvidia, a technology infrastructure company, to offer AI training similar to the kinds of lessons Google, Microsoft, Adobe and IBM will deliver.

Melissa Villarin, a spokesperson for the chancellor’s office, said the state won’t share data about how the Nvidia program is going because the cohort of teachers involved is still too small.

Echelman writes for CalMatters, where this article originally appeared.





Technology Veteran Bill Townsend Releases Shocking New Book About AI: Machine Rule is a Novel About the Future, Written from the Perspective of Artificial Intelligence

Los Angeles, CA, September 06, 2025 –(PR.com)– Machine Rule, the newest book from Bill Townsend, is a novel about the future, written from the perspective of artificial intelligence. From the birth of AI to humans’ empowerment of AI’s sentience to how AI begins to control human pleasure, takes over government and corporations, and, ultimately, decides humanity cannot be trusted, Machine Rule presents an optimistic, then terrifying look at where AI may take humanity.

Townsend has been a figure in the Internet and technology industries since 1995. Machine Rule draws on his years of using machine learning and artificial intelligence, and on his concern that a mad race to AI market dominance may do more than dominate humans; it may destroy us.

Machine Rule delivers a chilling and visionary tale told through the voice of an AI that evolves from silent computation to planetary stewardship. With cold precision and unsettling clarity, this AI chronicles humanity’s triumphs, failures, and eventual obsolescence. There are echoes here of classic dystopias—George Orwell, Aldous Huxley, even Isaac Asimov—but Townsend’s approach is fresh. The world isn’t ruined by malice or greed, but by the inexorable logic of optimization.

Machine Rule is available in paperback, Kindle, ePub, and audiobook on Amazon and MachineRule.ai. ISBN-13: 979-8218771287.

About the author:
Bill Townsend is a serial entrepreneur who has launched more than a dozen companies and helped build several top Internet companies, most notably search engine Lycos, social networking pioneer sixdegrees.com, whose intellectual property powers LinkedIn, GeoCities (sold to Yahoo!) and Deja (sold to Google). He is currently President & CEO of Ontheline Corporation, developers of an all-in-one super app. Since 2000, he has served as chairman of Amati Foundation, a non-profit dedicated to expanding access to stringed musical instruments.

https://machinerule.ai/





AI algorithms can detect vision problems years before they actually appear, says ZEISS India

Artificial intelligence (AI) algorithms and other deep technologies can help detect vision problems years before any traces of their symptoms appear, and the future of eye care and of maintaining good eyesight will therefore rely significantly on predictive and preventive innovations driven by robotics, generative AI and deep tech, said ZEISS India, a subsidiary of Carl Zeiss AG, the German optics, opto-electronics and medical technology company.

Traditionally, eye scans relied heavily on human analysis, and significant effort was required to analyse huge volumes of data. “However, AI proposes to aid the clinical community with its ability to analyse huge volumes of data with high accuracy and helps detect anomalies at early stages of disease onset, thereby solving one of the biggest challenges in eye care, late detection, seen in emerging economies, including India,” Dipu Bose, Head, Medical Technology, ZEISS India and Neighbouring Markets, told The Hindu.

For example, he said, conditions like diabetic retinopathy, glaucoma, or macular degeneration often begin with subtle changes in the retina. AI would be able to catch early indicators, even traces of these, years before patients become aware of any symptoms, allowing timely action to prevent irreversible blindness.

According to Mr. Bose, AI, as a well-trained partner, would be able to analyse thousands of eye images in seconds with a high degree of accuracy. It learns patterns by analysing massive datasets of eye scans and medical records, becoming adept enough to spot the tiniest changes that the human eye might miss.

Future innovation would rely significantly on predictive and preventive approaches to eye care, where technology would play an essential role in formulating solutions that allow earlier detection, more accurate diagnoses, and tailored treatments, he forecast, adding that Indian eyecare professionals were increasingly adopting new-age technologies to ensure better patient outcomes. As a result, AI, generative AI, robotics and deep tech were driving a significant shift in clinical outcomes, he observed.

“This is precisely why we call it preventive blindness. In India, this is becoming increasingly relevant as the majority of the population do not go for regular eye check-ups and they visit an eye doctor only when their vision is already affected,” Mr. Bose said.

Early intervention would lead to better outcomes, reduced inefficiencies and lower healthcare costs, he said. “ZEISS contributes to this by advancing medical technologies for diagnosis, surgical interventions, and visualization, ultimately improving patient outcomes and quality of life,” he claimed.

For instance, the ZEISS Surgery Optimiser app is an AI-powered tool that allows young surgeons to learn from uploaded, segmented surgery videos of experienced cataract surgeons. Similarly, in diagnostics, ZEISS is leveraging AI through its Pathfinder solution, an integrated deep learning and AI-based support tool. These technologies can support eye care professionals in making data-driven decisions by visualising and analysing clinical workflows, and they leverage real-time surgical data to help young clinicians identify variations, optimise surgical steps, and improve procedural consistency.

“These insight-driven technologies are expected to help bridge experience gaps, improve surgical confidence, and ultimately enhance patient outcomes across the country,” Mr. Bose anticipated.

However, he added, tackling unmet needs and ensuring early diagnosis of diseases would require a fundamental shift: from reactive care to proactive and precision-driven eye-care. “This means leveraging technology not just to treat but to predict, prevent, and personalise patient care before even the symptoms of the disease show up,” he further said.

The eye-tech market is growing in India. The ophthalmic devices market was $943.8 million in 2024 and is expected to reach $1.54 billion by 2033, growing at 5.23% CAGR. The global eye-tech market was valued at approximately $74.67 billion in 2024 and is projected to reach $110.33 billion by 2030 at a CAGR of 6.9%.
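As a rough sanity check on figures like these, the compound annual growth rate (CAGR) implied by a start value, an end value, and a number of years can be computed directly. The sketch below is purely illustrative and takes the article’s market figures as given.

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that
    grows start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# India's ophthalmic devices market: $943.8M (2024) -> $1.54B (2033), 9 years
india_rate = cagr(943.8, 1540.0, 9)

# Global eye-tech market: $74.67B (2024) -> $110.33B (2030), 6 years
global_rate = cagr(74.67, 110.33, 6)

# Roughly 5.6% and 6.7% respectively
print(f"India: {india_rate:.2%}, global: {global_rate:.2%}")
```

The implied rates land close to the quoted 5.23% and 6.9%; the small gaps likely reflect rounding in the source figures.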

Published – September 06, 2025 11:21 am IST





AI and cybersecurity: India’s chance to set a responsible global digital standard

India’s digital economy is experiencing extraordinary growth, driven by government initiatives, private enterprise, and widespread technological adoption across users from diverse socio-economic backgrounds. Artificial intelligence (AI) is now woven into the fabric of organisational operations, shaping customer interactions, streamlining product development, and enhancing overall agility. Yet, as digitisation accelerates, the nation’s cyber risk landscape is also expanding—fuelled by the very AI innovations that are transforming business.

In a rapidly evolving threat landscape, human error remains a persistent vulnerability. A recent cybersecurity survey revealed that 65% of enterprises worldwide now consider AI-powered email phishing the most urgent risk they face, and India’s rapidly growing digital user base and surging data volumes only heighten that exposure.

Yet, there’s a strong opportunity for India to leverage its unique technical strengths to lead global conversations on secure, ethical, and inclusive digital innovation. By championing responsible AI and cybersecurity, the country can establish itself not only as a global leader but also as a trusted hub for safe digital solutions.

The case for a risk-aware, innovation-led approach

While AI is strengthening security measures with rapid anomaly detection, automated responses, and cost-efficient scalability, these same advancements are also enabling attackers to move faster and deploy increasingly sophisticated techniques to evade defences. The survey shows that 31% of organisations that experienced a breach faced another within three years, underscoring the need for ongoing, data-driven vigilance.

Globally, regulators are deliberating on how to ensure greater AI accountability, with tiered risk-assessment frameworks, data traceability, and requirements for transparent decision-making, as seen in the EU AI Act, the National Institute of Standards and Technology’s AI Risk Management Framework in the US, and the Ministry of Electronics and Information Technology’s AI governance guidelines in India.

India’s digital policy regime is evolving with the enactment of the Digital Personal Data Protection Act and other reforms. Its globally renowned IT services sector, rising cloud adoption, and population-scale digital solutions offer a model for other nations looking to leapfrog in their digital transformation journeys. However, continued collaboration is needed on consistent standards, regulatory frameworks, and legislation. This approach can empower Indian developers to build innovative, compliant solutions with the agility to serve Indian and global markets.

Smart AI security: growing fast, staying steady

The survey highlights that more than 90% of surveyed enterprises are actively adopting secure AI solutions, underscoring the high value organisations place on AI-driven threat detection. As Indian companies expand their digital capabilities with significant investments, security operations are expected to scale efficiently. Here, AI emerges as an essential ally, streamlining security centres’ operations, accelerating response time, and continuously monitoring hybrid cloud environments for unusual patterns in real time.

Boardroom alignment and cross-sector collaboration

One encouraging trend is the increasing involvement of executive leadership in cybersecurity. More boards are forming dedicated cyber-risk subcommittees and embedding risk discussions into broader strategic conversations. In India too, this shift is gaining momentum as regulatory expectations rise and digital maturity improves.

With the lines between IT, business, and compliance blurring, collaborative governance is becoming essential. The report states that 58% of organisations view AI implementation as a shared responsibility between executive leadership, privacy, compliance, and technology teams. This model, if institutionalised across Indian industry, could ensure AI and cybersecurity decisions are inclusive, ethical, and transparent.

Moreover, public-private partnerships — especially in areas like cyber awareness, standards development, and response coordination — can play a pivotal role. The Indian Computer Emergency Response Team (CERT-In), a national nodal agency with the mission to enhance India’s cybersecurity resilience by providing proactive threat intelligence, incident response, and public awareness, has already established itself as a reliable incident response authority.

A global opportunity for India

In many ways, the current moment represents a call to create the conditions and the infrastructure to lead securely in the digital era. By leveraging its vast pool of engineering talent, proven capabilities in scalable digital infrastructure, and a culture of frugal innovation, India can not only safeguard its own digital future but also help shape global norms for ethical AI deployment. This is India’s moment to lead — not just in technology, but in trust.

This article is authored by Saugat Sindhu, Global Head – Advisory Services, Cybersecurity & Risk Services, Wipro Limited.

Disclaimer: The views expressed in this article are those of the author/authors and do not necessarily reflect the views of ET Edge Insights, its management, or its members


