
Tools & Platforms

HBCUs in the AI Era



In the lead-up to today’s artificial intelligence (AI) arms race, few institutions of learning have matched the vision and innovation of Historically Black Colleges and Universities (HBCUs).

As Dr. Emmanuel Lalande noted, HBCUs are not late to AI—they are “leaders in using algorithms, neural networks, and digital dashboards to turn historic exclusion into future empowerment.”

AI now leads every conversation across America. But HBCUs didn’t just join in: they have been integrating AI and machine learning into their computer science curricula for decades.

Five years ago, while others debated the value of virtual reality, HBCUs demonstrated how immersive, intelligent environments could uplift learners, catalyze careers, and build resilient communities.

While much of the world hesitated at the edge of the digital unknown, HBCUs boldly entered the Metaverse, planting their flag with the launch of the HBCU Village in STEM City USA.

In partnership with Career Communications Group (CCG) and its pioneering platform STEM City USA, HBCUs launched the HBCU Village, an immersive environment inside the Educational Discovery Center.

The HBCU Village wasn’t just about presence in the Metaverse. It was a strategic prototype for what inclusive, AI-infused education could look like.

In 2020, when global uncertainty loomed and universities struggled to pivot, HBCUs, long experienced in doing more with less, treated the moment not as disruption but as continuity. They were already investing deeply in AI research, workforce development, and cross-sector partnerships, positioning themselves not only as participants in the fourth industrial revolution but as architects of its equity-driven future.

From the Metaverse to Machine Learning, it was a legacy of bold moves.

The HBCU Village in STEM City wasn’t just a virtual campus; it was a radical redesign of how culture, education, and community could intersect in 3D space, featuring:

  • Digital twin campuses of iconic HBCUs
  • AI-enhanced learning modules tailored for STEM disciplines
  • Live-streamed mentorships and virtual career fairs
  • Interactive labs bridging students with industry leaders and federal agencies

Inside the Village, students explored cybersecurity simulations and digital twin urban planning scenarios; faculty held workshops on ethical AI, data justice, and algorithmic accountability; and AI-powered chatbots guided students through academic and career navigation, years before mainstream higher education adopted similar tools. This fusion of AI, immersive media, and culturally rooted design has since influenced digital equity frameworks across industries and agencies.

The HBCU Village was powered by the same AI engines now shaping national defense, healthcare, and workforce readiness.

HBCUs didn’t wait for permission. They didn’t ask for validation. They simply built. Leading voices such as Dr. James DeBardelaben, who has decades of experience providing solutions to the defense and intelligence communities, have proposed initiatives such as HBCU-based AI Centers of Excellence, AI-focused national security pipelines with clearance sponsorship, and AI-driven, project-based curricula aligned with defense and public-sector needs.

“AI must reflect the diversity of experience,” said Dr. DeBardelaben. “And there’s no better place to shape that future than the halls of our HBCUs.”

As the U.S. accelerates its AI capabilities, the untapped potential of HBCUs is being recognized: HBCUs produce a critical mass of U.S.-citizen STEM graduates—ideal for AI jobs that require national security clearance. Deans and industry leaders now advocate for government-funded clearance programs, starting as early as sophomore year. The Department of Defense and National Science Foundation have already increased partnerships for AI innovation in cybersecurity, logistics, and predictive analytics.

Now, as the world races to keep up with the demands of the AI age, the rest of the nation would do well to ask: What can we learn from HBCUs? 

HBCU-driven AI highlights you should know:

  • Morgan State University’s $9 million investment to scale AI and machine learning research
  • Alabama A&M University’s Laboratory for Deep Learning, backed by a $480,000 grant from the Army Research Office
  • Southern University and A&M College’s collaboration with IBM to develop AI solutions addressing public safety, air quality, and transportation challenges
  • Norfolk State University’s hosting of national forums such as the Research and Innovation Symposium (RISE) with the Brookings Institution, addressing AI’s role in equity and national strategy

Looking ahead to 2030, these are the imperatives for building on the momentum:

  • Expand access to AI and immersive learning from K–12 through PhD levels
  • Fund and federate AI Centers of Excellence across HBCU campuses
  • Codify Metaverse-based learning as part of a federal workforce strategy
  • Build policy frameworks for ethical, equitable AI development—grounded in HBCU research

For interviews, data requests, or partnership inquiries related to this article or ongoing AI/Metaverse initiatives at HBCUs, please reach out to:

  • Tyrone D. Taborn – Publisher, USBE Magazine 📧 ttaborn at cgmag.com
  • Dr. James DeBardelaben – AI and National Security Advisor 📧 jdebardelaben at hbcuvanguard.org
  • Dr. Robin N. Coger – Provost, East Carolina University (Former Dean, N.C. A&T Engineering) 📧 rcoger at ecsu.edu
  • Shawna Stepp-Jones – Founder, Divaneering Foundation 📧 shawna at divaneering.org




AI algorithms can detect vision problems years before they actually appear, says ZEISS India



Artificial intelligence (AI) algorithms and other deep technologies can help detect vision problems years before any traces of their symptoms appear, so the future of eye care and good eyesight will rely significantly on predictive and preventive innovations driven by robotics, generative AI, and deep tech, said ZEISS India, a subsidiary of Carl Zeiss AG, the German optics, opto-electronics, and medical technology company.

Traditionally, eye scans relied heavily on human analysis, and significant effort was required to analyse huge volumes of data. “However, AI proposes to aid the clinical community with its ability to analyse huge volumes of data with high accuracy and helps detect anomalies at early stages of disease onset, thereby solving one of the biggest challenges in eye care, late detection, seen in emerging economies, including India,” Dipu Bose, Head, Medical Technology, ZEISS India and Neighbouring Markets, told The Hindu.

For example, he said, conditions like diabetic retinopathy, glaucoma, or macular degeneration often begin with subtle changes in the retina. AI would be able to catch early indicators, even faint traces of these conditions, years before patients become aware of any symptoms, allowing timely action to prevent irreversible blindness.

According to Mr. Bose, AI, as a well-trained partner, would be able to analyse thousands of eye images in seconds with a high degree of accuracy. It learns patterns by analysing massive datasets of eye scans and medical records, becoming able to spot the tiniest changes that the human eye might miss.
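ZEISS does not publish the internals of its models, but the underlying idea described here (learn a statistical baseline from many healthy scans, then flag measurements that deviate sharply) can be illustrated with a toy z-score detector; all measurements below are hypothetical, and a real pipeline would operate on features extracted from retinal images rather than raw scalars:

```python
import statistics

def anomaly_scores(baseline: list[float], readings: list[float]) -> list[float]:
    """Z-score of each new reading against a baseline learned from healthy scans.

    In a real system the readings would be image-derived features
    (e.g. retinal-layer thickness, vessel density); these are toy numbers.
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [(x - mean) / stdev for x in readings]

# Hypothetical retinal-layer thickness (microns) from healthy scans
healthy = [98.0, 101.5, 99.2, 100.8, 97.9, 102.1, 100.3, 99.6]

scores = anomaly_scores(healthy, [100.1, 91.0])
flagged = [s for s in scores if abs(s) > 3.0]  # 3-sigma rule: flag strong outliers
```

The second reading deviates far beyond three standard deviations of the healthy baseline and would be flagged for clinical review, while the first, though not identical to any baseline value, would not.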

Future innovation would rely significantly on predictive and preventive approaches to eye care, where technology would play an essential role in solutions that allow earlier detection, more accurate diagnoses, and tailored treatments, he forecast, adding that Indian eye-care professionals were increasingly adopting new-age technologies to ensure better patient outcomes. As a result, AI, generative AI, robotics, and deep tech were driving a significant shift in clinical outcomes, he observed.

“This is precisely why we call it preventive blindness care. In India, this is becoming increasingly relevant as the majority of the population does not go for regular eye check-ups and visits an eye doctor only when vision is already affected,” Mr. Bose said.

Early intervention would lead to better outcomes, reduced inefficiencies, and lower healthcare costs, he said. “ZEISS contributes to this by advancing medical technologies for diagnosis, surgical interventions, and visualization, ultimately improving patient outcomes and quality of life,” he claimed.

For instance, the ZEISS Surgery Optimiser app is an AI-powered tool that allows young surgeons to learn from uploaded and segmented surgery videos of experienced cataract surgeons. Similarly, in diagnostics, ZEISS is leveraging AI through the Pathfinder solution, an integrated deep learning and AI-based support tool. These technologies can support eye-care professionals in making data-driven decisions by visualising and analysing clinical workflows. They leverage real-time surgical data to help young clinicians identify variations, optimise surgical steps, and improve procedural consistency.

“These insight-driven technologies are expected to help bridge experience gaps, improve surgical confidence, and ultimately enhance patient outcomes across the country,” Mr. Bose anticipated.

However, he added, tackling unmet needs and ensuring early diagnosis of diseases would require a fundamental shift: from reactive care to proactive and precision-driven eye-care. “This means leveraging technology not just to treat but to predict, prevent, and personalise patient care before even the symptoms of the disease show up,” he further said.

The eye-tech market is growing in India: the country’s ophthalmic devices market was worth $943.8 million in 2024 and is expected to reach $1.54 billion by 2033, growing at a 5.23% CAGR. The global eye-tech market was valued at approximately $74.67 billion in 2024 and is projected to reach $110.33 billion by 2030, a CAGR of 6.9%.
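As a quick sanity check on the growth math (figures from the article, arithmetic ours): compounding $943.8 million at 5.23% over the nine years from 2024 to 2033 lands around $1.49 billion, in the same range as the reported $1.54 billion; the small gap presumably reflects rounding in the source estimates.

```python
def cagr_projection(start_value: float, cagr: float, years: int) -> float:
    """Project a value forward under compound annual growth."""
    return start_value * (1.0 + cagr) ** years

# Indian ophthalmic devices market: $943.8M in 2024, 5.23% CAGR through 2033
projected = cagr_projection(943.8, 0.0523, 2033 - 2024)  # ~1493, i.e. about $1.49B
```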

Published – September 06, 2025 11:21 am IST




AI and cybersecurity: India’s chance to set a responsible global digital standard



India’s digital economy is experiencing extraordinary growth, driven by government initiatives, private enterprise, and widespread technological adoption across users from diverse socio-economic backgrounds. Artificial intelligence (AI) is now woven into the fabric of organisational operations, shaping customer interactions, streamlining product development, and enhancing overall agility. Yet, as digitisation accelerates, the nation’s cyber risk landscape is also expanding—fuelled by the very AI innovations that are transforming business.

In a rapidly evolving threat landscape, human error remains a persistent vulnerability. A recent cybersecurity survey revealed that 65% of enterprises worldwide now consider AI-powered email phishing the most urgent risk they face. India’s rapidly growing digital user base and surging data volumes create an environment of heightened risk.

Yet, there’s a strong opportunity for India to leverage its unique technical strengths to lead global conversations on secure, ethical, and inclusive digital innovation. By championing responsible AI and cybersecurity, the country can establish itself not only as a global leader but also as a trusted hub for safe digital solutions.

The case for a risk-aware, innovation-led approach

While AI is strengthening security measures with rapid anomaly detection, automated responses, and cost-efficient scalability, these same advancements are also enabling attackers to move faster and deploy increasingly sophisticated techniques to evade defences. The survey shows that 31% of organisations that experienced a breach faced another within three years, underscoring the need for ongoing, data-driven vigilance.

Globally, regulators are deliberating on ensuring greater AI accountability, frameworks with tiered risk assessments, data traceability, and demands for transparent decision-making, as seen in the EU AI Act, the National Institute of Standards and Technology’s AI Risk Management Framework in the US, and the Ministry of Electronics and Information Technology’s AI governance guidelines in India.

India’s digital policy regime is evolving with the enactment of the Digital Personal Data Protection Act and other reforms. Its globally renowned IT services sector, increasing cloud adoption, and digital solutions at population scale are use cases for nations to leapfrog in their digital transformation journey. However, there is a continued need for collaboration for consistent standards, regulatory frameworks, and legislation. This approach can empower Indian developers as they build innovative and compliant solutions with the agility to serve Indian and global markets.

Smart AI security: growing fast, staying steady

The survey highlights that more than 90% of surveyed enterprises are actively adopting secure AI solutions, underscoring the high value organisations place on AI-driven threat detection. As Indian companies expand their digital capabilities with significant investments, security operations are expected to scale efficiently. Here, AI emerges as an essential ally, streamlining security centres’ operations, accelerating response time, and continuously monitoring hybrid cloud environments for unusual patterns in real time.
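The kind of real-time monitoring described above can be illustrated with a toy rolling-baseline detector; this is a minimal sketch, the metric, window, and threshold are hypothetical, and production security tooling uses far richer models:

```python
from collections import deque

class StreamingAnomalyDetector:
    """Flag readings that deviate sharply from a rolling baseline.

    A toy illustration of real-time monitoring: keep a sliding window of
    recent values and alert when a new value lies far outside it.
    """
    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous versus recent history."""
        anomalous = False
        if len(self.history) >= 10:  # wait for enough history
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = var ** 0.5 or 1.0  # guard against a zero-variance window
            anomalous = abs(value - mean) > self.threshold * std
        self.history.append(value)
        return anomalous

# Hypothetical metric: failed logins per minute in a cloud environment
detector = StreamingAnomalyDetector()
normal_traffic = [20, 22, 19, 21, 20, 23, 18, 21, 20, 22, 19, 21]
alerts = [detector.observe(v) for v in normal_traffic] + [detector.observe(400)]
```

Steady traffic around 20 events per minute produces no alerts, while the sudden spike to 400 is flagged immediately; real deployments layer many such signals with learned models rather than a single threshold.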

Boardroom alignment and cross-sector collaboration

One encouraging trend is the increasing involvement of executive leadership in cybersecurity. More boards are forming dedicated cyber-risk subcommittees and embedding risk discussions into broader strategic conversations. In India too, this shift is gaining momentum as regulatory expectations rise and digital maturity improves.

With the lines between IT, business, and compliance blurring, collaborative governance is becoming essential. The report states that 58% of organisations view AI implementation as a shared responsibility between executive leadership, privacy, compliance, and technology teams. This model, if institutionalised across Indian industry, could ensure AI and cybersecurity decisions are inclusive, ethical, and transparent.

Moreover, public-private partnerships — especially in areas like cyber awareness, standards development, and response coordination — can play a pivotal role. The Indian Computer Emergency Response Team (CERT-In), a national nodal agency with the mission to enhance India’s cybersecurity resilience by providing proactive threat intelligence, incident response, and public awareness, has already established itself as a reliable incident response authority.

A global opportunity for India

In many ways, the current moment is a call to create the conditions and the infrastructure to lead securely in the digital era. By leveraging its vast resource of engineering talent, proven capabilities in scalable digital infrastructure, and a culture of economical innovation, India can not only safeguard its own digital future but also help shape global norms for ethical AI deployment. This is India’s moment to lead — not just in technology, but in trust.

This article is authored by Saugat Sindhu, Global Head – Advisory Services, Cybersecurity & Risk Services, Wipro Limited.

Disclaimer: The views expressed in this article are those of the author/authors and do not necessarily reflect the views of ET Edge Insights, its management, or its members




Nvidia says GAIN AI Act would restrict competition, likens it to AI Diffusion Rule




Nvidia said on Friday the GAIN AI Act would restrict global competition for advanced chips, with effects on U.S. leadership and the economy similar to those of the AI Diffusion Rule, which put limits on the computing power countries could have.

Short for Guaranteeing Access and Innovation for National Artificial Intelligence Act, the GAIN AI Act was introduced as part of the National Defense Authorization Act and stipulates that AI chipmakers prioritize domestic orders for advanced processors before supplying them to foreign customers.

“We never deprive American customers in order to serve the rest of the world. In trying to solve a problem that does not exist, the proposed bill would restrict competition worldwide in any industry that uses mainstream computing chips,” an Nvidia spokesperson said.

If passed into law, the bill would enact new trade restrictions mandating exporters obtain licenses and approval for the shipments of silicon exceeding certain performance caps.

“It should be the policy of the United States and the Department of Commerce to deny licenses for the export of the most powerful AI chips, including such chips with total processing power of 4,800 or above and to restrict the export of advanced artificial intelligence chips to foreign entities so long as United States entities are waiting and unable to acquire those same chips,” the legislation reads.

The rules mirror some conditions under former U.S. President Joe Biden’s AI diffusion rule, which allocated certain levels of computing power to allies and other countries.

The AI Diffusion Rule and the GAIN AI Act are attempts by Washington to prioritise American needs, ensuring domestic firms gain access to advanced chips while limiting China’s ability to obtain high-end tech amid fears that the country would use AI capabilities to supercharge its military.

Last month, U.S. President Donald Trump made an unprecedented deal with Nvidia to give the government a cut of its sales in exchange for resuming exports of banned AI chips to China.


