
AI Research

UP, Wits get R35m from Google to drive AI innovation



James Manyika, senior vice-president for research, labs, technology and society at Google.

South African institutions of higher learning – the University of Pretoria (UP) and the University of the Witwatersrand (Wits) – have received R35 million in funding from US-based search giant Google to drive artificial intelligence (AI) research.

The grants follow Google’s announcement of a wave of funding and partnerships to expand AI research, talent development and infrastructure across Africa.

According to the firm, this includes $37 million in cumulative funding, as well as the opening of the AI Community Centre in Accra – a platform for inclusive AI collaboration, learning and innovation.

Funding in South Africa includes $1 million (R17 million) to UP’s African Institute for Data Science and Artificial Intelligence (AfriDSAI).

AfriDSAI is a transdisciplinary institute based at UP that brings together researchers, technologists and communities to reimagine how science and AI can work for Africa.

A further $1 million (R18 million) went to the Wits Machine Intelligence and Neural Discovery Institute (MIND) to help establish Africa’s leadership in global AI research.

MIND is an interdisciplinary AI research hub that pushes the frontiers of scientific understanding of machine, human and animal intelligence. It focuses on fundamental AI research that promotes breakthrough scientific discoveries and aims to grow a much-needed critical mass of AI expertise on the continent.

To further empower innovation, Google is also launching a catalytic funding initiative to support AI-driven African start-ups tackling real-world challenges.

It says this platform will combine philanthropic capital, venture investment and Google’s technical expertise to help more than 100 early-stage ventures scale AI-based solutions in agriculture, healthcare, education and other vital sectors.

The start-ups will also receive mentorship, access to tools and technical guidance to support responsible development, it adds.

Speaking about the announcements, James Manyika, senior vice-president for research, labs, technology and society at Google, says: “Africa is home to some of the most important and inspiring work in AI today.

“We are committed to supporting the next wave of innovation through long-term investment, local partnerships and platforms that help researchers and entrepreneurs build solutions that matter.”

Yossi Matias, vice-president of engineering and research at Google, adds: “This new wave of support reflects our belief in the talent, creativity and ingenuity across the continent. By building with local communities and institutions, we’re supporting solutions that are rooted in Africa’s realities and built for global impact.”





Microsoft and OpenAI reach agreement on… something – Computerworld



“OpenAI’s decision to recast its for-profit arm as a public benefit corporation while keeping control in the hands of its nonprofit parent is without precedent at this scale,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “It is a structure that fuses two competing logics: the relentless need to raise capital for models that cost billions to train, and the equally strong pressure from regulators, investors, and the public to demonstrate accountability.”

The proposed structure enables OpenAI to attract traditional investors who expect potential returns, while the nonprofit parent can ensure safety considerations aren’t sacrificed for profit. However, Gogia warned the hybrid model is “as much a gamble as it is an innovation,” noting concerns about “who carries liability if something goes wrong, whether fiduciary duty to investors will override social commitments when revenues are threatened.”

Charlie Dai, VP and principal analyst at Forrester, said the structure “could influence others because it balances capital access with mission-driven oversight” but warned that “regulatory scrutiny, lawsuits, and governance complexity introduce uncertainty around decision-making speed and long-term stability.”






How Artificial Intelligence Is Revolutionizing Workplace Ergonomics and Safety



Ergonomic assessments can help organizations create safer work environments. The use of artificial intelligence in this field is poised to create even better results, if managed properly.

Workplace injuries continue to impose significant costs on businesses, with ergonomic-related injuries representing a substantial portion of workers’ compensation claims. As companies seek more efficient ways to protect their workforce, artificial intelligence is emerging as a transformative tool that can accelerate assessments while maintaining the human expertise that makes ergonomic programs effective.

“As an ergonomist, doing job task assessments took a long time. We saw AI as a tool that could help us speed up the back end, specifically the measuring and quantifying risk of awkward postures and forceful exertions,” said Michael White, Managing Director of Ergonomics at The Hartford.

“I think any ergonomist would agree that using these technologies now makes our lives way more efficient to screen for and visualize ergonomics-related exposures, and allows us to focus on what is important: helping customers or organizations improve work environments to make people healthier and less susceptible to injury.”

White’s team has integrated various AI tools into their assessment processes, discovering both significant opportunities and important limitations along the way.

The integration of artificial intelligence into ergonomic assessments varies significantly depending on the work environment. White explained that The Hartford takes different approaches for office versus industrial settings, with each presenting unique opportunities for AI implementation.

“At The Hartford, we’re working on different approaches for office ergonomics versus industrial ergonomics. We’ve focused more on the industrial space since that’s where the higher costs and exposures typically occur. However, for office environments, AI can be particularly effective,” White said.

Office environments offer more controlled conditions that make them ideal candidates for AI-powered assessments. Most office workstations feature adjustable chairs and standardized equipment, creating consistent parameters that AI systems can more easily evaluate.

“Office settings are more controllable compared to factories, warehouses, or distribution centers because most workstations have adjustable chairs and standardized equipment. We’re currently developing an AI-based office assessment tool with one of our vendors that captures images of the workspace,” White said.

This office assessment tool guides users through a series of questions to identify the positioning of keyboards, mice, monitors, and chair height. The AI then analyzes workspace photos to provide specific ergonomic recommendations, such as lowering monitors, raising chair height, or adjusting desk elevation.

“The goal is to create a hands-off approach where AI and machine learning can handle the assessment process independently, which is currently in development for office environments,” White said.

Carriers Crave Efficiency

For insurance companies like The Hartford, AI offers particular efficiency advantages. With hundreds of customers interested in ergonomic services and three to four ergonomists on staff, reaching every client through traditional on-site visits would be impossible.


“Virtual interactions have been a great way to be efficient, allowing us to reach as many people as possible. We can send a link to a customer, who then takes a couple of videos. These are sent back to our AI platform, enabling us to work with customers remotely without being on-site,” White said.

While AI offers significant efficiency gains, White emphasized that human oversight remains essential for effective ergonomic programs. Current AI technology cannot replace the critical thinking and problem-solving capabilities that experienced ergonomists bring to workplace assessments.

“You definitely cannot put 100% trust in AI at this point, and you need a second set of eyes or even a third set of eyes,” White said.

AI performs best in controlled, predictable environments where patterns repeat consistently. White noted that shipping and receiving areas represent ideal applications for AI-powered assessments because they involve repeatable processes that AI systems can learn to evaluate effectively.

“In more controlled environments, such as a shipping receiving area, AI can be useful. Most companies have a loading dock where products are boxed and loaded onto trucks for shipping—a very repeatable process. For scenarios like this, AI could be a valuable tool because it sees the same patterns repeatedly and can generate relevant solutions or recommendations,” White said.

However, in unpredictable or highly variable work environments, human expertise becomes even more critical. Professional ergonomists must review AI outputs and provide on-site or virtual guidance for complex situations that fall outside typical patterns.

Perhaps most importantly, AI cannot replicate the human connection that drives successful ergonomic programs. White emphasized that employee feedback and buy-in remain crucial elements that technology cannot replace.

“A good consultant always considers the person doing the job. In many cases, if I’m on-site, I will physically do the job that a person is doing to understand and relate to them,” White said. “We always seek to ask those folks doing the job for solutions because if you’re doing something day in and day out, you likely have ideas about what would make this easier on your body.”

“That’s where I see a gap with AI that may never be filled—that person-to-person interaction. I think this direct human connection is very critical in what we do, both in getting stakeholder buy-in to make workplace improvements and more simply relating to the average worker’s day-to-day,” White said.

Expanding Safety Impact Beyond Ergonomics

The computer vision technology that powers ergonomic assessments can identify a much broader range of workplace safety issues, making AI a valuable tool for comprehensive safety programs. Companies with existing CCTV systems can leverage this infrastructure to monitor various hazards beyond ergonomic risks.

“The AI tools we’re using, particularly computer vision tools, aren’t just specific to ergonomics; they tie into broader safety programs. If a company has CCTV cameras monitoring their shop floor or warehouse, which most do now, the technology can identify various safety issues beyond ergonomics,” White said.

These systems can detect forklift incidents, slip-trip-falls, and other dangerous behaviors that might otherwise go unnoticed until an accident occurs. White’s team has worked with companies whose AI systems identified employees performing unsafe behaviors such as doing donuts in forklifts, climbing on energized machinery, ducking under prohibited conveyor systems, and coming into contact with hot electrical components.

“There’s a wide range of safety elements that AI can help monitor and improve, with ergonomics being just one piece of that broader safety picture,” White said.

Wearable technology adds another layer of real-time safety coaching through haptic feedback systems. These devices can provide immediate alerts when workers assume potentially harmful postures.

“An interesting benefit we’ve seen with wearables, based on our own anecdotal evidence, is postural improvement through haptic feedback. When someone wears a sensor like a belt clip and bends too far, the device vibrates to alert them they’re bending improperly and should consider lifting differently,” White said.

The coaching capability of these devices creates autonomous safety guidance without requiring constant human supervision. White’s team has documented measurable improvements in lifting behaviors after implementing wearables with haptic feedback features.

“We’ve seen pre- and post-implementation results showing that when the belt clip vibrates, users don’t bend at the waist as far. It’s coaching them without having someone there telling them all the time. This autonomous coaching capability makes the technology particularly valuable,” White said.

The personalization capabilities of AI represent perhaps its greatest long-term potential for workplace safety. Rather than applying one-size-fits-all approaches, AI can identify individual workers who may be at higher risk and provide targeted interventions.

“I think it’s really just allowing more personalization. It goes away from the one-size-fits-all approach, which is still needed,” White said. “Traditional safety elements remain essential—everyone needs their steel toes, safety glasses, height-adjustable chairs and surfaces, and safe lift training. But AI is just harnessing data to help safety professionals make better decisions.”

“This technology might allow you to hone in on a specific worker’s risk profile and provide a coaching opportunity to say, ‘I noticed your assembly station is a little too low. Did you know your desk was adjustable? Let’s raise this up so that you’re not hunching forward and extending your elbows,’” White said.

As AI continues to evolve, White expects increasing adoption across industries, with policies likely emerging to address privacy concerns while maximizing safety benefits. Some of The Hartford’s customers have already purchased AI safety systems for independent use after successful pilots, demonstrating growing confidence in the technology.

“We’ve had some customers who, after piloting these technologies with us, have independently purchased them for in-house use. This gives us better opportunities to collaborate with them and helps us manage our time more effectively. Empowering our customers with tools to better manage risk in-house not only benefits the customer and The Hartford, but society also benefits with a healthier workforce,” White said.






Navigating the concerns of AI



Ethical concerns

The ethical concerns that many professionals express focus on biased outputs and data privacy. 

Bias. AI generates outputs based on algorithms that human beings develop and on information that human beings provide. If a development team isn’t careful about how it trains its machine-learning models, the tool may generate outputs that favor one outcome over another. Those biases can render an AI system unreliable. This is, of course, a significant worry for law and tax professionals, who require access to utterly trustworthy legal and regulatory information.

Hallucinations. AI models that aren’t carefully developed may be vulnerable to hallucinations — outputs that deliver misinformation. This is probably one of the reasons why 50% of report respondents said that a lack of “demonstrable accuracy of AI-powered technologies” was a major barrier to their organization’s investment in these tools.

Data security. To provide reliable outputs, AI systems require access to large amounts of data, including sensitive personal information. But this understandably raises concerns about privacy violations, as security vulnerabilities can expose a company to financial penalties, legal difficulties, and reputational damage. 

Among those surveyed in the report, 42% cited a lack of demonstrable security as a barrier to AI investment in their organizations. Many professionals worry that AI systems might compromise sensitive data and make it publicly available. Professional organizations must ensure that they are conforming to data protection regulations and search for AI tools that prioritize data security.

Strategic concerns

Nearly two-thirds (65%) of respondents who have personal AI goals say they aren’t aware of their organization having an AI strategy. More generally, only 22% of respondents say their organization has a visible AI strategy. This disconnect can result in a lack of guidance, causing individual professionals to be inconsistent, inefficient, and even unintentionally unethical in their use of AI.

It’s also worth noting that 38% of professionals working for organizations that do have an AI strategy also reported that they don’t have any personal goals for AI adoption. Lacking objectives increases the risk that the organization won’t effectively implement its AI strategy.

Yet another concern respondents noted in the report is that AI tools could become so capable that organizations come to rely too heavily on them. They fear this overreliance could hinder professional development, particularly when it comes to building the skills needed to use AI effectively and ethically going forward. This technology is constantly evolving, after all.

Building AI literacy and professional resilience

These ethical and strategic concerns about AI are reasonable. But by addressing them, professionals and the organizations they serve can develop a competitive advantage in their markets.

Strategic training leads to more successful adoption

Professionals are well aware that rapid technological advancements, evolving business needs, and shifting workforce demographics are constantly changing the way they conduct their practices. Professionals who can adopt AI systems effectively will gain a competitive edge, boosting both their personal impact and their organization’s long-term value.




