AI Insights

KentuckianaWorks addresses concerns about jobs and AI


LOUISVILLE, Ky. — While tech CEOs have claimed that artificial intelligence has the potential to wipe out parts of the workforce, Sarah Ehresman, director of labor market intelligence for KentuckianaWorks, said she thinks those concerns are overblown.


What You Need To Know

  • Generative AI has been used more and more in recent years to help workers in their professional lives
  • When it comes to jobs, Sarah Ehresman of KentuckianaWorks said many still need a human element, as AI is imperfect
  • Data shows around one-third of Jefferson County’s workforce could see half or more of their tasks affected by AI


“We don’t have to fear this apocalypse of everyone losing their jobs,” Ehresman said. “It should not be something that we totally run away from.”

Generative AI has been used more and more in recent years to help workers in their professional lives, with many hoping to improve their speed and efficiency.

Ehresman said she also uses AI in her own daily work to write, edit and even code, and can complete a task with its help within seconds.

“I mean, something like this could potentially take you a whole day to figure out, but still, definitely not two minutes,” she said. “I don’t have to spend much time doing it. But I am able to review the code and make sure it’s accurate and that I’m getting the results that I expect.”

As for fears of being replaced by technology when it comes to some jobs, Ehresman said a human element is still necessary because AI is imperfect. 

“You know, artificial intelligence is known to hallucinate, produce bad results; it’s not perfect,” she said. “That’s where the human capabilities still matter a lot, to make sure that the results are what you would expect it to be.”

Whether people fear it or rely on it, Ehresman said AI is here to stay and should be embraced.     

“The best thing that workers can do at this point is really figure out how to work with the technology, not run away from it because they fear that it might replace them, but figure out how to use it in an effective way to make them more productive,” Ehresman said.

According to Brookings data, approximately 34% of Jefferson County’s workers could see half or more of their tasks affected by artificial intelligence, a lower rate than in coastal tech hubs.




As Zuck Races to Build Godlike AI, Women and People of Color Aren’t Invited



Mark Zuckerberg has a new mission: build artificial general intelligence (AGI), a form of AI that can reason and learn like a human. To do that, he’s assembled an elite team of researchers, engineers, and AI veterans from OpenAI, Google, Anthropic, Apple, and more. This new unit, called Meta Superintelligence Labs (MSL), is tasked with building the most powerful artificial intelligence the world has ever seen.

The tech world is calling it a “dream team.” But it’s hard not to notice what’s missing: diversity.

Of the 18 names confirmed so far by Zuckerberg in a memo and by media reports, just one is a woman. There are no Black or Latino researchers on the list. Most of the team members are men who attended elite schools and worked at top Silicon Valley firms. Many are of Asian descent—a reflection of the strong presence of Asian talent in global tech—but the group lacks a wide range of backgrounds and lived experiences.

Here’s a partial list of the new hires:

Alexandr Wang (CEO and chief AI officer)
Nat Friedman (co-lead, former GitHub CEO)
Trapit Bansal
Shuchao Bi
Huiwen Chang
Ji Lin
Joel Pobar
Jack Rae
Johan Schalkwyk
Pei Sun
Jiahui Yu
Shengjia Zhao
Ruoming Pang
Daniel Gross
Lucas Beyer
Alexander Kolesnikov
Xiaohua Zhai
Ren Hongyu

They’re brilliant. That’s not in question. But they’re also cut from a similar cloth: same institutions, same networks, same worldview. And that’s a serious problem when you’re building something as powerful as superintelligence.

What is superintelligence?

Superintelligence is an AI system that surpasses the smartest humans in reasoning, problem-solving, creativity, and even emotional intelligence. It could write code better than the best engineers, analyze laws better than top lawyers, and manage companies more efficiently than seasoned CEOs.

In theory, a superintelligent AI could revolutionize medicine, solve climate change, or eliminate traffic forever. But it could also upend job markets, deepen surveillance, widen social inequality, or automate harmful biases, especially if it reflects only the perspective of those who built it.

This is why who’s in the room matters. Because the people designing these systems are deciding whose values, assumptions, and life experiences get embedded in the algorithms that may one day run large parts of society.

Whose intelligence is being built?

AI reflects its designers. History has already shown us what happens when diversity is ignored. From facial recognition systems that fail on darker skin tones to chatbots that spit out racist, sexist, or ableist content, the risks are not hypothetical.

AI built by homogenous teams tends to replicate the blind spots of its creators. It’s a product flaw. And when the goal is to build something smarter than humanity, those flaws scale.

It’s like programming a god. If you’re going to do that, you better be damn sure it understands all of humanity, not just a narrow sliver of it.

Zuckerberg has said little about the composition of his AI team. In today’s political climate, where “diversity” is often dismissed as a distraction or “wokeness,” few leaders want to talk about it. But silence has a cost. And in this case, the cost could be an intelligence system that doesn’t see or serve the majority of people.

A warning wrapped in progress

Meta says it is building AI for everyone. But its staffing choices suggest otherwise. With no Black or Latino team members and just one woman among nearly 20 hires, the company is sending a message—intentional or not—that the future is being designed by a select few, for a select few.

The problem then becomes: can we trust this technology? When we hand over key decisions to machines, we need to be sure those machines understand the full range of human experience.

If we don’t fix the diversity gap in AI now, we might bake inequality into the very operating system of the future.

 




Artificial Intelligence Is the Future of Wellness



Would you turn over your wellness to artificial intelligence? Before you balk, hear me out. What if your watch could not only detect diseases and health issues before they arise but also communicate directly with your doctors to flag you for treatment? What if it could speak with the rest of your gadgets in real time and optimize your environment, so your bedroom was primed for your most restful sleep, your refrigerator stayed stocked with the food your body actually needs and your home fitness equipment was calibrated to give you the most effective workout for your energy level? What if, with the help of AI, your entire living environment could be so streamlined that you were immersed in the exact kind of wellness your body and mind needed at any given moment, without ever lifting a finger?

It sounds like science fiction, but those days may not be that far off. At least, not if Samsung has anything to do with it. Right now, the electronics company is investing heavily in its wearables sector to ensure it stays at the forefront of the intersection of health and technology. And in 2025, that means a hefty dose of AI.

Wearable wellness technology like watches, rings and fitness-tracking bands is not new. In fact, you’d be hard-pressed to find someone who doesn’t wear some sort of smart tracker today. But the thing that I’ve always found frustrating about wearable trackers is the data. Sure, you can see how many steps you’re taking, how many calories you’re eating, how restful your sleep is and sometimes even more specific metrics like your blood oxygen or glucose levels, but the real question remains: what should you do with all that data once you have it? What happens when you get a low score or a red alert? Without adequate knowledge of what these metrics actually mean and how they are really affecting your body, how can you know how to make a meaningful change that will actually improve your health? At best, they become a window into your body. At worst, they become a portal to anxiety and fixation, which many experts are now warning can lead to orthorexia, an unhealthy obsession with being healthy.

(Image credit: Samsung)

The Samsung Health app, when paired with the brand’s Galaxy watches, rings, and bands, tracks a staggering number of metrics, from your heart rate to your biological age. Forthcoming updates will add even more, including the ability to measure carotenoids in your skin as a way to assess your body’s antioxidant content. But Samsung also understands that what you do with the data is just as important as having it, which is why it has introduced an innovative AI-supported coaching program.




Pope Leo XIV says artificial intelligence must be ethically managed, in message to the AI for Good Summit 2025



A man demonstrates robotic hands picking up a cup in this photo taken July 8, 2025, at the AI for Good Summit 2025 in Geneva. (CNS photo/courtesy ITU/Rowan Farrell)

VATICAN CITY — Pope Leo XIV urged global leaders and experts to establish a network for the governance of AI and to seek ethical clarity regarding its use.

Artificial intelligence “requires proper ethical management and regulatory frameworks centered on the human person, and which goes beyond the mere criteria of utility or efficiency,” Cardinal Pietro Parolin, Vatican secretary of state, wrote in a message sent on the pope’s behalf.

The message was read aloud by Archbishop Ettore Balestrero, the Vatican representative to U.N. agencies in Geneva, at the AI for Good Summit 2025 being held July 8-11 in Geneva. The Vatican released a copy of the message July 10.

The summit, organized by the International Telecommunication Union in partnership with some 40 U.N. agencies and the Swiss government, focused on “identifying innovative AI applications, building skills and standards, and advancing partnerships to solve global challenges,” according to the event’s website.

“Humanity is at a crossroads, facing the immense potential generated by the digital revolution driven by Artificial Intelligence,” Cardinal Parolin wrote on behalf of the pope.

“Although responsibility for the ethical use of AI systems begins with those who develop, manage and oversee them, those who use them also share in this responsibility,” he wrote.

“On behalf of Pope Leo XIV, I would like to take this opportunity to encourage you to seek ethical clarity and to establish a coordinated local and global governance of AI, based on the shared recognition of the inherent dignity and fundamental freedoms of the human person,” Cardinal Parolin wrote.

A woman in a wheelchair reaches out to Mirokaï, part of a new generation of robots that employ artificial intelligence, in this photo taken July 8, 2025, at the AI for Good Summit 2025 in Geneva. (CNS photo/courtesy ITU/Rowan Farrell)

“This epochal transformation requires responsibility and discernment to ensure that AI is developed and utilized for the common good, building bridges of dialogue and fostering fraternity, and ensuring it serves the interests of humanity as a whole,” he wrote.

When it comes to AI’s increasing capacity to adapt “autonomously,” the message said, “it is crucial to consider the anthropological and ethical implications, the values at stake and the duties and regulatory frameworks required to uphold those values.”

“While AI can simulate aspects of human reasoning and perform specific tasks with incredible speed and efficiency, it cannot replicate moral discernment or the ability to form genuine relationships,” the papal message said. “Therefore, the development of such technological advancements must go hand in hand with respect for human and social values, the capacity to judge with a clear conscience and growth in human responsibility.”

Cardinal Parolin congratulated and thanked the members and staff of the International Telecommunication Union, which was celebrating its 160th anniversary, “for their work and constant efforts to foster global cooperation in order to bring the benefits of communication technologies to the people across the globe.”

“Connecting the human family through telegraph, radio, telephone, digital and space communications presents challenges, particularly in rural and low-income areas, where approximately 2.6 billion persons still lack access to communication technologies,” he wrote.

“We must never lose sight of the common goal” of contributing to what St. Augustine called “the tranquility of order,” and fostering “a more humane order of social relations, and peaceful and just societies in the service of integral human development and the good of the human family,” the cardinal wrote.


