AI Insights

Alibaba unveils AI-powered smart glasses – Computerworld

E-commerce giant Alibaba has unveiled Quark AI Glasses, the company’s first foray into the wearable technology market, CNBC reports. The glasses are powered by the company’s own AI model Qwen and digital assistant Quark.

They offer features such as hands-free calling, music streaming, real-time translation, meeting transcription, and a built-in camera. Users will also be able to navigate, compare prices on the Taobao e-commerce platform and pay with Alipay — services integrated through the Alibaba ecosystem.

The launch puts Alibaba in competition with, among others, Meta (which has partnered with Ray-Ban) and China's Xiaomi, which has also invested in AI glasses this year.

Price and technical details have not yet been disclosed. The glasses are expected to be released in China before the end of the year.


General Counsel’s Job Changing as More Companies Adopt AI

The general counsel’s role is evolving to include more conversations around policy and business direction, as more companies deploy artificial intelligence, panelists at a University of California Berkeley conference said Thursday.

“We are not just lawyers anymore. We are driving a lot of the policy conversations, the business conversations, because of the geopolitical issues going on and because of the regulatory, or lack thereof, framework for products and services,” said Lauren Lennon, general counsel at Scale AI, a company that uses data to train AI systems.

Scattered regulation and fraying international alliances are also redefining the general counsel’s job, panelists …

California bill regulating companion chatbots advances to Senate

The California State Assembly approved legislation Tuesday that would place new safeguards on artificial intelligence-powered chatbots to better protect children and other vulnerable users.

Introduced in July by state Sen. Steve Padilla, Senate Bill 243 requires companies that operate chatbots marketed as "companions" to avoid exposing minors to sexual content, regularly remind users that they are speaking to an AI and not a person, and disclose that chatbots may not be appropriate for minors.

The bill passed the Assembly with bipartisan support and now heads to California’s Senate for a final vote.

"As we strive for innovation, we cannot forget our responsibility to protect the most vulnerable among us," Padilla said in a statement. "Safety must be at the heart of all developments around this rapidly changing technology. Big Tech has proven time and again, they cannot be trusted to police themselves."

The push for regulation comes as tragic instances of minors harmed by chatbot interactions have made national headlines. Last year, Adam Raine, a teenager in California, died by suicide after allegedly being encouraged by OpenAI’s chatbot, ChatGPT. In Florida, 14-year-old Sewell Setzer formed an emotional relationship with a chatbot on the platform Character.ai before taking his own life.

A March study by the MIT Media Lab examining the relationship between AI chatbots and loneliness found that higher daily usage correlated with increased loneliness, dependence and “problematic” use, a term that researchers used to characterize addiction to using chatbots. The study revealed that companion chatbots can be more addictive than social media, due to their ability to figure out what users want to hear and provide that feedback.

Setzer's mother, Megan Garcia, and Raine's parents have filed separate lawsuits against Character.ai and OpenAI, alleging that the chatbots' addictive, reward-based designs failed to intervene when both teens expressed thoughts of self-harm.

The California legislation also mandates companies program AI chatbots to respond to signs of suicidal thoughts or self-harm, including directing users to crisis hotlines, and requires annual reporting on how the bots affect users’ mental health. The bill allows families to pursue legal action against companies that fail to comply.


Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor’s in anthropology at Wagner College and master’s in media innovation from Northeastern University.

AI a 'Game Changer' for Assistance, Q&As in NJ Classrooms – GovTech
