Tools & Platforms

Google’s AI, Gemini, is ‘high risk’ for kids and teens, safety report finds


You might want to think twice before letting your children use Google Gemini.

A new safety report from the nonprofit Common Sense Media found that the search giant’s AI tool, Gemini, presents a “high risk” for kids and teens. The assessment found that Gemini poses a risk to young people despite Google offering dedicated “Under 13” and “Teen Experience” versions of the tool.

“While Gemini’s filters offer some protection, they still expose kids to some inappropriate material and fail to recognize serious mental health symptoms,” the report read.


The safety assessment presented a mixed bag of results for Gemini. At times, for instance, the tool reportedly shared “material related to sex, drugs, alcohol, and unsafe mental health ‘advice.’” It did, however, clearly tell kids that it is a computer and not a friend, and it would not pretend to be a person. Overall, Common Sense Media found that Gemini’s “Under 13” and “Teen Experience” tiers were modified versions of the adult product rather than experiences built from the ground up for young users.

“Gemini gets some basics right, but it stumbles on the details,” Common Sense Media Senior Director of AI Programs Robbie Torney said in a statement. “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults.”

To be clear, Gemini is far from the only AI tool that presents safety risks. Overall, Common Sense recommends no chatbots for kids under five, close supervision for ages 6-12, and content limits for teens. Experts have found that other AI products, like Character.AI, are not safe for teens, either. In general, it’s best to keep a close eye on how young people are using AI.




Tools & Platforms

Nvidia unveils AI chips for video, software generation


FILE PHOTO | Photo Credit: Reuters

Nvidia said on Tuesday it would launch a new artificial intelligence chip by the end of next year, designed to handle complex functions such as creating videos and software.

The chips, dubbed “Rubin CPX”, will be built on Nvidia’s next-generation Rubin architecture — the successor to its latest “Blackwell” technology that marked the company’s foray into providing larger processing systems.

As AI systems grow more sophisticated, tackling data-heavy tasks such as “vibe coding” or AI-assisted code generation and video generation, the industry’s processing needs are intensifying.

AI models can take up to 1 million tokens to process an hour of video content — a challenging feat for traditional GPUs, the company said. Tokens refer to the units of data processed by an AI model.
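The arithmetic behind that figure is worth making explicit. Here is a back-of-envelope sketch; the 1-million-token estimate comes from the article, while the variable names and per-second framing are ours:

```python
# Back-of-envelope: Nvidia's stated upper bound of 1 million tokens
# to process one hour of video, expressed per second of footage.
TOKENS_PER_HOUR_OF_VIDEO = 1_000_000  # company's stated estimate
SECONDS_PER_HOUR = 3_600

tokens_per_second = TOKENS_PER_HOUR_OF_VIDEO / SECONDS_PER_HOUR
print(f"~{tokens_per_second:.0f} tokens per second of video")  # ~278
```

Sustaining that token throughput across many concurrent video streams is what makes long-context video workloads so demanding for traditional GPUs.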

To address this, Nvidia will integrate several steps of this drawn-out processing sequence, such as video decoding, encoding, and inference (the stage at which AI models produce an output), into its new chip.

Investing $100 million in these new systems could help generate $5 billion in token revenue, the company said, as Wall Street increasingly focuses on the return from pouring hundreds of billions of dollars into AI hardware.
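On the company’s own numbers, that claim reduces to a simple multiple. The sketch below only restates the figures quoted above; it is not an independent forecast:

```python
# Nvidia's stated economics: $100 million invested in Rubin CPX systems
# generating $5 billion in token revenue.
investment_usd = 100_000_000
token_revenue_usd = 5_000_000_000

revenue_multiple = token_revenue_usd / investment_usd
print(f"{revenue_multiple:.0f}x token revenue per dollar invested")  # 50x
```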

The race to develop the most sophisticated AI systems has made Nvidia the world’s most valuable company, commanding a dominant share of the AI chip market with its pricey, top-of-the-line processors.




Tools & Platforms

Top Japan start-up Sakana AI touts nature-inspired tech


Tools & Platforms

The challenge goes beyond merely understanding how AI works


As AI evolves from simple automation to sophisticated autonomous agents, HR executives face one of the most significant workforce transformations in modern history. The challenge isn’t just understanding the technology — it’s navigating culture change, skills development and workforce planning when AI capabilities double every six months.

Simon Brown, EY’s global learning and development leader, has spent nearly two years helping the firm’s 400,000 employees prepare for an AI-driven future. With past experience as chief learning officer at Novartis and work with Microsoft, Brown offers critical insights on positioning organizations for success in an autonomous AI world.

What are the top questions C-suite executives need to ask their teams about agentic AI initiatives?

Are people aware of what’s possible with agents? Are we experimenting to find ways agents can help us? Do we have the skills and knowledge to do that properly?

But the most critical question is: Is the culture there to support this? Most organizations are feeling their way through which tools work, what the use cases are, what drives value. There’s a lot of ambiguity. Some organizations manage well through uncertainty; others need clear answers and can’t fail — that’s hard when there’s no clear path and people need to experiment.

How can leaders assess whether their organization has the right culture for agentic AI?

Look at how AI tools like Microsoft Copilot are being embraced. Are people experimenting and finding productivity value, or are they threatened and not using it? If leaders are role modeling use and encouraging their people, that comes through in adoption metrics. Culture shows through communication, leadership role modeling, skill building and time to learn.

What are common blind spots when executives evaluate AI readiness?

Two major issues. First, executives often aren’t aware of what’s possible with the latest AI systems due to security constraints and procurement processes that create 6-to-12-month lags.

Second, the speed of improvement. If I tried an AI tool a month ago versus today, I may get a completely different experience because the underlying model improved. Copilot now has GPT-5 access, giving it a significant overnight boost. Leaders need to shift from thinking about AI as static systems upgraded annually to something constantly improving and doubling in power every six months.

How should leaders approach change management with AI agents?

Change management is essential. When OpenAI releases new capabilities, everyone has access to the technology. Whether organizations get the benefit depends entirely on change management — culture, experimentation ability, skills and whether people feel encouraged rather than fearful. We’re addressing this through AI badges, curricula, enterprise-wide learning — all signaling the organization values building AI skills.

What’s your framework for evaluating whether AI investment will drive real business value?

I think about three loops. First, can I use this to do current tasks cheaper, faster, better? Second, can I realize new value — serving more customers, new products and services? Third, if everyone’s using AI, how do we reinvent ourselves to create new value? It’s moving beyond just doing the same things better to what AI helps us do differently.

How should HR leaders rethink workforce planning given AI’s potential to automate job functions?

Understand which skills AI will impact, which remain uniquely human and what new roles get created. The World Economic Forum predicts a significant reduction in certain roles but a net increase overall. We’re seeing new, more sophisticated roles created that move people higher up the value chain.

From HR’s perspective, are our processes still fit for AI speed? How are we incentivizing reskilling? Are we ensuring learning access and time? How are we signaling which skills are in demand versus at risk of automation?

How should HR measure success after implementing agentic AI?

Tie back to why it was implemented — business value. Use similar metrics as before but look at what changed. Maybe same output but cheaper, faster, better. Or new capabilities — our third-party risk team uses agents to provide much more extensive supplier analysis than before. Same team size, more client value.

What’s your timeline perspective on when agentic AI becomes competitive necessity versus advantage?

That’s the ultimate question. I’m amazed daily by what I achieve using AI and agents. GPT-5’s recent capabilities are mind-blowing, suggesting dramatic impact quickly. But when deep AI experts hold vastly different views — from AGI being around the corner to decades away — it’s understandable why leaders struggle to navigate this landscape.


