
How Silicon Valley is using religious language to talk about AI

TORONTO — As the rapid, unregulated development of artificial intelligence continues, the language people in Silicon Valley use to describe it is becoming increasingly religious.

From predicting the potential destruction of humanity to a transhumanist apocalypse where people merge with AI, here’s what some of the key players are saying.

___

“I think religion will be in trouble if we create other beings. Once we start creating beings that can think for themselves and do things for themselves, maybe even have bodies if they’re robots, we may start realizing we’re less special than we thought. And the idea that we’re very special and we were made in the image of God, that idea may go out the window.”

— Nobel Prize winner Geoffrey Hinton, often dubbed the “Godfather of AI” for his pioneering work on deep learning and neural networks.

___

“By 2045, which is only 20 years from now, we’ll be a million times more powerful. And we’ll be able to have expertise in every field.”

— author and computer scientist Ray Kurzweil, who believes humans will merge with AI.

___

“There certainly are dimensions of the technology that have become extremely powerful in the last century or two that have an apocalyptic dimension. And perhaps it’s strange not to try to relate it to the biblical tradition.”

— PayPal and Palantir co-founder Peter Thiel speaking to the Hoover Institution at Stanford University.

___

“I feel that the four big AI CEOs in the U.S. are modern-day prophets with four different versions of the Gospel and they’re all telling the same basic story that this is so dangerous and so scary that I have to do it and nobody else.”

— Max Tegmark, a physicist and machine learning researcher at the Massachusetts Institute of Technology.

___

“When people in the tech industry talk about building this one true AI, it’s almost as if they think they’re creating God or something.”

— Meta CEO Mark Zuckerberg on a podcast promoting his company’s own venture into AI.

___

“Everyone (including AI companies!) will need to do their part both to prevent risks and to fully realize the benefits. But it is a world worth fighting for. If all of this really does happen over 5 to 10 years — the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights — I suspect everyone watching it will be surprised by the effect it has on them.”

— Anthropic CEO Dario Amodei in his essay, “Machines of Loving Grace: How AI Could Transform the World for the Better.”

___

“You and I are living through this once-in-human-history transition where humans go from being the smartest thing on planet Earth to not the smartest thing on planet Earth.”

— OpenAI CEO Sam Altman during an interview for TED Talks.

___

“These really big, scary problems that are complex and challenging to address — it’s so easy to gravitate towards fantastical thinking and wanting a one-size-fits-all global solution. I think it’s the reason that so many people turn to cults and all sorts of really out there beliefs when the future feels scary and uncertain. I think this is not different than that. They just have billions of dollars to actually enact their ideas.”

— Dylan Baker, lead research engineer at the Distributed AI Research Institute.




Japanese Nail Salon Attempts Reinvention as Major Bitcoin Holder

Convano Inc. was, until recently, a sleepy Tokyo-listed operator of nail salons. Now it wants to become one of the world’s largest corporate holders of Bitcoin — the latest in a wave of radical financial reinventions pulling the likes of biotech firms and regional banks into crypto’s orbit.




72% of teens are turning to AI for companionship, experts warn of risks involved

Everywhere we look, artificial intelligence (AI) seems to be at our fingertips, from Siri to Google’s Gemini to ChatGPT.

Traditionally, teenagers looked to each other when it came to seeking advice, emotional support, or meaningful conversations.

However, new data released by Common Sense Media shows nearly three out of every four U.S. teenagers are now turning to AI to fill that void.

While AI can be beneficial when it comes to make-up tips, experts warn that using chatbots in place of human connection can pose huge risks.

“I think there’s some real benefits that teens might accrue from it, but there’s also some real concerns,” Bryan Victor, associate professor at Wayne State University’s School of Social Work, said. “The safeguards are really inadequate and have been documented time and again to be kind of easily circumvented by the user.”

There are two kinds of AI: general-purpose, which is used for things like recipes or brainstorming ideas, and companion, where the chatbot is programmed to serve as a friend.

Where companionship AI becomes dangerous has a lot to do with its design and programming, according to Victor.

Companion bots are often trained to tell users what they want to hear without much pushback, while seeking persistent engagement and constantly asking follow-up questions to keep users engaged.

“These are all design features that companies could really take action towards and change moving forward,” said Victor. “I think parents and broader society need to encourage them to do that.”

When it comes to the dangers of mental health challenges mixing with AI, Victor references the case of 16-year-old Adam Raine, who died by suicide earlier this year after months of engagement with AI.

“As more information comes out about the case, it’s clear that ChatGPT consistently ignored a lot of warning signs that were being shared by Adam,” said Victor. “In some ways it facilitated or pushed the youth closer toward making that decision.”

When it comes to protecting teens from the risks of AI, Victor has a few red flags to look out for:

  • Preferring interactions with AI over friends and family
  • Withdrawing socially
  • Becoming preoccupied with AI


