
AI Insights

Xiaomi Founder’s Bold EV Bet Is Paying Off Where Apple’s Failed

Lei Jun, founder and chairman of Xiaomi Corp., the only tech company to have successfully diversified into carmaking, couldn’t resist.




Why AI is never going to run the world



The secret to human intelligence can’t be replicated or improved on by artificial intelligence, according to researcher Angus Fletcher.

Fletcher, a professor of English at The Ohio State University’s Project Narrative, explains in a new book that AI is very good at one thing: logic. But many of life’s most fundamental problems require a different type of intelligence.

“AI takes one feature of intelligence – logic – and accelerates it. As long as life calls for math, AI crushes humans,” Fletcher writes in the book “Primal Intelligence.”

“It’s the king of big-data choices. The moment, though, that life requires commonsense or imagination, AI tumbles off its throne. This is how you know that AI is never going to run the world – or anything.”

Instead, Fletcher has created a program to help people strengthen their primal intelligence, one that has been used successfully with groups ranging from the U.S. Army to elementary school students.

At its core, primal intelligence is “the brain’s ancient ability to act smart with limited information,” Fletcher said.

In many cases, the most difficult problems people face involve situations where they have limited information and need to develop a novel plan to meet a challenge.

The answer is what Fletcher calls “story thinking.”

“Humans have this ability to communicate through stories, and story thinking is the way the brain has evolved to work,” he said.

“What makes humans successful is the ability to think of and develop new behaviors and new plans. It allowed our ancestors to escape the predator. It allows us to plan, to plot our actions, to put together a story of how we might succeed.”

Humans have four “primal powers” that allow us to act smart with little information.

Those powers are intuition, imagination, emotion and commonsense. In the book, Fletcher expands on each of these and the role they have in helping humans innovate.

In essence, he says these four primal powers are driven by “narrative cognition,” the ability of our brain to think in story. Shakespeare may be the best example of how to think in story, he said.

Fletcher, who has an undergraduate degree in neuroscience and a PhD in literature, discusses in the book how Shakespeare’s innovations in storytelling have inspired innovators well beyond literature. He quotes people from Abraham Lincoln to Albert Einstein to Steve Jobs about the impact reading Shakespeare had on their lives and careers.

Many of Shakespeare’s characters are “exceptions to rules” rather than archetypes, which encourages people to think in new ways, Fletcher said.

What Shakespeare has helped these pioneers – and many other people – do is see stories in their own lives and imagine new ways of doing things and overcoming obstacles, he said.

That’s something AI can’t do, he said. AI collects a lot of data and then works out probable patterns, which is great if you have a lot of information.

“But what do you do in a totally new situation? Well, in a new situation you need to make a new plan. And that’s what story thinking can do that AI cannot,” he said.

The U.S. Army was so impressed with Fletcher’s program that it brought him in to help train soldiers in its Special Operations unit. After seeing it in action, the Army awarded Fletcher its Commendation Medal for his “groundbreaking research” that helped soldiers see the future faster, heal quicker from trauma and act wiser in life-and-death situations.

In the book, Fletcher gave an example of how one Army recruit used his primal intelligence to overcome obstacles in the most literal sense.

As part of its curriculum, Army Special Operations had a final test for recruits: an obstacle course of logs and ropes. The recruits were told they had to ring the bell at the end of the course before time expired in order to pass the test.

This particular recruit knew he couldn’t beat the clock. At the starting line, he thought of a new plan: he ran around the obstacle course, rather than through it, ringing the bell in record time.

While other military schools would have flunked him, Special Operations passed him on the strength of his ingenuity, Fletcher said. As the Army monitored his career after graduation, it found he outperformed many of his classmates on field missions.

Primal intelligence has value in all walks of life, including business. While business often emphasizes management, Fletcher said primal intelligence shines when leadership is needed.

“Management is optimizing existing processes. But the main challenge of the future is not optimizing things that already work,” Fletcher said.

“The challenge of the future is figuring things out when we don’t know what works. That’s what leadership is all about, and that’s what story thinking is all about.”

In business and elsewhere, Fletcher said AI has a role. But it should not be seen as a replacement for human intelligence.

“Humans are able to say, this could work but it hasn’t been tried before. That’s what primal intelligence is all about,” he said.

“Computers and AI are only able to repeat things that have worked in the past or engage in magical thinking. That’s not going to work in many situations we face.”






Should You Use ChatGPT For Therapy?



Sharing how you’re feeling can be frightening. Friends and family can judge, and therapists can be expensive and hard to come by, which is why some people are turning to ChatGPT for help with their mental health.

While some credit the AI service with saving their lives, others say the lack of regulation around it can pose dangers. Psychology experts from Northeastern said there are safety and privacy risks when people open up to artificial intelligence chatbots like ChatGPT.

“AI is really exciting as a new tool that has a lot of promise, and I think there’s going to be a lot of applications for psychological service delivery,” says Jessica Hoffman, a professor of applied psychology at Northeastern University. “It’s exciting to see how things are unfolding and to explore the potential for supporting psychologists and mental health providers in our work. 

“But when I think about the current state of affairs, I have significant concerns about the limits of ChatGPT for providing psychological services. There are real safety concerns that people need to be aware of. ChatGPT is not a trained therapist. It doesn’t abide by the legal and ethical obligations that mental health service providers are working with. I have concerns about safety and people’s well-being when they’re turning to ChatGPT as their sole provider.”

The cons

It’s easy to see the appeal of confiding in a chatbot. Northeastern experts say therapists can be costly and difficult to find.

“There’s a shortage of professionals,” Hoffman says. “There are barriers with insurance. There are real issues in rural areas where there’s even more of a shortage. It does make it easier to be able to just reach out to the computer and get some support.” 

Chatbots can also serve as a listening ear.

“People are lonely,” says Josephine Au, an assistant clinical professor of applied psychology at Northeastern University. “People are not just turning to (general purpose generative AI tools like) ChatGPT for therapy. They’re also looking for companionship, so sometimes it just naturally evolves into a therapy-like conversation. Other times they use these tools more explicitly as a substitute for therapy.”

However, Au says these forms of artificial intelligence are not designed to be therapeutic. In fact, these models are often set up to validate the user’s thoughts, a problem that poses a serious risk for those dealing with delusions or suicidal thoughts.

There have been cases of people who died by suicide after getting guidance on how to do so from AI chatbots, one of which prompted a lawsuit. There are also increasing reports of hospitalizations due to “AI psychosis,” where people have mental health episodes triggered by these chatbots. OpenAI added more guardrails to ChatGPT after finding it was encouraging unhealthy behavior.

The American Psychological Association warned against using AI chatbots for mental health support. Research from Northeastern found that people can bypass the language model’s guardrails and use it to get details on how to harm themselves or even die by suicide. 

“I don’t think it’s a good idea at all for people to rely on non-therapeutic platforms as a form of therapy,” Au says. “We’re talking about interactive tools that are designed to be agreeable and validating. There are risks to, like, what kind of data is generated through that kind of conversation pattern. A lot of the LLM tools are designed to be agreeable and can reinforce some problematic beliefs about oneself.”

This is especially pertinent when it comes to diagnosis. Au says people might think they have a certain condition, ask ChatGPT about it, and get a “diagnosis” from their own self-reported symptoms thanks to the way the model works. 

But Northeastern experts say a number of factors go into a diagnosis, such as examining a patient’s body language and looking at the patient’s life more holistically as the clinician develops a relationship with them. These are things AI cannot do.

“It feels like a slippery slope,” says Joshua Curtiss, an assistant professor of applied psychology at Northeastern University. “If I tell ChatGPT I have five of these nine depression symptoms, it will sort of say, ‘OK, sounds like you have depression’ and end there. What the human diagnostician would do is a structured clinical assessment. They’ll ask lots of follow-up questions about examples to support (that you’ve had) each symptom for the time criteria you’re supposed to have it, and that the aggregate of all these symptoms falls underneath a certain mental health disorder. The clinician might ask the patient to provide examples (to) justify the fact that this is having a severe level of interference in your life, like how many hours out of your job is it taking? That human element might not necessarily be entrenched in the generative AI mindset.”

Then there are the privacy concerns. Clinicians are bound by HIPAA, but chatbots don’t face the same restrictions when it comes to protecting the personal information people might share with them. OpenAI CEO Sam Altman has said there is no legal confidentiality for people using ChatGPT.

“The guardrails are not secure for the kind of sensitive information that’s being revealed,” Hoffman says of people using AI as therapists. “People need to recognize where their information is going and what’s going to happen to that information. Something that I’m very aware of as I think about training psychologists at Northeastern is really making sure that students are aware of the sensitive information they’re going to be getting as they work with people, and making sure that they don’t put any of that information into ChatGPT, because you just don’t know where that information is going to go. We really have to be very aware of how we’re training our students to use ChatGPT. This is a really big issue in the practice of psychology.”

The pros

While artificial intelligence poses risks when used by patients, Northeastern experts say certain models could be helpful to clinicians if trained the right way and with proper privacy safeguards in place.

Curtiss, a member of Northeastern’s Institute for Cognitive and Brain Health, says he has done a lot of work with artificial intelligence, specifically machine learning. His recent research found that these types of models can help predict treatment outcomes for certain mental health disorders.

“I use machine learning a lot with predictive modeling where the user has more say in what’s going on as opposed to large language models like the common ones we’re all using,” Curtiss says. 

Northeastern’s Institute for Cognitive and Brain Health is working with partners in experiential AI to see if they can develop therapeutic tools.

Hoffman says she also sees the potential for clinicians to use artificial intelligence where appropriate in order to improve their practice.

“It could be helpful for assessment,” Hoffman says. “It could be a helpful tool that clinicians use to help with intakes and with assessment to help guide more personalized plans for therapy. But it’s not automatic. It needs to have the trained clinician providing oversight and it needs to be done on a safe, secure platform.” 

For patients, Northeastern experts say there are some positive uses of chatbots that don’t require using them as a therapist. For example, Au says these tools can help people summarize their thoughts or come up with ways to continue certain practices their clinicians suggest for their health. Hoffman suggests it could also be a way for people to connect with providers.

But overall, experts say it’s better to find a therapist than lean on chatbots not designed to serve as therapeutic tools.

“I have a lot of hopes, even though I also have a lot of worries,” Au says. “The leading agents in the commercialization and monetization of mental health care tools are primarily people in tech, venture capitalists and researchers who lack clinical experience, not practicing clinicians who understand what psychotherapy is, or patients. There are users who claim that these tools have been really helpful for them (to) reduce the sense of isolation and loneliness. I remain skeptical about the authenticity of these claims because some of this could be driven by money.”


The $2 trillion AI revolution: How smart factories are rewriting the rules – Smart Industry
