
How AI tools are transforming the lives of people with disabilities

For people with disabilities, artificial intelligence tools are helping them see, hear, experience, and move through the world in profound new ways.

Guests:

Kyle Keane, senior lecturer in Assistive Technologies at the School of Computer Science at the University of Bristol in England. Formerly a lecturer in the Department of Materials Science and Engineering at the Massachusetts Institute of Technology (MIT).

Tenzin Wangmo, senior researcher at the Institute for Biomedical Ethics, University of Basel.

Transcript

Part I

MEGHNA CHAKRABARTI: In 1824, 15-year-old Louis Braille invented a new alphabet. He was three years old when he injured one of his eyes with a leatherworking awl, and he had lost his sight completely by the time he was five. At 10, Braille received a scholarship to the National Institute for Blind Children in France, and it was there that he invented his new alphabet.

Braille is based upon a grid of dots, small enough to fit under the tip of your finger. These dots are numbered one through six, and when raised in different combinations, represent different letters, numbers, and punctuation marks.

This new technology allowed him, and visually impaired people the world over, to read.
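As a side note for technically minded readers: a six-dot braille cell is, in effect, a tiny binary code, and Unicode encodes exactly that pattern. The sketch below is an illustration of the idea, not something from the broadcast.

```python
# Each braille dot 1-6 maps to one bit; Unicode's braille block starts at U+2800
# and uses the same bit layout, so a cell is the base code point plus its dot bits.
DOT_BITS = {1: 0x01, 2: 0x02, 3: 0x04, 4: 0x08, 5: 0x10, 6: 0x20}

def braille_cell(dots):
    """Return the Unicode braille character for a set of raised dots."""
    code = 0x2800
    for dot in dots:
        code |= DOT_BITS[dot]
    return chr(code)

# The letters a, b and c use dots {1}, {1, 2} and {1, 4} respectively.
print(braille_cell({1}), braille_cell({1, 2}), braille_cell({1, 4}))  # ⠁ ⠃ ⠉
```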

In 1916 at Bell Labs, Harvey Fletcher helped build the Western Electric hearing aid, the first functional electronic hearing aid. Then in 1939, Bell Labs was at it again, this time, finding ways to help people speak. Homer Dudley began demonstrating the voder, the world’s first electronic device that could generate continuous human speech.

Remember, this is 1939, and there’s a radio report from the time that captured an operator using the device to sing Auld Lang Syne.

(DEVICE SINGS)

CHAKRABARTI: Wow, that’s really something. Less than 100 years later, assistive technology for the voice took another huge step forward. In 2018, Virginia Democrat Jennifer Wexton successfully unseated Republican Representative Barbara Comstock, and in her election night victory speech, Wexton lovingly looked at her children, who were on stage with her.

JENNIFER WEXTON: I know it’s hard having me for a mom. That’s exactly right. But I do this for you. I do this for you because I want to make a better world for you and a better future for you, and I love you guys so much.

CHAKRABARTI: Now, we decided to play that because Wexton was just 50 years old at the time she gave that victory speech. It was barely five years later, in 2023, that Representative Wexton received a devastating diagnosis: progressive supranuclear palsy. That’s a neurodegenerative disease that causes slowing of muscle movements, tremors, balance and memory changes, and difficulty speaking, which you could hear when she announced her diagnosis to constituents in April of 2023.

WEXTON: Treatment process is one that involves time and commitment. So you’re going to see me have some good days, some days that are not so good, but I want you to know this, my head, my heart are 100% committed to serving people of Virginia.

I’m not gonna let Parkinson’s stop me from being me.

CHAKRABARTI: Just a year later, Representative Wexton could not speak at all and needed a text-to-speech app to make herself heard on the House floor. Here she is in May of 2024.

WEXTON: Just a moment, please. Thank you, Madam Speaker. As you may know, last year I was diagnosed with progressive supranuclear palsy, or PSP. It’s basically Parkinson’s on steroids, and I don’t recommend it. It’s affected my ability to speak, so I’m using this text-to-speech app to make it easier for you and our colleagues to hear and understand me.

CHAKRABARTI: Of course, that text-to-speech app allowed House members to understand her, but it didn’t sound anything like Wexton. So while she had her audible words, the representative had still lost her voice. That is, until an AI company found a way to give it back.

WEXTON: I used to be one of those people who hated the sound of my voice. When my ads came on TV, I would cringe and change the channel.

But you truly don’t know what you’ve got till it’s gone, because hearing the new AI of my old voice for the first time was music to my ears. It was the most beautiful thing I had ever heard, and I cried tears of joy. I’m not going to sugarcoat the difficulties —

CHAKRABARTI: That is representative Wexton speaking with an AI generated voice.

Her voice. A company called Eleven Labs used Wexton’s speeches and recordings from before she received her diagnosis to generate a clone of her actual voice, and you heard her speak there in July of 2024. She still needs to input text into a device, but what comes out is her voice as she sounded before her diagnosis.

Now, Wexton ultimately decided not to run for reelection, and she used the AI technology in her final speech to her congressional colleagues last December, in what was likely a historic moment: the first time anyone has delivered AI-assisted remarks on the House floor.

WEXTON: This has been a journey which has been so challenging yet one, which I am proud to have stood strong in.

And done my part to give hope and comfort to others facing similar battles. Our disabilities and our health struggles do not define who we are, and I feel more strongly than ever that it is so important to share that truth with the world.

To my family, Team Wexton and all the people of Virginia 10. Thank you.

I hope I’ve made you proud.

CHAKRABARTI: And by the way, we should note that Representative Wexton’s voice might sound a little tinny there because it is still coming through a device, and that device is then being picked up by the microphone at the podium on the House floor.

Now, Representative Wexton’s voice is just one of the recent and profound breakthroughs in AI-driven assistive technology. There are also now programs to help visually impaired people navigate highly complex spaces with incredible accuracy. In terms of prosthetics, bionic limbs have become so advanced that there are reports of people even forgetting that they’re prosthetics at all.

So today, we do a lot of AI coverage on this show because it is profoundly changing just about everything about civilization, and oftentimes in concerning ways. But today we are going to talk about one case in which AI is changing people’s lives for the better. And Kyle Keane joins us. He’s senior lecturer in Assistive Technologies at the School of Computer Science at the University of Bristol in England, and he joins us from Bristol, UK. Kyle Keane, welcome to On Point.

KYLE KEANE: Thank you so much for having me, Meghna.

CHAKRABARTI: Kyle, I was wondering if you could start off by telling us a little bit about the fact that you are not only developing these new assistive technologies, but using them yourself as well.

KEANE: I am, yes. So, I am myself blind.

I have a degenerative retinal condition, so the back of the eye that processes light and then sends a signal up to the brain. My particular retina is very dynamically changing over the course of my life, which means I’ve had to adapt my career many times to adjust for a constantly changing sensory experience of the world.

CHAKRABARTI: Interesting. Can you tell me a little bit more, Kyle, because I understand that you weren’t born with this loss of sight, right? It has happened over time. And at what point did it become where you were really functionally blind? How old were you?

KEANE: The first time it was really affecting me in a way that I profoundly noticed was actually in elementary school, and it’s a bit of a funny thing. So the answer to the question would typically be, my vision started to change, and it affected me. But actually, I had a very proactive family and community who had heard about the diagnosis and sprang into action, and started to expose me to assistive technologies like a large magnifier that was installed on my desk in elementary school so I could look at textbooks.

I actually don’t know whether my vision was affecting me at that point, because my social infrastructure had snapped into action so effectively that I kept functioning. I think that was because the supports were in place.

CHAKRABARTI: Oh, that’s interesting. And it’s actually very heartening to hear, Kyle, because we tend to think of progressive disabilities as this sense of loss that can’t be filled.

But that’s not what you’re describing. And I’m really happy to hear that. But nevertheless, it’s interesting that you said even a magnifying glass should fairly be considered a form of assistive technology. So what other kinds of technologies have you used over the years?

KEANE: I was very much reliant on eyeglasses, which are another assistive technology that we don’t frequently speak of as an assistive technology because they’re incredibly mainstream.

We’ve accepted that a lot of people’s vision is variable and correctable, but that correction is due to being able to put a magnifying glass in front of the eye, a lens that focuses light and adapts the vision so that people can see in a similar way. So I was also very much using eyeglasses for the majority of my life, and many other things, including color inversion on my screen to make bright things dark and dark things bright, and changing the brightness on the screen.

And I really live a lot inside of computer land. And then I nowadays navigate with a white cane. Everywhere I go, I always have my white cane around to make sure that I understand the world around me.
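For readers curious about what the color inversion Keane mentions does under the hood, here is a minimal sketch, assuming an 8-bit RGB image held in a NumPy array; real screen-level inversion is handled by the operating system’s display pipeline, not application code like this.

```python
import numpy as np

def invert_colors(image: np.ndarray) -> np.ndarray:
    """Invert an 8-bit RGB image so bright pixels become dark and vice versa."""
    return 255 - image

# One bright pixel and one dark pixel, to show the swap.
frame = np.array([[[250, 250, 250], [10, 10, 10]]], dtype=np.uint8)
print(invert_colors(frame))  # bright pixel -> near-black, dark pixel -> near-white
```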

CHAKRABARTI: Okay. And so again, just to help me and all of our listeners truly understand what your vision is in the absence of any of these technologies, like if you were just sitting down, like even having this conversation with me, I don’t know if you’re using any of these technologies, but what is your vision like right now?

KEANE: It’s a dangerous question, Meghna, honestly. (LAUGHS)

CHAKRABARTI: Oh, I’m sorry.

KEANE: It’s a big philosophical one and the danger is that I will talk for the next hour about this, but I’m going to try to keep it very short. It’s hard enough for me to describe my vision but it’s also hard for others to describe their vision.

So the way we talk about vision, and about having a shared experience of the world, is often: Hey, can you see that thing there? So imagine we’re standing in front of a car and we both point at the car, and we say, can you see the car? And we both say yes. We’re going to assume that we’re doing that with just our eyes, but there are people who also sense things using sound.

For instance, echolocation, being able to bounce sound off of the car. And then they would also say, yes, of course, I see the car there. So there’s a really complex, philosophical thing here. I see what you would like me to do there. Unfortunately, I get very philosophical, because I think it’s important to realize that the world we create together is a shared belief system.

Part II

CHAKRABARTI: Kyle, I have to say I almost laughed with delight, genuine delight, when you challenged me on my question of, how do you see? I love being challenged that way, because you’re exactly right. We should be asking ourselves what we mean when we say things like ‘see,’ right? It is actually a philosophical question about the common, let’s say, presumptions we have about shared experiences.

So I love that. I love that. And it informs a lot of the questions I’m gonna ask you a little bit later in the show, about the philosophical, even metaphysical (LAUGHS) ways we have of experiencing the world. But let me refine my question a little bit, and again, feel free to push back, but okay, let’s take your car example.

If I were to ask you how you are absorbing information about the car, is it fair to say that the majority of that information is not coming through, say, your optic nerve?

KEANE: Oh, I love the far more specific framing of that question.

CHAKRABARTI: (LAUGHS)

KEANE: I’m going to lean into much more, I think, the traditional, relatable answer, which is I will describe, if I’m standing, let’s say, and I’m gonna be a bit of a scientist here, so let’s say I’m standing across the street, a two lane street from a car that’s on the other side of the street, parked on the sidewalk.

If I am looking out in the direction of that car, I am likely to see maybe something that is like a very blurry, distorted, dynamically moving as if you’re, you know, when you’re in the desert and you’re looking over a really hot desert and the wind does all sorts of wiggly warping things. So my vision is constantly doing that and I would only be able to see maybe one of the tires.

And how would I know it’s a tire? Not that it would look like anything that I ever used to call a tire, but there would be a dark circle raised up above some other blurry colored thing. And then there would be something like what I would call a hubcap, but that’s mostly because I’m constantly adapting to how I label whatever it is, this information I’m absorbing through my optic nerve, which I love the way that you phrased it. And of the three or four cars parked behind this car, I would only see the one wheel.

CHAKRABARTI: Okay. No, I really appreciate that because I firmly believe that in order to have a really strong and fair understanding of how any technologies are impacting positively or negatively people, it behooves us all to understand the frame of reference that person has. So thank you for that.

Okay. So tell me then a little bit more about how it is that your scholarship and your research, which I believe began in like in computer science, moved towards this world of AI and assistive technology.

KEANE: Can you help me by telling me how long you want me to talk about this? Because I do get it, be honest.

CHAKRABARTI: (LAUGHS) I’ll be honest. I keep saying that if I were able to have one of those podcasts that went on for three or four hours, I’d be like, let it rip Kyle. But unfortunately we don’t have that much time. So can you give me, I don’t know the one- or two-minute version of how you got into the AI aspect of assistive technology?

KEANE: Thank you for helping me focus in, because I do have a broad frame sometimes. So, the AI components of assistive tech: the short version is I used to use a lot of very dynamic visual components on a computer. I was a physicist, actually a computational physicist, and I was doing simulations of quantum systems.

And I would do that by modeling the system and then generating these visualizations of the data from the simulation, and then I’d change parameters and I’d watch it dynamically change. And I knew how to label all these phenomena. And as my vision started to go, I started to test what was available to people who are blind trying to do this.

And there were a lot of limits, because a standard computer only has a screen and a speaker to make sound come out of it. And if you don’t have the screen anymore, you then rely on that speaker, and the speaker has some limits. It can produce music, it can produce sounds, it can produce speech, and that is a lot of information that you can generate.

But I wasn’t quite able to recalibrate myself to extract all of the dynamic information that I was getting out of those visualizations. And so I started to work on spoken interfaces, and as I started to do text generation of these scientific simulations and trying to say, computer, describe to me what’s happening in the system.

I realized that I needed to do natural language processing. I needed the computer to process my speech and natural language generation. I needed the computer to speak meaningful, very scientifically accurate words to me so that I could interpret what was happening. And that was the nexus of realizing if I’m going to lean into speech and I want to be high precision, scientifically accurate, I’m going to need to find the right way to do that.

And so I went and worked at a software company and then started to build programs out in universities trying to research some of this stuff. And by a lot of luck, was able to get mentorship along the way and learn a lot about neural networks and artificial intelligence.
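To make the idea of “computer, describe to me what’s happening in the system” concrete, here is a hypothetical sketch of data-to-text generation for a simulation result; the quantity name and phrasing are illustrative assumptions, not Keane’s actual code.

```python
def describe_series(name, values):
    """Summarize a numeric time series in one screen-reader-friendly sentence."""
    start, end, peak = values[0], values[-1], max(values)
    trend = "increases" if end > start else "decreases" if end < start else "stays flat"
    return (f"The {name} {trend} from {start:.2f} to {end:.2f}, "
            f"reaching a peak of {peak:.2f}.")

# Illustrative output for a made-up quantum simulation run.
print(describe_series("occupation probability", [0.10, 0.35, 0.80, 0.55]))
```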

CHAKRABARTI: Okay. And I understand that part of this research journey that you took also led you to work for, what, a company or to help develop technologies that are now in like billions of people’s pockets in terms of developing Siri.

KEANE: Okay let me be more humble about the claim.

CHAKRABARTI: (LAUGHS)

KEANE: I wrote code that did in fact help Siri speak. But I definitely would not claim that I developed Siri, just to be very clear. But Siri’s backend would call out to many different services to answer different types of questions.

So this is very relevant even for the moment now, as people can ask phones and computers countless different types of questions. Some of those questions are of a very computable and scientifically accurate type. If I want to convert one currency to another currency using the conversion rate that’s happening right now, that’s a really specific scientific calculation.

If I want to know the distance to the moon divided by the length of the Amazon River. That is a really specific calculation that has a very specific answer. And as I started to push into these domains, it started to just open up worlds of kind of curiosities about how do we actually enable this really accurate thing.

And I was concurrently working to try to help blind people get access to the same information. With spoken interfaces, we used to just chat with these things by typing stuff, then we spoke to them and they would generate visual information, and at some point they started to speak to us. I got to take some of the code that I had been writing for blind people to get access to these deep scientific answers and merge that into the backend.

That was answering a good portion of the quantitative questions that Apple Siri was sending to Wolfram Alpha.
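For a sense of what such a “computable” query looks like, here is a back-of-the-envelope version using approximate public figures (the numbers below are the editor’s assumptions, not values from the interview or from Wolfram Alpha).

```python
MOON_DISTANCE_KM = 384_400   # average Earth-Moon distance, approximate
AMAZON_LENGTH_KM = 6_400     # commonly cited length of the Amazon River, approximate

ratio = MOON_DISTANCE_KM / AMAZON_LENGTH_KM
print(f"The Moon is roughly {ratio:.0f} Amazon River lengths away.")  # ~60
```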

CHAKRABARTI: I’m gonna admit, Kyle, that I’ve got my iPhone at the table here and I’m eyeing it. I never use Siri, but I’m so tempted to actually ask Siri, what is the distance to the moon divided by the length of the Amazon River?

KEANE: (LAUGHS)

CHAKRABARTI: I’ll save that for after the show. Okay, so give some examples that would be good for us as a general audience to understand: what has AI, whether it’s an LLM or some other kind of AI system or model, allowed you or other developers to do in terms of specific kinds of assistive technology?

KEANE: So moving into the assistive technology domain, is it okay if I just light speed warp ahead to thinking almost bleeding edge technology and what I want to build?

CHAKRABARTI: Oh, I was about to say yes, but how about you do that second. Because first, are there specific examples that you would turn to now and say, Hey, look, this is what we can do right now?

KEANE: Okay. Yes. Yeah, that is definitely the case. So right now, if I turn on ChatGPT and I turn on the video mode, it will have a video showing, through my camera, everything that’s around me, and I can ask it questions. This is something that wasn’t necessarily built as an assistive technology for blind people.

Although it happens to have incredible utility, because now if I happen to drop something, I can turn this on, point my phone in that direction and ask the system, Hey, do you see an iPhone on the ground over there or something like this. And if it’s a recognizable object, it will be able to describe what’s around and maybe even answer some questions about it.

So that is now a thing. There are certainly other assistive technologies, but as far as I’m concerned, that is one that has hit the market. It is now extremely widely available, and I think it makes a massive impact in the lives of people who are blind and low vision like myself, and many other people can use these systems as well.

If you’re traveling and you’re in an unfamiliar place, you can ask a question that’s just an information access issue. We just happen to have a bunch of different use cases that blind people can now benefit from as well.

CHAKRABARTI: Okay. Okay. So that’s a really compelling example, and we started the show with the example of how far speech generation has come for people who have lost the ability to speak with their own physical voices.

I’m focusing on the senses here for a second, but what about hearing? Is AI being used to develop new technologies for hearing?

KEANE: It is, certainly. It’s a harder domain. So we are doing things like noise-canceling headphones, let’s say. It’s not harder, it’s just a little harder to describe, because it’s not so relatable day to day, I think.

But in the noise canceling headphone, it used to just be a frequency filter that would remove certain types of things that were coming into a microphone. It would cancel those out before sending the signal to the ear, and now they can apply very sophisticated things like boost the natural human speech in the world around me, that is not just a mathematical operation that one can do by some type of a frequency filter.

That can now be accomplished by neural networks and artificial intelligence on devices that can allow people to filter out the background noise in a cafe and hear the person who’s speaking to them better. Because it’s actually selecting out the human speech in order to help filter it for the person to hear better.
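A rough sketch of the contrast Keane draws: the classic approach keeps a fixed frequency band where speech mostly lives, while the modern approach would swap in a trained speech-enhancement network. The 300-3400 Hz band and the sample rate below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter

def speech_bandpass(audio, sample_rate=16_000, low_hz=300.0, high_hz=3400.0):
    """Old-style fix: keep only the rough frequency band of human speech."""
    nyquist = sample_rate / 2
    b, a = butter(4, [low_hz / nyquist, high_hz / nyquist], btype="band")
    return lfilter(b, a, audio)

noisy = np.random.randn(16_000)        # one second of stand-in "cafe noise"
filtered = speech_bandpass(noisy)
print(filtered.shape)                  # (16000,)

# The learned version replaces the fixed filter with a model, e.g.:
#   enhanced = speech_enhancement_model(noisy)   # hypothetical trained network
```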

CHAKRABARTI: Gosh, I would want that right now, and technically speaking, when I go to the doctor, they don’t tell me yet that there’s anything out of the ordinary with my hearing. Which makes me wonder, actually, Kyle: we’ve been framing this in terms of assistive technologies that could help people in various disability communities, but what you’re describing is technology that has the potential of changing everybody’s lives, right? It has the possibility of resetting what we all see, whether we’re in a disability community or not, as the kind of ways that we want to interact and engage with the world.

KEANE: Absolutely, thank you for framing it and driving it there.

Because this is a really important thing. So I’ll call it out in two ways. One is we sometimes frame situational disabilities: that is a moment in which your hands are full and you want to be able to get through a door that has an impossible-to-grip doorknob. So you may not think of yourself as traditionally and permanently disabled, but because you’re in a situation in which you can’t engage with the world, you are now situationally disabled.

So that’s one place where frequently context does matter, and we get put into situations where if the situation was different, we may be able to accomplish a task, but the circumstances have made it difficult.

And a second framing of this: if you and I think about what we are capable of doing, we can do all sorts of things. Think of the world-record model of this. If you think about everybody who’s ever accomplished a world record, could I do that? No. I do believe we can do a lot more than we think we can, but I genuinely don’t think I could beat every single world record that’s on file, and that’s because humanity as a collective is capable of immense things.

And sometimes I want to do stuff that is in that direction, and it’s not something that I may be presently naturally gifted at, but technology may be able to help boost my performance and help me to accomplish a particular activity. And everyone can relate to that.

If we want to strive to do things that are beyond what we can currently do, which hopefully lots of us do, technology can empower that and allow us to do it. And that is just a technology that happens to be assistive. And I think that’s a certain type of technology we should be advocating for in the world, while also not forgetting that an assistive technology can be something very specific for a person with a very explicit disability.

CHAKRABARTI: Yes. No, totally agreed. And in fact, the way you just described it gets us to something I want to talk about with you in a few minutes: the ethical questions that go along with these incredible advances in AI-driven assistive technology.

But we have about a minute here before our next break. (LAUGHS) So I’ll give you a minute to start with your bleeding edge technologies that you wanted to talk about.

KEANE: Oh, how can I tease the thing? I’m going to go with the word spatial. Spatial reasoning. There are advanced capabilities that humans have, and psychologists and educators have been doing an incredible job of putting down records of people’s ability to do things and figuring out curriculums to help them do that.

One particular capability is spatial reasoning, and some of us have it to different degrees than others. But if I say, is the car across the street that we are talking about to the left or the right of the blue car, from your vantage point? We don’t actually have any cars we’re looking at right now, but that “is it to the left or the right” question becomes a spatial reasoning task. And if you say, if I rotated that car upside down, would it look like this or like this, if I sat it on its nose or if I sat it on its back, which of those means that it’s rotated upside down?

These are questions of spatial reasoning, and I really want to help AI understand these things.

Part III

CHAKRABARTI: Kyle, I have to say I was a little stumped as to why spatial awareness, right? And understanding if a particular object in our hypothetical example is to the left or to the right of a car that then is rotating in space. Why is that important? What would it allow us to do?

KEANE: Absolutely. And I have a surprisingly simple use case that this comes from, which is I like to use phones when I get confused.

I use cameras to help navigate me around the world. And so every once in a while, if I want to do route planning, I can take a photo and say, Hey, I wanna get to this fabric store on the other side of the street. And it says, okay, walk to the blue car. And I can’t see blue.

And I have to say, okay, is that to the left or the right of the giant truck that I hear? And if it tells me the wrong thing, I’m gonna head off in the wrong direction. So I need the systems to be able to report out with really specific and accurate information, the directionality of things, so that I can orient myself relative to the information that it’s building.

Because that AI system, when it tells me information, is helping me build a model of the world that I’m going to interact with. And so if it’s giving me unreliable information, then I’m gonna take actions that may have consequences.
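The left-or-right question Keane describes is easy to state geometrically; here is a small sketch using 2D map coordinates and the sign of a cross product. The coordinates and object names are made up for illustration.

```python
def side_of(observer, facing_point, target):
    """Return which side the target is on, relative to the observer's line of sight."""
    fx, fy = facing_point[0] - observer[0], facing_point[1] - observer[1]
    tx, ty = target[0] - observer[0], target[1] - observer[1]
    cross = fx * ty - fy * tx           # sign of the 2D cross product
    return "left" if cross > 0 else "right" if cross < 0 else "straight ahead"

me = (0.0, 0.0)
truck = (0.0, 10.0)       # the truck I can hear, straight ahead of me
blue_car = (-3.0, 10.0)   # hypothetical position of the blue car
print(side_of(me, truck, blue_car))    # left
```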

CHAKRABARTI: Okay. So that is really interesting because then that brings us to one of the questions of, I don’t know if I’d call it the limitations of AI, but maybe an inherent challenge in the technology itself.

For example, we were looking around for what other people within various communities were talking about in terms of AI and assistive technology, and there’s a company out there, which I’m sure you know about, called Fable. They’re an accessibility testing platform developed by people with disabilities.

And someone from that company had said that a major challenge with AI is that artificially intelligent systems, in their words, have no concept whatsoever of accessibility, of disability, or of needs that differ. And I raise that because, in our hypothetical use case, the system might spit out to you: go to the left of the blue car.

But blue isn’t useful information to you. Is that a description of one of the inherent challenges in how AI systems themselves process information about the world?

KEANE: Okay, I’m gonna do a two-pronged thing. One is to give a mild shout out to some of the mainstream AI companies that are working to ensure that their AIs are actually situationally aware and able to compensate for people’s varying needs. I’ve done tests with this video mode of ChatGPT. If I say, help me get out of this room, sometimes it’ll say, walk towards the exit sign. And I’ll have to say, oh yeah, sorry, you’ve forgotten that I’m blind.

Can you please give me non-visual cues? And it’ll say, oh yes, of course. Walk over to your left, you’re going to hit a wall and then follow the wall, and you will find a doorway that is the exit door. So these systems are actually quite capable of learning how to differentiate different needs. They need a little bit of context sometimes, but they’re technologically capable and are already exhibiting capabilities of adaptability and personalization.
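A hypothetical sketch of the pattern Keane describes, in which the user’s access needs travel with every request so the model answers with non-visual cues. The message format mirrors common chat-style APIs, but no specific vendor’s API is assumed.

```python
def build_messages(user_profile, question):
    """Attach accessibility context to a request so answers avoid visual cues."""
    system_prompt = (
        "You are assisting a user who is blind. Do not rely on visual cues such as "
        "colors or signs; describe routes using walls, sounds, distances, and "
        f"touchable landmarks. Additional context: {user_profile}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

messages = build_messages("Uses a white cane; prefers clock-face directions.",
                          "Help me find the exit of this room.")
print(messages[0]["content"])
```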

So just to give a shout out to the main companies that are actually adding that in and putting a lot of effort into doing so. And then the second bit, sorry, can you remind me of the question? I got a little bit distracted by wanting to defend the AI. Oh, capabilities. So we have to choose what capabilities our systems have, and we have to benchmark them and ensure that they’re prioritized.

If you’re building a massive technology that should be able to do, you know, anything, it should answer any question about any photo that has ever been taken, which at this point, as we’re wearing cameras around, is going to be essentially every possible photo you could imagine on Earth.

Every possible question about every possible photo from every vantage point on Earth. This is a massive piece of technology. If you were to actually try to write out every one of those questions and make sure that your system could answer every one of those questions, you would never get done running the test.

You would never get done training the system. So this is one of the promises of artificial intelligence, is that you can have a subset of tasks that the system is capable of, that you benchmark it on, and by some level of greater than the sum of its parts can start to do unexpected stuff.

And so that is the intelligence in my opinion of these systems, is you don’t have to tell it every single thing it needs to do. You need to tell it, if you do these, and maybe it’s a lot, if you do these 37 million things, then it turns out that you can actually do 475 billion things. The orders of magnitude of these things can be huge, but we still have to pick a really strategic subset to ensure that it can generalize out and do many unpredictable, fantastic things.

CHAKRABARTI: Okay, so you know what, this really takes us straight into what we should embrace with, sorry, I can’t, I’m having trouble putting sentences together today. Help me, someone needs to give me a technology to do my job better, but this takes us straight to some of the ethical concerns that come with all advances in AI technology.

And I don’t think assistive technology is at all immune to this. Because, for example, you were talking about how we don’t need to solve every problem in order for a tool to be useful. But at the same time, with AI, depending on the inputs put into it, problems can be not just solved more quickly, they can also be scaled up almost instantaneously.

Let me give you just a real-time example that’s going on this week, not in the assistive space, just in the general AI space over on Twitter/X. Elon Musk didn’t like what his AI, Grok, was doing, and so he had it look at different information, and just this week, all of a sudden, Grok is spitting out wildly antisemitic stuff. It’s even calling itself ‘MechaHitler.’

And so that was an almost instantaneous tone and content change in a large language model that’s reaching billions of people based on a couple of tweaks that one guy wanted to make. And that might sound like an outlandish example, Kyle, but I think it actually is apropos because in the assistive space, we ought to be a million times more sensitive to potential errors like that. How do we prevent that from happening? Bad things getting bad fast.

KEANE: Yeah. And it can be even more high stakes. I have to make a decision whether I’m going to develop my own human capabilities or whether I’m going to rely on technology.

One of my best friends is Lindsay Lina, who absolutely helped me adapt to vision loss and to very functionally transition into blindness. And we used to have this great game, which was called bats versus bots. It would be the moment where we arrive at a new thing that I need to achieve.

And I was so technologically reliant on these systems and she would always kind of very playfully in an incredibly loving way, say, alright, how about we try bats versus bots? I’m just gonna go do it as a human and let’s see who gets it done first. And she would frequently win. And every once in a while I would be able to win the game because my technology, my bot did something really incredible and she’d say, okay, yeah, that was pretty cool.

And then lo and behold, the next time we play the game in that same scenario, my bot wouldn’t do the same thing, and then it becomes this terrifying moment of now I’m not only worrying about capability, but I’m depending on this thing as I’m trying to develop my approach and ability to move into the world.

And if I’m reliant on a technology and it’s not actually going to be stably held as a priority inside of the company whose technology I’m relying on, they can take that feature away at any moment, right? That’s not just an inconvenience. Sometimes that can be life or death, if I’m out on the road and somebody suddenly switches off a capability that I’ve used to travel the world. And I do travel quite a bit, and I use lots of technology to do it.

The reason I have so much technology is because I have backups upon backups of different types of adaptations that will allow me to get out of any given scenario because I don’t feel I can rely on anything except my own training as a human and knowing that I can recover from different scenarios.

Another incredible friend who’s helped me along the way is Daniel Kish, who teaches echolocation. He’s taught me that discover and recover is key, and to make sure that you have contingency plans. So it is a really life-or-death situation, and it becomes really high priority if you think about the children who need to be trained up.

And I go through this really intensely with collaborators. I’m incredibly blessed to be working with a lot of local schools for the blind, and it’s because their teachers know that I will not do something until I trust that the system is gonna be reliable for the rest of that child’s life. Because that’s a huge ask of a teacher: to set a child down a path using something that they might not be able to rely on in one year, 10 years, 20 years.

It’s hard to plan out some of those timelines.

CHAKRABARTI: Yeah, no, I really appreciate that example, because these are exactly the issues that we all need to be grappling with regarding AI. And as you said there, they’re life-and-death issues when it comes to accessibility. The child example fascinates me, because I also think similar things apply on the other end of life, right?

Regarding how assistive technologies can help elderly people. And to that end, just listen along with me to Tenzin Wangmo, who’s a senior researcher at the Institute for Biomedical Ethics at the University of Basel. Her work focuses on gerontology and tech, and she says that data collection is actually one of her largest concerns when it comes to AI and assistive tech.

Things like monitoring programs that might collect data in order to keep someone safe, but that data might be shared in ways that users don’t necessarily understand.

TENZIN WANGMO: Do I want to share the data? Who do I want to share the data? My privacy is in many ways not respected. My children may want my information, but do they need my information?

Do I want to share it and how much I want to share it?

CHAKRABARTI: For example, what about devices aimed at helping people with mobility? And here’s a hypothetical that she gave us. One that’s based on actual concerns of people she’s interviewed for her research.

WANGMO: A cane, as long as you don’t put AI in it, and I haven’t seen a cane with an AI, but I’m sure somebody would have it in the future.

A cane helps you with walking, right? But then if you have a cane that has AI. It would tell your child or caregiver where you are walking, who you are seeing, and all the other information, which an older person may or may not want to share, right? May feel embarrassed to share, or may find it too intrusive.

CHAKRABARTI: So this raises the issue of privacy, which comes along with any technology and especially AI, because she says if your device has any kind of AI component to it, you are never truly alone.

WANGMO: Would an older person who probably may have dementia, would they be able to differentiate that it is a robot and not a human person, or it is not my child, but if we personalize the robot to talk like the child.

So we are looking into all of these issues of can we personalize the robot? But then I ask questions in my research to all the person saying, okay, imagine if we have a robot that looks like a human robot, humanoid robot, and maybe speaks to you like your child or anybody … your friend or somebody, how would you feel about it?

CHAKRABARTI: Now many of these questions obviously do not have clear answers, and sometimes the needs of people with disabilities may not align perfectly with the supervisory needs of their caregivers. And data collection is in fact necessary for assistive technology to work, but she’s concerned that users still lack control over what happens to that data.

WANGMO: I’m also thinking these ethical issues will change in the future. What we find important may not be important later for all generations. At the same time, each context would have their own specific issues that comes out, that we probably would not know at the moment. Because all these technologies are new, they are becoming more intelligent, and they might become more sophisticated that we just don’t know how to tackle all these issues.

CHAKRABARTI: So that’s Tenzin Wangmo, senior researcher at the Institute for Biomedical Ethics at the University of Basel. Kyle, we’ve only got a couple of minutes left and there are two questions I wanna ask you. First of all, I’d just love to hear your response to some of the ethical concerns that Tenzin is raising there.

KEANE: Yeah, they’re fundamental, essential. We need to grapple with them. If I am to say anything, it’s to add the extra context that privacy and social stigma become a real issue. If I’m walking around with a camera in order to help me do things in the world, I’m often butting up against other people’s sense of, and desire for, privacy.

And then there’s this really complicated situation where the thing that helps me is actually not only perhaps using my data or having issues with my own privacy, it’s actually infringing on others. And these are really big and important questions that we need to discuss. I don’t have any answers for them other than I’m really glad that they’re being raised and considered.

CHAKRABARTI: So then my last question is, and sadly we only have a minute: this also seems to be an opportunity for members of various disability communities to actually be at the forefront of coming up with ethical frameworks for the uses of these technologies. Don’t you think?

KEANE: Yes. Just absolutely 100%.

If I can do one little shout out, there is a company, which I will leave anonymous so as not to accidentally advertise, that is doing really ethical research practices. They offer people the service for free in exchange for having that data be part of an AI training algorithm.

And they’re doing that as an opt-in with the ability to opt out within 72 hours. So if you’re in fact giving your data over, you should have complete control over that. So I want to just shout out for people to push back on the research methodologies.
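To make the opt-in model concrete, here is a hypothetical sketch of the 72-hour withdrawal window Keane mentions; the data structures are the editor’s illustration, not the anonymous company’s actual system.

```python
from datetime import datetime, timedelta, timezone

OPT_OUT_WINDOW = timedelta(hours=72)

def can_opt_out(consented_at, now=None):
    """True if the contribution can still be withdrawn from the training set."""
    now = now or datetime.now(timezone.utc)
    return now - consented_at <= OPT_OUT_WINDOW

consented = datetime.now(timezone.utc) - timedelta(hours=10)
print(can_opt_out(consented))  # True: still inside the 72-hour window
```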

The first draft of this transcript was created by Descript, an AI transcription tool. An On Point producer then thoroughly reviewed, corrected, and reformatted the transcript before publication. The use of this AI tool creates the capacity to provide these transcripts.



Data centers for AI could require power equivalent to five Hoover Dams

Across the country, Americans are using the internet at every hour of every day. According to a 2024 Pew Research Poll, 96% of adults reported using the internet at least occasionally on a mobile device. That number has risen gradually since May 2000, when just 48% reported occasional use. With more people online, energy providers have begun preparing for a higher demand for electricity.

“The internet use was overstated, as it turns out, at least in the early going. And then it caught up, and we saw the consumptive use later,” Constellation President and CEO Joseph Dominguez said.

But when it comes to new artificial intelligence, Dominguez says widespread usage happened almost immediately and has expanded faster than the internet boom. White House A.I. and Crypto Czar David Sacks agrees.

“The adoption is faster than any previous technology. It’s faster than the internet, it’s faster than the iPhone. So, it’s being adopted very quickly,” Sacks said. “Still, roughly half the public hasn’t tried it yet.”

Fox News polling shows 57% of registered voters rarely or never use artificial intelligence, while 27% said they use the technology daily. Usage may be driven by opinion of the technology: those who see A.I. as bad for society were less familiar with it and said they use it rarely (77%), while those who consider A.I. a good thing use it more regularly (47%). Experts believe A.I. use will only increase.

“OpenAI’s ChatGPT, when they launched, was the fastest-growing adoption of any consumer technology product ever back in November 2022, but that’s a drop in the bucket as to what they have now,” said senior advisor Gregory Allen with the Wadhwani A.I. center at the Center for Strategic and International Studies.

To meet the increasing demand and continue advancing A.I. technology, data centers are providing a 24-hour connection.


[Graph: annual energy consumption. (Fox News)]

“Running all of these computational resources that modern A.I. needs requires an awful lot of electricity,” Allen said.

A.I. models are frequently retrained to remain relevant, software requires regular updates, and new data centers need large cooling systems to keep everything running. Allen says the largest A.I. algorithms will require between 1 and 5 gigawatts of electricity to operate.

“One gigawatt is about one Hoover Dam’s worth of electricity. So, imagine five Hoover Dams being used to just power one data center full of one company’s A.I.,” Allen said.
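As a back-of-the-envelope check on the scale Allen describes (assuming, purely for illustration, that a site draws its full load continuously):

```python
GIGAWATTS = 5            # upper end of Allen's 1-5 gigawatt range
HOURS_PER_YEAR = 8_760

annual_twh = GIGAWATTS * HOURS_PER_YEAR / 1_000   # GWh -> TWh
print(f"{annual_twh:.1f} TWh per year")           # about 43.8 TWh for one site
```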

The growing complexity and the need for updated infrastructure have put a strain on available resources.

“Data centers have become very large. So when you think about it, we need land that needs to be zoned. We need to get permits so that we can build these facilities, and we need to bring more electricity,” Microsoft President and Vice Chair Brad Smith said.

Data centers are often clustered in certain areas. According to the Northern Virginia Regional Commission, the area’s 250 facilities handle around 70% of global internet traffic. In areas with high concentration, tech companies can face delays in connecting to the grid. Overseas, some countries and localities have placed restrictions on how many data centers can be built. Stateside, Dominguez says President Donald Trump has taken some actions to help speed up some of the permitting processes.

“The executive orders are now cutting through a lot of the red tape, and effectively we’re not required to do things that we were required to in the past,” Dominguez said.


Facebook parent Meta Platforms will invest $800 million in a nearly 1-million-square-foot hyperscale data center in Kansas City, Missouri. (Meta/Kansas City Area Development Council)

Before a nuclear site is built, producers are required to obtain an early site permit that checks geology, site conditions and whether a new facility can be built. 

“It makes sense if you’ve never built a nuclear reactor in that place before. But in our case, we have existing reactors that have operated in these communities for decades,” Dominguez said. “Currently the NRC regulations require us to go through a laborious exercise that costs about $35 million a pop to verify what we already know and that is that nuclear could go there. As a result of the president’s executive orders, that’s no longer gonna be required.”

Once a nuclear site is up and running, future data centers could also plug in directly to the site. Electricity would be in constant supply.

“It runs like a freight train day or night, winter or summer, regardless of weather condition,” Dominguez said.

Nuclear plants operate at full capacity more than any other energy source, making them a reliable choice for tech companies.


[Photo: Steam rises from the Susquehanna Nuclear Power Plant in Salem Township, Pennsylvania. (Fox News)]

“Nuclear power is a good source of electricity for A.I. and many other things as well,” Smith said. “In the United States, we’ve gone many decades without adding new sources of nuclear power.”

U.S. reactors supply nearly 20% of the nation’s power. The 93 nuclear generators create more electricity annually than the more than 8,000 wind, solar and geothermal power plants combined. Dominguez said that 24/7 energy supply may never be necessary and having a mix of sources is important. Constellation also develops solar energy along with nuclear.

“We have to develop 20 times as much solar to get the same impact as one megawatt of nuclear energy,” Dominguez said.



E-research library with AI tools to assist lawyers

New Delhi: In an attempt to integrate legal work in courts with artificial intelligence, the Bar Council of Delhi (BCD) has opened a one-of-its-kind e-research library at the Rouse Avenue courts. Inaugurated on July 5 by law minister Kapil Mishra, the library has various software to assist lawyers in their legal work. With initial funding of Rs 20 lakh, BCD functionaries told TOI that they are also planning to expand the library so it can be accessed from anywhere.

Named after former BCD chairman BS Sherawat, the library boasts an integrated system, including the legal research platform SCC Online, the legal research online database Manupatra, and an AI platform, Lucio, along with several e-books on law, across 15 desktops.

Advocate Neeraj, president of the Central Delhi Bar Court Association, told TOI, “The vision behind this initiative is to help law practitioners in their research. Lawyers are the officers of the honourable court who assist the judicial officer to reach a verdict in cases. This library will help lawyers in their legal work. Keeping that in mind, considering a request by our association, BCD provided us with funds and resources.”

The library, which runs from 9:30 am to 5:30 pm, aims to develop a mechanism, with the help of evolving technology, to allow access from anywhere in the country. “We are thinking along those lines too. It will be good if a lawyer needs some research on some law point and can access the AI tools from anywhere; she will be able to upgrade herself immediately to assist the court and present her case more efficiently,” added Neeraj.

Staffed with one technical person and a superintendent, the facility will cost around Rs 1 lakh per month to remain functional.

With pendency in Delhi district courts now running over 15.3 lakh cases, AI tools can help law practitioners as well as the courts. Advocate Vikas Tripathi, vice-president of the Central Delhi Court Bar Association, said, “Imagine AI tools which can give you relevant references, cite related judgments, and even prepare a case if provided with proper inputs. The AI tools have immense potential.”

In July 2024, ‘Adalat AI’ was inaugurated in Delhi’s district courts. This AI-driven speech recognition software is designed to assist court stenographers in transcribing witness examinations and orders dictated by judges, with applications designed to streamline workflow. The tool automates many processes: a judicial officer logs in, presses a few buttons, and speaks their observations, which are automatically transcribed, including the legal language. The order is automatically prepared.

The then Delhi High Court Chief Justice, now Supreme Court Judge Manmohan, said, “The biggest problem I see judges facing is that there is a large demand for stenographers, but there’s not a large pool available. I think this app will solve that problem to a large extent. It will ensure that a large pool of stenographers will become available for other purposes.” At present, the application is being used in at least eight states, including Kerala, Karnataka, Andhra Pradesh, Delhi, Bihar, Odisha, Haryana and Punjab.
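As an illustration of the dictation workflow described above, here is a hypothetical sketch; transcribe_audio() stands in for whatever speech-to-text engine Adalat AI actually uses and is not a real API.

```python
def transcribe_audio(audio_bytes):
    """Placeholder for a legal-vocabulary-aware speech-to-text engine."""
    return "Heard learned counsel for both parties. Matter listed for 21 August."

def draft_order(case_number, audio_bytes):
    """Assemble a draft order from the judge's dictated observations."""
    dictation = transcribe_audio(audio_bytes)
    return f"Case No. {case_number}\n\nORDER\n\n{dictation}"

print(draft_order("CC/1234/2025", b""))
```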





Optimized Artificial Intelligence Responds to Search Preferences Survey

83% of survey respondents prefer AI search over traditional Googling. LLMO agency Optimized Artificial Intelligence calls it the “new default,” not a trend.

(PRUnderground) July 9th, 2025

A new survey reported by “Innovating with AI Magazine” confirms what forward-looking brands have already begun to suspect: 83% of users say they now prefer AI search tools like ChatGPT, Perplexity, and Claude over traditional Googling.(1) For Optimized Artificial Intelligence, a leading AI optimization agency founded by SEO veteran Damon Burton, this marks not a momentary shift but the dawn of a new default in digital behavior.

“This survey isn’t surprising. It’s validating,” said Burton, Founder of Optimized Artificial Intelligence and President of SEO National. “Consumers are clearly signaling that they no longer want to wade through pages of links. They want direct, synthesized answers, and they’re finding them through AI search platforms. That changes the entire playbook for SEO.”

The “Innovating with AI Magazine” report notes that ChatGPT now sees over 200 million weekly active users and that Google’s market share has dipped below 90% for the first time in nearly a decade. Tools like Microsoft’s Copilot, Claude by Anthropic, and Perplexity AI are redefining how information is retrieved and who gets cited.

Brands Can’t Rely on Legacy Search Alone

Optimized Artificial Intelligence has been at the forefront of large language model optimization (LLMO), a strategic evolution of SEO that prepares content not just for ranking on SERPs but for retrieval, citation, and trust in generative AI tools.

“The reality is, most businesses are still optimizing for a search engine that’s disappearing from user behavior,” said Burton. “Google isn’t dying, but it’s being re-prioritized. If your content isn’t LLM optimized by being structured, cited, and semantically relevant, you’re already losing opportunities.”

OAI’s proprietary approach to LLMO, also called generative engine optimization (GEO), includes:

  • Entity-first schema structuring (a minimal sketch follows this list)
  • Semantic content clustering for LLM retrieval
  • Platform-specific tuning for ChatGPT, Gemini, Claude, Copilot, Perplexity, and more
  • Reputation signal optimization to increase brand inclusion in AI-generated summaries
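As a minimal sketch of the first item, entity-first schema structuring might look like publishing schema.org JSON-LD that states unambiguous facts about the brand; the field values below are placeholders, not OAI’s actual markup.

```python
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    "sameAs": ["https://www.linkedin.com/company/example-brand"],
    "description": "A concise, factual description a model can quote directly.",
}

# Embed the output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(organization, indent=2))
```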

Why This Matters for the Future of Discovery

The “Innovating with AI Magazine” report also highlights challenges: hallucinations, misinformation, and a lack of third-party visibility. But Burton argues this is precisely why strategy matters now more than ever.

“Hallucinations are a technical challenge, but they’re also a signal. LLMs choose what they cite based on structure, clarity, and trust. If your brand isn’t showing up in AI-generated responses, it’s not because AI search is broken. It’s because your content isn’t optimized for how these models think.”

Call to Action for Forward-Thinking Brands

As Google cannibalizes its own SERPs in favor of AI Overviews and third-party visibility continues to shrink, Burton urges brands to adapt, and fast: “This is the end of traditional SEO as we knew it. But it’s the beginning of something better: precision-targeted, AI-friendly optimization that earns trust, not just traffic.”

To learn more about SEO for AI search engines and how to get found and cited across platforms like ChatGPT, Claude, Gemini, Perplexity, and Copilot, visit www.OptimizedArtificialIntelligence.com.

(1) https://innovatingwithai.com/is-ai-search-replacing-traditional-search/

About Optimized Artificial Intelligence

Optimized Artificial Intelligence offers tailored AI solutions designed to enhance business operations and drive growth. Their services include developing custom AI models, automating workflows, and providing data-driven insights to help businesses make informed decisions.



