

What Stanford Learned By Crowdsourcing AI Solutions for Students With Disabilities

What promise might generative artificial intelligence hold for improving life and increasing equity for students with disabilities?

That question inspired a symposium last year, hosted by the Stanford Accelerator for Learning, which brought together education researchers, technologists and students. It included a hackathon where teachers and students with disabilities joined AI innovators to develop product prototypes.

The ideas and products that came out of the symposium were summarized in a white paper released recently by Stanford. The concepts included how AI can help with early identification of learning disabilities and how products for students with disabilities can be co-designed alongside the young people who will be using them.

EdSurge sat down with Isabelle Hau, executive director of the Stanford Accelerator for Learning, to hear more. This interview has been edited for length and clarity.

EdSurge: I really liked this idea of designing for the edges, for students who are on the edges and whose needs may typically be overlooked.

Isabelle Hau: That’s also my favorite piece.

There is a long history of people with disabilities innovating, essentially at the margins, on the specific issues they face, and then those innovations benefiting everyone.

Text-to-speech is a clear one, but there are so many examples of this in our world at large. What we were hoping for with this event is that if we start thinking about people who have very special needs, the innovations that come out of it will also end up benefiting a lot more people than we could have ever imagined. So it is a really interesting idea here of leveraging this incredible technology, which allows for more precision and makes learner variability more visible in a way that could benefit everyone at some point.

Right, and I think I’ve heard that concept also in urban design. If you design for people who get around differently, maybe you’re designing for people who use electric wheelchairs or people who don’t have a car, all of the designs end up benefiting everybody who uses the roads.

Exactly. Angela Glover Blackwell coined the term “curb-cut effect”: if you have roads with curb cuts for people with wheelchairs, then it also benefits people who may have a cart or who may have a stroller. I love that term.

This idea of designing for every student without letting them be defined by their limitations, and for those solutions to ultimately be implemented in the real world, it seemed kind of daunting. Did this feel daunting at the time of the symposium, or among the groups when this was being discussed? Just from reading the report, I felt like, ‘Oh my gosh, this is such a high hill to climb.’ Did it ever feel that way during the collaboration?

I don’t remember the feeling of daunting. The feeling that I had was actually quite different. It was more like inspiration, gratitude for having an event where people felt seen and heard, and also people feeling like they were working on a big topic. You have this feeling of being part of the solution and the gratitude and empowerment that comes with it.

Everyone was asked to participate and contribute, and everyone had great contributions, coming at it from different perspectives or levels of expertise. For example, we had teachers who may not have been tech experts, and then we had tech experts who don’t have any classroom experience, but everyone contributed meaningfully with their own viewpoints.

From what I’ve reported on about serving students with disabilities, a lot of it has revolved around lack of resources and the question of, ‘How do we get those resources so that teachers can do their job better?’ The solution is more resources, but how to get those resources is never really quite solved. So that’s great to hear that people felt that energized and hopeful, and they were obviously coming up with solutions rather than my experience, which is writing about the deficits.

Exactly. I don’t want to sound too naive. They are aware, of course, of conversations about the existing system and its limitations — the fact that we have a system that has certain regulations, but then the funding is not always in place for the appropriate support.

We had a wonderful man named David Chalk, a man with dyslexia, who spoke about his horrific, horrific experience going through the education system throughout his life. He didn’t learn how to read until age 62.

He was speaking so vividly about how he was bullied in school and how the school system really didn’t work for his own needs. David is working on an AI tool that addresses some of those challenges. So you see what I mean? Certainly there was a lot more focus on thinking about the future and future solutions that could bring some hope and make a positive impact in many people’s lives, even though those solutions grew out of pretty miserable experiences with the education system.

Could you give an example of, if I was a student at a school that adopted these concepts of using AI to increase access for students with disabilities, a change that I might see in my day-to-day life as a result?

Let me take the example of David for a moment. So if young David were going through the education system, ideally with this vision that we laid out: David would have been identified with one of those assessment tools much, much, much earlier than age 62. Ideally closer to first grade or even pre-K.

There’s an entire category of innovators, including one from Stanford, working on extremely interesting assessment tools that support the early identification of dyslexia. And what it does for someone like David is, if you’re identified with dyslexia much earlier than age 62 — obviously this is a little extreme here in the case of David — you can then have specialized supports and avoid what a lot of kids and families are currently going through, which is situations where kids are notified much later and lose their self-esteem and confidence.

And what David was describing as bullying, I’ve heard it from many other cities where, when a child can’t read because they are dyslexic, it’s not because they’re not smart. They’re super smart. It is just that they need different special support. If you’re notified of those needs earlier, the child can then get to reading and develop amazing skills in a much faster way. And also all these social-emotional skills that come with building confidence and self-esteem can then be built alongside reading skills.

At Stanford, we are building not only the assessment — we call it the ROAR, the Rapid Online Assessment of Reading — we are also building right now another tool that we highlighted in the report called Kai. That’s a reading support tool. So not only the assessment, but also reading interventions in classrooms for children who are struggling more with learning how to read.

There’s a whole section in the report about AI and Individualized Education Programs for students with disabilities. Is AI’s role going to be more about automation? Is that the way that people are envisioning it, by helping educators more effectively develop the IEPs?

There were a lot of conversations because there are some clear applications of AI for IEPs. Let me just give you one specific example, actually the winner of the hackathon. Obviously this was a very early prototype in one day, but it was essentially providing a translation layer to families and parents on what the IEP actually meant.

We take for granted that when a parent receives the IEP, they understand it, but it is sometimes actually complicated for families to understand what the teacher or the school meant. So this tool was essentially adding some ways for families to understand what the IEP actually [contains], and it also added some multilingual translations and other things that AI is quite good at.

There was another person in the room who was working on another tool that I think goes beyond efficiency. It gets into effectiveness rather than efficiency, where a teacher who has one or multiple children with IEPs can then be supported through AI on different interventions that we may want to think about. It’s not meant to be prescriptive to teachers, but more supportive in providing different sets of recommendations. Let’s say you have a child with ADHD and a child with visual impairment. How do you address those different needs in a classroom? So different types of recommendations for teachers.

Because the diversity of learning differences almost by definition makes it very complicated for us humans, and for teachers in particular, to address those differences in the classroom, there may be ways that AI can also help make teaching practices more effective.

Reading about programs like Kai, which was developed by a Stanford professor to give personalized reading feedback to students with disabilities, there was a lot of mention in the report of AI analyzing student data. How is the way that these teams or these innovators are thinking about uses for AI, the data analysis of students, the reports that AI is able to generate — how is that different from how non-AI edtech tools have been generating reports and generating data up to this point?

There are multiple layers. One is that you potentially have access to a much wider range of information. I would caution on this, but this is the hope with some of those tools: that you have access to a much broader set of information that then helps you with more specific learning differences, similar to how data can help with a specific disease in health. So one hope is access to much larger datasets than edtech companies were able to leverage before.

The other difference between edtech and generative AI capabilities is that you then have generation, meaning the inferences you can make from big data, which can help us humans or make us better at different types of activities. Our view at Stanford is that we will never replace the humans, but we can help inform them. Let’s [say] a general ed teacher has one or multiple children with different learning differences for the first time; that teacher can actually get recommendations that are tailored to their platform [using AI].

So that’s very different from even the top-notch adaptive edtech tools that existed before generative AI, which were a lot more static as opposed to being really tailored to a particular context: not just giving you the information, but generating recommendations on how you could use it based on your very specific classroom, where you can say, ‘Isabel has visual impairment, and Catherine struggles with certain math concepts.’ It’s very specific. You could not do this before, even with adaptive technologies, which were more personalized tools.

I was very interested in the section on this idea of using AI for needs identification. You just mentioned using this ambient data to help identify disabilities earlier. And I wanted to bring up the idea of privacy.

Even just on my day-to-day usage of the internet, it feels like we’re always being tracked, there’s always some kind of monitoring going on.

How do these AI innovators balance all the possibilities that AI could bring, analyzing these large swaths of data that we didn’t have access to, versus privacy and maybe this feeling of always being watched and always being analyzed, especially with student data? Do you ever feel like you have to pull people back who are too excited and say, ‘Hey, think about the privacy of the students in this?’

These are huge, huge issues — this one on privacy, and then the other one is security. And then the other one is incorrect inferences, which could also potentially further minoritize specific populations.

Privacy and security are huge. I’m noticing with a lot of our school district partners that this is obviously top of mind and obviously it’s regulated, but the big issue right now is that those systems give everyone the feeling that it’s a private interaction with a machine. You are in front of a computer, phone or other device, interacting with a chatbot, and it has this really interesting sense that it’s a private, secure relationship, when in fact it’s not. It’s a highly public one unless the data are secured in some way.

I think that schools have been doing, over the past two years, an excellent job at training everyone, and I see it at Stanford, too. You have more and more secure environments for AI use, but I would say this is heightened, of course, for children with learning differences given the sensitivity about the information that may be shared. I think the number one concern here is privacy and security of those data.

One of the early concerns about the use of AI in education is the racial bias that AI tools can have because of the data they are trained on. And then of course, we know that students with disabilities or learning differences also face stigma. How do you think about preventing potential bias in AI from identifying, or maybe over-identifying, certain populations that are already overrepresented in learning disability identification?

[Bias] is an issue with learning differences that has been well documented by research, including by my very dear colleague Elizabeth Kozleski, who has done exceptional work on this; it is called disproportionality, meaning there are certain subgroups, especially racial and ethnic groups, that are overrepresented in the identification of learning differences. This is a critical [issue] in AI because AI takes historical data, the entire body of data that we have built over time, and in theory projects the future based on that historical data.

So given that these historical data have been demonstrated to have meaningful biases based on certain demographic characteristics, I think this is a really, really important question that you’re raising. I haven’t seen data on AI used with learning differences, on whether it is biased or not, but certainly we have done a lot of work at Stanford, including at least three or four [years] in education, showing that there are some meaningful biases in those existing systems.

I think this is an area where tech developers are actually eager to do better. It’s not like they want to have biases remain. So this is an area where research can actually be very helpful in improving practices of tech developers.

As you mentioned, there were people participating in the summit who do have learning differences. Do you think that’s important to curbing any biases that might exist?

That’s actually the entire benefit of the effort we led: the concept of co-designing with and for learners with learning differences, with lived experience. Huge. I saw it during the hackathon, where we had asked for volunteers from friends at Microsoft and Google and other big tech companies, and some of them shared that they had learning differences growing up. So that gives me hope that there are actually some people in those big tech companies who are also interested in working on these particular topics and making them better, not only for themselves but also for broader communities.

What do you think were some of the most critical ideas that came out of the report? What did you really feel impacted by?

Clearly the importance of co-design, which we already discussed. There’s one other theme that I think is really hopeful, and it’s connected to universal design for learning.

AI is evolving toward multimodality. What I mean by this is that you have more and more AI for video and audio in addition to text. That is one of the strong recommendations of the universal design for learning framework. For example, if you have a hearing or visual impairment or other types of learning differences, you need different modalities. So I actually think this is an area of great hope with these technologies. The fact that AI is inherently moving toward being multimodal could actually benefit more learners.

That falls right in line with the idea that differentiation, rather than one-size-fits-all, is what students need to succeed.

Exactly, and literally one of the core recommendations of the UDL framework is to have multimodal approaches, and this technology does that. I don’t want to sound like a Pollyanna, either; there are some risks, as we discussed. But this is one of the areas where AI is squarely aligned with the UDL framework and where we could not do this without the technology. It could actually bring some new possibilities to a broader set of learners, which is very hopeful.





5-Week AI Mentorship for Startups in SF



OpenAI has unveiled a new initiative aimed at nurturing the next generation of artificial intelligence innovators, marking a strategic push into talent development amid intensifying competition in the AI sector. The program, dubbed OpenAI Grove, targets early-stage entrepreneurs who are either pre-idea or in the nascent phases of building AI-focused companies. According to details shared in a recent announcement, the five-week mentorship scheme will be hosted at OpenAI’s San Francisco headquarters, providing participants with hands-on guidance from industry experts and access to cutting-edge tools.

The program’s structure emphasizes practical support, including technical assistance, community building, and early exposure to unreleased OpenAI models. As reported by The Indian Express, participants will have opportunities to interact with new AI tools before their public release, fostering an environment where budding founders can experiment and iterate rapidly. This comes at a time when AI startups are proliferating, with OpenAI positioning itself as a hub for innovation rather than just a technology provider.

A Strategic Move in AI Talent Cultivation

OpenAI’s launch of Grove reflects a broader effort to secure its influence in the rapidly evolving AI ecosystem, where retaining and attracting top talent is crucial. By offering mentorship to pre-seed founders, the company aims to create a pipeline of AI-driven ventures that could potentially integrate with or complement its own technologies. Recent posts on X highlight enthusiasm from the tech community, with users noting the program’s potential to accelerate startup growth through exclusive access to OpenAI’s resources.

Industry observers see this as OpenAI’s response to competitors like Anthropic and xAI, which have also been aggressive in talent acquisition. The first cohort, limited to about 15 participants, is set to run from October 20 to November 21, 2025, with applications closing on September 24. As detailed in coverage from CNBC, the initiative includes in-person sessions focused on co-building prototypes with OpenAI researchers, underscoring a hands-on approach that differentiates it from traditional accelerator programs.

Benefits and Broader Implications for Startups

Participants in Grove stand to gain more than just technical know-how; the program promises a robust network of peers and mentors, which could be invaluable for fundraising and scaling. Early access to unreleased models, as mentioned in reports from NewsBytes, allows founders to test ideas with state-of-the-art AI capabilities, potentially giving them a competitive edge in a market where speed to innovation is key.

This mentorship model aligns with OpenAI’s history of fostering external ecosystems, similar to its past investments in startups through funds like the OpenAI Startup Fund. However, Grove appears more focused on individual founders, particularly those without formal teams or funding, addressing a gap in the startup support system. Insights from The Daily Jagran emphasize how the program could help participants raise capital or refine their business models, drawing on expert guidance to navigate challenges like ethical AI development and market fit.

Challenges and Future Outlook

While the program has generated buzz, questions remain about its scalability and inclusivity. With only 15 spots in the initial cohort, selection will be highly competitive, potentially favoring founders with existing connections in the tech world. Recent news on X suggests mixed sentiments, with some praising the initiative for democratizing AI access, while others worry it might reinforce Silicon Valley’s dominance in the field.

Looking ahead, OpenAI plans to run Grove multiple times a year, potentially expanding its reach globally. As covered in TechStory, this could evolve into a cornerstone of OpenAI’s strategy to build a supportive community around its technologies, much like how Y Combinator has shaped the broader startup world. For industry insiders, Grove represents not just a mentorship opportunity but a signal of OpenAI’s commitment to shaping the future of AI entrepreneurship, ensuring that innovative ideas flourish under its umbrella.

Potential Impact on the AI Innovation Ecosystem

The introduction of Grove could catalyze a wave of AI startups, particularly in areas like generative models and ethical AI applications, by providing resources that lower barriers to entry. Founders selected for the program will benefit from personalized feedback loops, helping them avoid common pitfalls in AI development such as data biases or scalability issues.

Moreover, this initiative underscores OpenAI’s evolution from a research lab to a multifaceted player in the tech industry. By mentoring early-stage talent, the company may indirectly fuel advancements that enhance its own ecosystem, creating a virtuous cycle of innovation. As the AI sector continues to mature, programs like Grove could play a pivotal role in distributing expertise more evenly, empowering a diverse array of entrepreneurs to contribute to technological progress.





San Antonio Spa Unveils First AI-Powered Robot Massager



In the heart of San Antonio, a quiet revolution in wellness technology is unfolding at Float Wellness Spa on Fredericksburg Road. The spa has become the first in the city to introduce the Aescape AI-powered robot massager, a device that promises to blend cutting-edge artificial intelligence with the ancient art of massage therapy. Customers lie face-down on a specialized table, where robotic arms equipped with sensors scan their bodies to deliver personalized treatments, adjusting pressure and techniques in real time based on individual anatomy and preferences.

This innovation arrives amid a broader surge in AI applications within the health and wellness sector, where automation is increasingly tackling labor shortages and consistency issues in human-delivered services. According to a recent feature by Texas Public Radio, the Aescape system at Float Wellness Spa uses advanced algorithms to map muscle tension and provide targeted relief, marking a significant step for Texas in adopting such tech.

Technological Backbone and Operational Mechanics

At its core, the Aescape robot employs a combination of 3D body scanning, machine learning, and haptic feedback to simulate professional massage techniques. Users select from various programs via a touchscreen interface, and the system adapts on the fly, much like a therapist responding to subtle cues. This isn’t mere gimmickry; it’s backed by years of development, with the company raising substantial funds to refine its precision.

According to a March 2025 report from Bloomberg, Aescape secured $83 million in funding from investors including Valor Equity Partners and NBA star Kevin Love, underscoring investor confidence in robotic wellness solutions. The technology draws from earlier prototypes showcased at events like CES 2024, where similar AI-driven massage robots demonstrated personalized adaptations to user needs.

Market Expansion and Local Adoption in San Antonio

The rollout in San Antonio follows successful debuts in cities like Los Angeles, as detailed in a December 2024 piece by the Los Angeles Times, which described the experience as precise yet impersonal. At Float Wellness Spa, appointments are now bookable, with sessions priced competitively to attract a mix of tech enthusiasts and those seeking convenient relief from daily stresses.

Posts on X, formerly Twitter, reflect growing public intrigue, with users like tech influencer Mario Nawfal highlighting the robot’s eight axes of motion for deep-tissue work without the awkwardness of human interaction. This sentiment aligns with San Antonio’s burgeoning tech scene, where AI innovations are intersecting with local industries, as noted in recent updates from the San Antonio Express-News.

User Experiences and Industry Implications

Early adopters in San Antonio report a mix of awe and adjustment. One reviewer in a Popular Science article from March 2024 praised the Aescape for its customized convenience, likening it to “the world’s most advanced massage” powered by AI that learns from each session. However, some note the absence of human warmth, a point echoed in an Audacy video report from August 2025, which captured the robot’s debut turning heads in the city.

For industry insiders, this represents a pivot toward scalable wellness tech. With labor costs rising and therapist shortages persistent, robots like Aescape could redefine spa economics, potentially expanding to chains like Equinox. Yet, challenges remain, including regulatory hurdles for AI in healthcare-adjacent fields and ensuring data privacy for body scans.

Future Prospects and Competitive Dynamics

Looking ahead, Aescape’s expansion signals broader trends in robotic automation. A Yahoo Finance piece from August 2025 introduced a competing system, RoboSculptor, which also leverages AI for massage, hinting at an emerging market rivalry. In San Antonio, this could spur further innovation, with local startups like those covered in Nucamp’s tech news roundup exploring AI tools in customer service and beyond.

As AI integrates deeper into personal care, ethical questions arise—will robots supplant human jobs, or augment them? For now, Float Wellness Spa’s offering provides a tangible glimpse into this future, blending Silicon Valley ingenuity with Texas hospitality. Industry watchers will be keen to monitor adoption rates, as success here could accelerate nationwide rollout, transforming how we unwind in an increasingly automated world.





California AI Regulation Bill SB 1047 Stalls Amid Tech Lobby Pushback



California’s ambitious push to regulate artificial intelligence has hit another snag, with key legislation stalled amid fierce debates over innovation, safety, and economic impact. Lawmakers had high hopes for 2025, building on previous efforts like the vetoed SB 1047, but recent developments suggest a familiar pattern of delay. According to a report from CalMatters, the state’s proposed AI safety bill, SB 53, which aimed to impose strict testing and oversight on advanced models, remains in limbo as Governor Gavin Newsom weighs his options. This comes after a year of intense lobbying from tech giants and startups alike, highlighting the tension between fostering cutting-edge tech and mitigating potential risks.

The bill’s provisions, including mandatory safety protocols for models trained with massive computational power, have drawn both praise and criticism. Proponents argue it could prevent catastrophic misuse, such as AI-driven cyberattacks or misinformation campaigns, while opponents warn it might stifle California’s tech dominance. Newsom’s previous veto of similar measures cited concerns over overregulation, a sentiment echoed in recent industry feedback.

The Political Tug-of-War Intensifying in Sacramento

As the legislative session nears its end, insiders point to behind-the-scenes negotiations that have bogged down progress. Sources from White & Case LLP note that while some AI bills, like the Generative AI Accountability Act, were signed into law effective January 1, 2025, broader safety frameworks face resistance. This act requires state agencies to conduct risk analyses and ensure ethical AI use, but it stops short of comprehensive private-sector mandates. Meanwhile, posts on X from tech figures like Palmer Luckey express relief over potential federal pre-emption, suggesting that national guidelines might override state efforts to avoid a patchwork of rules.

The delay’s roots trace back to economic pressures. California’s tech sector, home to Silicon Valley heavyweights, contributes massively to the state’s GDP. An Inside Global Tech analysis reveals that over a dozen AI bills advanced this session, covering consumer protections and chatbot safeguards, yet core safety bills like SB 53 are caught in the crossfire. Industry leaders argue that vague liability clauses could drive companies to relocate, with estimates from X discussions indicating potential job losses in the thousands.

Economic Ramifications and Industry Pushback

Compliance costs are a flashpoint. A study referenced in posts on X by Will Rinehart, using large language models to model expenses, projects that firms could face $2 million to $6 million in burdens over a decade for automated decision systems under bills like AB 1018. This has mobilized opposition from companies like Anthropic, which paradoxically endorsed some regulations but lobbied against overly burdensome ones, as reported by NBC News via X updates. Startups, in particular, fear being crushed under regulatory weight that Big Tech can absorb, with TechCrunch highlighting how SB 243’s chatbot rules could set precedents for accountability without derailing innovation.

Governor Newsom’s decision looms large, influenced by his national ambitions and the state’s budget woes. Recent web searches show a June 2025 expert report, The California Report on Frontier AI Policy, informing revisions to make the bill less “rigid,” per Al Mayadeen English. Yet, delays persist, with critics on X such as @amuse warning that California risks ceding AI leadership to China if regulations become too stringent.

Looking Ahead: Innovation vs. Safeguards

The holdup underscores a broader national debate. While California has enacted laws on deepfakes and AI transparency—such as AB 2013 requiring training data disclosure, as detailed by Mayer Brown—comprehensive AI governance remains elusive. Experts predict that without resolution by year’s end, federal intervention could preempt state actions, a scenario favored by some X commentators like Just Loki.

For industry insiders, this delay offers a reprieve but also uncertainty. Companies are already adapting, with some shifting operations to states like Texas for lighter oversight. As Pillsbury Law outlines, the 18 new AI laws effective in 2025 focus on sectors like healthcare and elections, yet the absence of overarching safety nets leaves gaps. Ultimately, California’s AI regulatory saga reflects the high stakes: balancing technological progress with societal protection in an era where AI’s potential—and perils—are only beginning to unfold.


