Tools & Platforms
Noninvasive brain technology allows control of robotic hands with thought
Noninvasive brain tech is transforming how people interact with robotic devices. Instead of relying on muscle movement, this technology allows a person to control a robotic hand by simply thinking about moving their fingers.
No surgery is required.
Instead, a set of sensors placed on the scalp detects brain signals, which are then sent to a computer for decoding. Because nothing is implanted, the approach is safe and accessible, and it opens new possibilities for people with motor impairments or those recovering from injuries.
A woman wearing non-invasive brain technology (Carnegie Mellon University)
How noninvasive brain tech turns thought into action
Researchers at Carnegie Mellon University have made significant progress with noninvasive brain technology. They use electroencephalography (EEG) to detect the brain’s electrical activity when someone thinks about moving a finger. Artificial intelligence, specifically deep learning algorithms, then decodes these signals and translates them into commands for a robotic hand. In their study, participants moved two or even three robotic fingers at once just by imagining the motion, and the system achieved over 80% accuracy for two-finger tasks and over 60% for three-finger tasks, all in real time.
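To make that pipeline concrete, here is a minimal sketch of how such a system might stream EEG windows into a decoder and turn the output into finger commands. The channel count, sampling rate, window length, finger labels, and thresholding scheme are illustrative assumptions, not details from the CMU study.

```python
import numpy as np

# Illustrative constants -- the actual channel count, sampling rate, and
# window length used in the study are not specified here.
N_CHANNELS = 64        # EEG electrodes on the scalp
SAMPLE_RATE = 250      # samples per second
WINDOW_SEC = 1.0       # length of each decoding window, in seconds
FINGER_LABELS = ("thumb", "index", "middle")  # hypothetical class names

def window_eeg(raw: np.ndarray, step_sec: float = 0.1):
    """Slice a continuous (channels x samples) recording into overlapping
    windows that the decoder classifies one at a time, enabling real-time use."""
    win = int(WINDOW_SEC * SAMPLE_RATE)
    step = int(step_sec * SAMPLE_RATE)
    for start in range(0, raw.shape[1] - win + 1, step):
        yield raw[:, start:start + win]

def to_robot_command(finger_probs: dict, threshold: float = 0.5) -> list:
    """Map decoded intentions to robotic-finger commands: any finger whose
    probability clears the threshold is told to flex, so two or three fingers
    can move at once, as in the study's multi-finger tasks."""
    return [finger for finger, p in finger_probs.items() if p >= threshold]

def run_decoder(raw_eeg: np.ndarray, decoder) -> None:
    """Stream windows through a trained decoder (a stand-in for the deep
    learning model) and issue commands to the robotic hand."""
    for window in window_eeg(raw_eeg):
        probs = decoder(window)           # e.g., {"thumb": 0.1, "index": 0.8, ...}
        command = to_robot_command(probs)
        print("move fingers:", command)   # stand-in for sending to the hand
```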
Meeting the challenge of finger-level control
Achieving separate movement for each robotic finger is a real challenge. The brain areas responsible for finger movement are small. Their signals often overlap, which makes it hard to distinguish between them. However, advances in noninvasive brain technology and deep learning have made it possible to pick up on these subtle differences.
The research team used a neural network called EEGNet, fine-tuned for each participant. This allowed for smooth, natural control of the robotic fingers, with movements that closely matched how a real hand works.
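EEGNet itself is a published, compact convolutional architecture for EEG classification. A rough PyTorch sketch of what an EEGNet-style model and the per-participant fine-tuning step could look like follows; the layer sizes, class definitions, and training settings are assumptions for illustration, not the team’s actual configuration.

```python
import torch
import torch.nn as nn

class CompactEEGNet(nn.Module):
    """A stripped-down, EEGNet-style CNN: a temporal convolution, a depthwise
    spatial convolution across electrodes, then a small classifier.
    Layer sizes here are illustrative, not the study's configuration."""
    def __init__(self, n_channels=64, n_samples=250, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1), groups=8, bias=False),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.25),
        )
        with torch.no_grad():
            n_feat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        # Classes could represent single fingers or finger combinations.
        self.classifier = nn.Linear(n_feat, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))

def finetune_for_participant(model, loader, epochs=10, lr=1e-3):
    """Adapt a pretrained decoder to one participant's own EEG recordings --
    the per-user calibration step described in the article."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for eeg, labels in loader:   # eeg: (batch, 1, channels, samples)
            opt.zero_grad()
            loss = loss_fn(model(eeg), labels)
            loss.backward()
            opt.step()
    return model
```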
A robotic finger being controlled by non-invasive brain technology (Kurt “CyberGuy” Knutsson)
Why noninvasive brain tech matters for everyday life
For people with limited hand function, even small improvements can make a huge difference. Noninvasive brain technology eliminates the need for surgery because the system is external and easy to use. In addition, this technology provides natural and intuitive control. It enables a person to move a robotic hand by simply thinking about the corresponding finger movements.
The accessibility of noninvasive brain technology means it can be used in clinics, in homes, and by a wide range of people. For example, it enables participation in everyday tasks, such as typing or picking up small objects, that might otherwise be difficult or impossible. This approach can benefit stroke survivors, people with spinal cord injuries, and anyone interested in enhancing their abilities.
What’s next for noninvasive brain tech?
While the progress is exciting, there are still challenges ahead. Noninvasive brain technology needs to improve even further at filtering out noise and adapting to individual differences. However, with ongoing advances in deep learning and sensor technology, these systems are becoming more reliable and easier to use. Researchers are already working to expand the technology for more complex tasks.
As a result, assistive robotics could soon become a part of more homes and workplaces.
Illustration of how the noninvasive brain technology works (Carnegie Mellon University)
Kurt’s key takeaways
Noninvasive brain technology is opening up possibilities that once seemed out of reach. The idea of moving a robotic hand just by thinking about it could make daily life easier and more independent for many people. As researchers continue to improve these systems, it will be interesting to see how this technology shapes the way we interact with the world around us.
If you had the chance to control a robotic hand with your thoughts, what would you want to try first? Let us know by writing us at Cyberguy.com/Contact
Copyright 2025 CyberGuy.com. All rights reserved.
Tools & Platforms
U.S. State Courts Cautiously Approach AI Despite Efficiency Promises and Staffing Crises
A new survey of state courts reveals a striking paradox in the American judicial system: Even though courts face severe staffing shortages and operational strain, they remain reluctant to adopt generative artificial intelligence technologies that could provide significant relief.
The Thomson Reuters Institute’s third annual survey of state courts, conducted in partnership with the National Center for State Courts AI Policy Consortium, found that 68% of courts reported staff shortages and 48% of court professionals say they do not have enough time to get their work done.
Despite these pressures, however, just 17% say their court is using gen AI today.
Courts Under Strain
The survey, which gathered responses from 443 state, county, and municipal court judges and professionals between March and April 2025, paints a picture of courts under significant strain.
Seventy-one percent of state courts and 56% of county/municipal courts experienced staff shortages in the past year, with 61% anticipating continued shortages in the next 12 months.
This staffing crisis translates into demanding work schedules, with 53% of respondents saying they work between 40 and 45 hours a week on average, and an additional 38% working over 46 hours a week.
Perhaps most telling, only half of court professionals said they had enough time to get their work done.
These workload pressures are only getting worse. Nearly half of respondents (45%) reported an increase in their caseloads compared to last year and 39% said the issues they are dealing with have become more complex.
Meanwhile, 24% of respondents reported increases in court delays, compared to 18% who reported decreases.
AI Adoption Remains Limited
Against this backdrop of operational strain, the survey reveals a cautious approach to AI adoption that seems at odds with the technology’s potential benefits.
Currently, only 17% of respondents said their court was using gen AI, and an additional 17% said their court was planning to adopt gen AI technology over the next year.
This slow adoption occurs despite widespread recognition of AI’s transformative potential: 55% of respondents rated AI and gen AI as having a transformational or high impact on courts over the next five years, making it the highest-ranking trend in the survey.
Court professionals clearly see the efficiency benefits AI could provide. Respondents estimate that gen AI will save them an average of nearly three hours a week within the next year, growing to nearly six hours a week within three years and 8.8 hours a week within five years.
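For a concrete sense of scale, the sketch below turns those weekly projections into annual staff-hours for a hypothetical court of 200 professionals; the staffing figure and the 50-week working year are assumptions, not survey data.

```python
# A back-of-the-envelope reading of the survey's projections, assuming a
# hypothetical court with 200 professionals and a 50-week working year.
STAFF = 200
WEEKS_PER_YEAR = 50

projected_hours_per_week = {
    "next year": 3.0,    # "nearly three hours a week"
    "in 3 years": 6.0,   # "nearly six hours each week"
    "in 5 years": 8.8,   # survey figure
}

for horizon, hours in projected_hours_per_week.items():
    annual_hours = hours * WEEKS_PER_YEAR * STAFF
    # At 8.8 hours/week across 200 staff, that is roughly 88,000 hours a year --
    # on the order of 44 additional full-time employees' worth of capacity.
    print(f"{horizon}: ~{annual_hours:,.0f} staff-hours saved per year")
```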
Barriers to AI Implementation
So what is keeping courts back? The survey identifies several factors contributing to courts’ cautious AI adoption.
Seventy percent of respondents said their courts are currently not allowing employees to use AI-based tools for court business, and 75% of respondents said their court has not yet provided any AI training.
There are also varied but significant concerns about AI implementation.
More than a third (35%) are worried that AI will lead to an overreliance on technology rather than skill, while a quarter have concerns about malicious use of AI, such as counterfeit orders and evidence. Interestingly, only 9% were worried about widespread job loss resulting from AI.
Budget constraints may also play a role in limiting technology adoption. The survey found that 22% of respondents said their budget for the next year increased, while 30% said budgets decreased and 30% said budgets stayed the same.
Current Technology Landscape
While AI adoption lags, courts have made progress implementing other technologies. Most courts have adopted key technologies, including case management (86%), e-filing (85%), calendar management (83%), and document management (82%).
Video conferencing has reached near-universal adoption at 88%.
However, some technology gaps remain. Beyond gen AI, the most common technologies set to be adopted next are legal self-help portals, online dispute resolution and document automation.
Virtual Hearings Widely Adopted
The survey shows significant adoption of virtual hearings, with 80% of respondents saying their court conducts or participates in virtual hearings.
In more than 40% of all jurisdictions, virtual hearings are available for first/initial appearances, preliminary/status hearings and/or motion hearings.
Virtual hearings appear to improve court efficiency in some areas: 58% of respondents reported that virtual courts decrease failure-to-appear rates, and 84% reported that virtual courts increase access to justice.
However, the digital divide presents ongoing challenges. Nearly one in five respondents (19%) feel that the majority of litigants are experiencing decreased access to justice because they lack strong technology skills.
Limited digital literacy and a lack of technical support resources were ranked as the top challenges for litigants involved in virtual hearings.
Cybersecurity Concerns
As courts increasingly rely on technology, cybersecurity emerges as a critical concern. The survey reveals significant variation in confidence levels regarding IT security.
While 57% of respondents feel highly confident in their IT systems’ security, an alarming 22% of respondents say they are “not at all confident” in the security of their IT systems.
Generational Workforce Changes
The survey identifies generational workforce shifts as another major factor affecting courts. Baby Boomers and Gen Xers exiting the workplace, along with Gen Zers entering the workforce and Millennials moving into leadership positions, are trends frequently ranked as transformational or high impact.
These demographic changes have important implications for technology adoption. As the report notes, Gen Zers are digital natives who are very comfortable using technology and may find it easier to manage automated workflows, while they may be resistant to jobs that still rely heavily on manual processes.
Reducing Operational Errors
The survey provides insights about task efficiency and error rates in court operations.
Entering and updating data in court management systems (CMS) was rated as both the most error-prone task, by a wide margin, and the second-most inefficient task. This finding suggests that greater use of automation in CMS entry could yield major improvements in both efficiency and error rates.
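The survey does not describe any specific tool, but a small, hypothetical example illustrates the kind of automated validation that could catch data-entry mistakes before they reach the record; the field names and formats below are invented for illustration.

```python
import re
from datetime import datetime

# A minimal sketch of automated validation for CMS entries -- the kind of
# check that catches typos before they become record errors.
REQUIRED_FIELDS = ("case_number", "filing_date", "case_type")
CASE_NUMBER_PATTERN = re.compile(r"^\d{4}-[A-Z]{2}-\d{6}$")  # e.g., 2025-CV-000123

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems with a proposed CMS record; an empty
    list means the entry is safe to submit automatically."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not entry.get(field):
            problems.append(f"missing required field: {field}")
    if entry.get("case_number") and not CASE_NUMBER_PATTERN.match(entry["case_number"]):
        problems.append("case_number does not match the expected format")
    if entry.get("filing_date"):
        try:
            datetime.strptime(entry["filing_date"], "%Y-%m-%d")
        except ValueError:
            problems.append("filing_date is not a valid YYYY-MM-DD date")
    return problems

# Example: a typo in the date is flagged before the record is saved.
print(validate_entry({"case_number": "2025-CV-000123",
                      "filing_date": "2025-13-01",
                      "case_type": "civil"}))
```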
The survey also found correlations between different operational challenges. Tasks that staff find more stressful also tend to be the ones that inconvenience court users, suggesting that addressing workflow inefficiencies could improve both staff satisfaction and the user experience at the same time.
A Critical Juncture for Courts
The survey suggests that courts face a strategic choice: embrace AI technologies that could significantly alleviate operational pressures, or risk falling further behind as staffing challenges intensify and workloads continue to grow.
“We’re facing challenges — staff don’t think they have enough time to meet their demands, and they’re working more hours to get the work done, and that’s leading to burnout,” said David Slayton, executive officer and clerk of court for the Superior Court of Los Angeles County.
“It’s incumbent on court leaders to really think about how technology can help us with this problem.”
Mike Abbott, head of Thomson Reuters Institute, underscored the urgency of the situation.
“Courts are facing an unprecedented convergence of change, driven by generative AI and generational shifts in their workforce, at the same time as they continue to deal with staff shortages, backlogs and delays,” Abbott said.
“AI literacy can empower the courts to understand both the risks and the opportunities associated with the technology, enabling them to identify the best use cases which help them focus on higher value work.”
Tools & Platforms
State AI leaders gather at Princeton to consider how the technology can improve public services
Much of the news about artificial intelligence has focused on how it will change the private sector. But all around the country, public officials are experimenting with how AI can also transform the way governments provide essential services to citizens while avoiding pitfalls.
State AI leaders, including Gov. Phil Murphy of New Jersey, gathered at Princeton University in June to discuss how AI offers ways for government to be more efficient, effective, and transparent, especially at a time when budgets are strapped and economic uncertainty has slowed down hiring.
Hosted by Princeton’s Center for Information Technology (CITP), the NJ AI Hub, the State of New Jersey, the National Governors Association, the Center for Public Sector AI, GovLab, and InnovateUS, the conference brought together more than 100 AI leaders from 25 states to share ideas and collaborate. The meeting was conducted under an agreement of confidentiality to allow participants to discuss progress and concerns openly. Quotations in this story are used by permission.
What emerged was enthusiasm about AI’s potential to reduce the time government employees spend on manual tasks and improve their ability to engage citizens, as well as concerns about how best to use public data to innovate and increase equity rather than undermine it.
The gathering is just one of the ways that CITP – which is a joint center of the Princeton School of Public and International Affairs and Princeton Engineering – is leading on AI. The center also holds policy precepts to engage policymakers in AI governance at the SPIA in DC Center, and several affiliated faculty teach courses on AI policy at Princeton SPIA.
At the conference, CITP Director Arvind Narayanan noted that attendees were focused on practical implementation of AI tools rather than the “polarizing conversations around AI that dominate the media.” He also explained why public-facing deployments of AI by state governments have been slower than internal ones.
“There’s a clear recognition of the need for thinking about public accountability and equity. At the same time, I think there’s also recognition of the potential for governments if we get this right,” said Narayanan, who is also a professor of computer science and co-author of “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.”
Speakers shared big and small ways that AI is improving government. Some noted saving an hour or two a week per employee by leveraging AI to help draft grant applications, assess legislation, or review procurement policies while ensuring oversight and accuracy. One city automated the summarization of council oral votes, a task that was previously completed by a city clerk, creating summaries of 20 years of council books in a short period of time at nearly zero cost. As a result, voters have a simpler way to access information and hold elected officials accountable.
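The article does not identify the model or tooling the city used. As one plausible sketch, the batch job below summarizes a folder of plain-text minutes with the OpenAI Python client; the client choice, model name, prompt, and file layout are all assumptions.

```python
from pathlib import Path
from openai import OpenAI  # any hosted or local LLM client would work similarly

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = ("Summarize the following city council minutes in plain language. "
          "List each vote taken, the outcome, and how each member voted.\n\n")

def summarize_minutes(minutes_dir: str, out_dir: str) -> None:
    """Walk a folder of plain-text council minutes and write one summary per
    meeting -- a rough analogue of the batch job described above."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(minutes_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8")
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[{"role": "user", "content": PROMPT + text}],
        )
        (out / f"{path.stem}_summary.txt").write_text(
            response.choices[0].message.content, encoding="utf-8")

# summarize_minutes("council_minutes/", "summaries/")
```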
In his remarks, Gov. Phil Murphy laid out how New Jersey is approaching the technology, including its partnership with Princeton on the NJ AI hub.
“We held hands and jumped into the AI space,” Murphy said of the state’s partnership with the University. Together with Microsoft and New Jersey-based AI company CoreWeave, the state and University launched the NJ AI Hub earlier this year to foster AI innovation. “I don’t think we’d be all in if we didn’t think that the probabilities were very high that a lot of good things could go right with AI, but I think we also have to acknowledge some of the tensions that are still playing themselves out.”
Murphy highlighted concerns about AI’s potential to empower bad actors, as well as its impact on human creativity, jobs, and equity.
“Is this going to be something that is a huge wealth generator for the few, or are we going to be able to give access to this realm to everybody?” he said.
One of the ideas attendees considered at the conference was building a public AI infrastructure that would ensure it remains an open-source technology, rather than becoming privately controlled by a few companies. Bringing AI into the public domain would also present an opportunity to build in controls and mechanisms for accountability, speakers noted. They argued that AI is foundational infrastructure, not unlike roads, bridges, and broadband.
At the end of the two-day gathering, Anne-Marie Slaughter, chief executive of the New America Foundation and former Princeton SPIA dean, reflected on the conference. She emphasized what others had said about needing to be transparent in how AI is used and ensuring that public trust in government is strengthened.
“[AI] doesn’t just transform how government does things better, faster, cheaper. It can transform what government does and, even more importantly, what government in a democracy is,” Slaughter said. “You can start to co-create and you can start to co-govern.”
Posing with Gov. Phil Murphy at the conference are (left to right) Cassandra Madison of the Center for Public Sector AI, CITP Director Arvind Narayanan, New Jersey Chief AI Strategist Beth Simone Noveck, Timothy Blute of the National Governors Association and Jeffrey Oakman, senior strategic AI Hub project manager at Princeton.
Tools & Platforms
How Trump’s megabill could slow AI progress in US
The elimination of federal renewable energy tax credits in Trump’s One Big Beautiful Bill Act has major implications for the global AI race.
Ultimately, the shift slows US progress on new energy production, which is key to winning the technology Cold War with China. Tech companies have no realistic way to power the massive rollout of AI factories without solar, and now that power will be that much more expensive.
But the attempt to throw a lifeline to the fossil fuel industry could be too little, too late, as Bill McKibben details in a recent New Yorker article. Solar is now being installed at a rate of about a gigawatt every 15 hours, roughly the output of a typical nuclear power plant.
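For a rough sense of what that pace implies over a year, the arithmetic below is illustrative only and ignores the difference in capacity factors between solar and nuclear plants.

```python
# Rough arithmetic behind the "gigawatt every 15 hours" figure cited above:
# what that installation pace adds up to over a full year.
HOURS_PER_GW = 15
HOURS_PER_YEAR = 24 * 365

gw_per_year = HOURS_PER_YEAR / HOURS_PER_GW
print(f"~{gw_per_year:.0f} GW of solar capacity added per year at this pace")
# -> roughly 584 GW/year of nameplate capacity; actual energy output per GW
#    is lower for solar than for nuclear because of capacity-factor differences.
```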
Solar isn’t just cheaper than fossil fuels. It’s also faster to deploy, which is crucial in the AI race. The expansion of AI data centers is creating new economic incentives for innovation in renewables, from geothermal to fusion to new battery chemistries, which can store all that new solar power. It’s a topic I expect we’ll be covering more and more here in the coming months.