Tools & Platforms
DXC Technology’s AI-Powered Tendia Solution Slashes Bid Writing Time for Ventia
DXC Technology Company (NYSE:DXC) is one of the cheap IT stocks hedge funds are buying. On July 3, DXC Technology announced the deployment of an AI-driven bid writing solution called Tendia for Ventia. Ventia is one of the largest essential infrastructure service providers in Australia and New Zealand.
The new platform significantly reduces the time required to draft initial bid responses for major infrastructure contracts, from days to minutes, enhancing Ventia’s ability to respond quickly to complex, high-value tenders. Developed collaboratively by DXC and Ventia, the solution was deployed in just four months.
It works by automating the time-consuming process of sourcing and synthesizing information from extensive document libraries. Tendia allows Ventia’s teams to focus on higher-value work, deliver more accurate proposals, and respond more quickly to multi-million-dollar tenders.
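Neither company has published Tendia’s architecture, but “sourcing and synthesizing information from extensive document libraries” is the shape of a retrieval-augmented generation pipeline: rank past-bid material by relevance to a tender question, then hand the top matches to a language model for drafting. Below is a minimal illustrative sketch of that pattern in Python; the toy bag-of-words scoring, the sample data, and the stubbed generation step are assumptions for illustration, not Tendia’s actual implementation.

```python
# Illustrative retrieval-augmented drafting pipeline. Nothing here
# reflects Tendia's real design: the scoring, the prompt shape, and
# the stubbed generation step are all assumptions.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a vector model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question: str, library: list[str], k: int = 3) -> list[str]:
    """Rank past-bid snippets by similarity to the tender question."""
    q = embed(question)
    return sorted(library, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

def draft_response(question: str, library: list[str]) -> str:
    """Assemble a drafting prompt from the top snippets; generation is stubbed."""
    context = "\n".join(f"- {s}" for s in retrieve(question, library))
    return (
        f"Tender question: {question}\n"
        f"Relevant material from past bids:\n{context}\n"
        "Draft an initial response for a human bid writer to review."
    )  # a real pipeline would send this prompt to an LLM here

library = [
    "Our maintenance crews service 40,000 km of transmission lines.",
    "Safety record: zero lost-time injuries across 2023 contracts.",
    "We operate 24/7 emergency response centres in Australia and New Zealand.",
]
print(draft_response("Describe your emergency response capability.", library))
```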
DXC Technology Company (NYSE:DXC) provides IT services and solutions internationally.
Disclosure: None. This article was originally published at Insider Monkey.
Tools & Platforms
U.S. State Courts Cautiously Approach AI Despite Efficiency Promises and Staffing Crises
A new survey of state courts reveals a striking paradox in the American judicial system: Even though courts face severe staffing shortages and operational strain, they remain reluctant to adopt generative artificial intelligence technologies that could provide significant relief.
The Thomson Reuters Institute’s third annual survey of state courts, conducted in partnership with the National Center for State Courts AI Policy Consortium, found that 68% of courts reported staff shortages and 48% of court professionals say they do not have enough time to get their work done.
Despite these pressures, however, just 17% say their court is using gen AI today.
Courts Under Strain
The survey, which gathered responses from 443 state, county, and municipal court judges and professionals between March and April 2025, paints a picture of courts under significant strain.
Seventy-one percent of state courts and 56% of county/municipal courts experienced staff shortages in the past year, with 61% anticipating continued shortages in the next 12 months.
This staffing crisis translates into demanding work schedules, with 53% of respondents saying they work between 40 and 45 hours a week on average, and an additional 38% working over 46 hours a week.
Perhaps most telling, only half of court professionals said they had enough time to get their work done.
These workload pressures are only getting worse. Nearly half of respondents (45%) reported an increase in their caseloads compared to last year and 39% said the issues they are dealing with have become more complex.
Meanwhile, 24% of respondents reported increases in court delays, compared to 18% who reported decreases.
AI Adoption Remains Limited
Against this backdrop of operational strain, the survey reveals a cautious approach to AI adoption that seems at odds with the technology’s potential benefits.
Currently, only 17% of respondents said their court was using gen AI, and an additional 17% said their court was planning to adopt gen AI technology over the next year.
This slow adoption occurs despite widespread recognition of the technology’s transformative potential: AI and gen AI was the highest-ranked trend in the survey, rated by 55% of respondents as having a transformational or high impact on courts over the next five years.
Court professionals clearly see the efficiency benefits AI could provide, and the projected time savings are substantial: respondents predict that gen AI will save them an average of nearly three hours a week in the next year, growing to nearly six hours a week within three years and 8.8 hours a week within five years.
Barriers to AI Implementation
So what is holding courts back? The survey identifies several factors contributing to courts’ cautious AI adoption.
Seventy percent of respondents said their courts are currently not allowing employees to use AI-based tools for court business, and 75% of respondents said their court has not yet provided any AI training.
There are also varied but significant concerns about AI implementation.
More than a third (35%) are worried that AI will lead to an overreliance on technology rather than skill, while a quarter have concerns about malicious use of AI, such as counterfeit orders and evidence. Interestingly, only 9% were worried about widespread job loss resulting from AI.
Budget constraints may also play a role in limiting technology adoption. The survey found that 22% say their budget for the next year increased, while 30% said budgets decreased, and 30% say budgets stayed the same.
Current Technology Landscape
While AI adoption lags, courts have made progress implementing other technologies. Most courts have adopted key technologies, including case management (86%), e-filing (85%), calendar management (83%), and document management (82%).
Video conferencing has reached near-universal adoption at 88%.
However, some technology gaps remain. Beyond gen AI, the most common technologies set to be adopted next are legal self-help portals, online dispute resolution and document automation.
Virtual Hearings Widely Adopted
The survey shows significant adoption of virtual hearings, with 80% of respondents saying their court conducts or participates in virtual hearings.
In more than 40% of all jurisdictions, virtual hearings are available for first/initial appearances, preliminary/status hearings and/or motion hearings.
Virtual hearings appear to improve court efficiency in some areas: 58% of respondents reported that virtual courts decrease failure-to-appear rates, and 84% reported that virtual courts increase access to justice.
However, the digital divide presents ongoing challenges. Nearly one in five respondents (19%) feel that the majority of litigants are experiencing decreased access to justice because they lack strong technology skills.
Lower digital literacy and a lack of technical support resources were ranked as the top challenges for litigants involved in virtual hearings.
Cybersecurity Concerns
As courts increasingly rely on technology, cybersecurity emerges as a critical concern. The survey reveals significant variation in confidence levels regarding IT security.
While 57% of respondents feel highly confident in their IT systems’ security, an alarming 22% of respondents say they are “not at all confident” in the security of their IT systems.
Generational Workforce Changes
The survey identifies generational workforce shifts as another major factor affecting courts. Baby Boomers and Gen Xers exiting the workplace, along with Gen Zers entering the workforce and Millennials moving into leadership positions, are trends frequently ranked as transformational or high impact.
These demographic changes have important implications for technology adoption. As the report notes, Gen Zers are digital natives who are very comfortable using technology and may find it easier to manage automated workflows, while they may be resistant to jobs and tasks that still rely heavily on manual processes.
Reducing Operational Errors
The survey provides insights about task efficiency and error rates in court operations.
Entering and updating data in court management systems was rated as both the most error-prone task by a wide margin and also as the second-most inefficient task. This finding suggests that greater use of automation in CMS entry could yield major improvements in both efficiency and error rates.
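The report doesn’t say what form that automation would take, but one plausible first step is validating records before they reach the case management system, so keying errors are caught at entry rather than discovered downstream. Here is a minimal sketch under that assumption; the `CaseRecord` fields and validation rules are invented for illustration and are not drawn from any real court CMS.

```python
# Minimal sketch of validated case-record entry. The CaseRecord layout
# and the rules below are invented for illustration only.
from dataclasses import dataclass
from datetime import date

@dataclass
class CaseRecord:
    case_number: str
    filing_date: date
    case_type: str
    party_name: str

VALID_TYPES = {"civil", "criminal", "family", "traffic"}

def validate(record: CaseRecord) -> list[str]:
    """Return a list of problems; an empty list means the record can be filed."""
    errors = []
    if not record.case_number.strip():
        errors.append("case number is required")
    if record.filing_date > date.today():
        errors.append("filing date cannot be in the future")
    if record.case_type not in VALID_TYPES:
        errors.append(f"unknown case type: {record.case_type!r}")
    if not record.party_name.strip():
        errors.append("party name is required")
    return errors

record = CaseRecord("2025-CV-0042", date(2025, 3, 14), "civil", "Doe v. State")
problems = validate(record)
print("filed" if not problems else f"rejected: {problems}")
```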
The survey also found correlations between different operational challenges. Tasks that are more stressful are also correlated with causing inconvenience for court users, suggesting that addressing workflow inefficiencies could simultaneously improve both staff satisfaction and user experience.
A Critical Juncture for Courts
The survey suggests that courts face a strategic choice: embrace AI technologies that could significantly alleviate operational pressures, or risk falling further behind as staffing challenges intensify and workloads continue to grow.
“We’re facing challenges — staff don’t think they have enough time to meet their demands, and they’re working more hours to get the work done, and that’s leading to burnout,” said David Slayton, executive officer and clerk of court for the Superior Court of Los Angeles County.
“It’s incumbent on court leaders to really think about how technology can help us with this problem.”
Mike Abbott, head of Thomson Reuters Institute, underscored the urgency of the situation.
“Courts are facing an unprecedented convergence of change, driven by generative AI and generational shifts in their workforce, at the same time as they continue to deal with staff shortages, backlogs and delays,” Abbott said.
“AI literacy can empower the courts to understand both the risks and the opportunities associated with the technology, enabling them to identify the best use cases which help them focus on higher value work.”
Tools & Platforms
Schools using AI to personalise learning, finds Ofsted
Personalisation is just one of the ways education providers are experimenting with artificial intelligence (AI), according to a report from the Office for Standards in Education, Children’s Services and Skills (Ofsted).
Ofsted looked into early adopters of the technology to find out how it is being used and to assess the benefits and challenges of AI in an educational setting. In some cases, AI was used to assist children who may need extra help because of their life circumstances, with a view to levelling the playing field.
“Several leaders also highlighted how AI allowed teachers to personalise and adapt resources, activities and teaching for different groups of pupils, including, in a couple of instances, young carers and refugee children with English as an additional language,” the report said.
These examples relate to one school using AI to translate resources for students whose first language isn’t English, and another turning lessons and resources into podcasts for young carers to help them catch up on things they’ve missed.
Other use cases for personalisation included using AI to mark work while giving personalised feedback, saving the teacher time while also offering specific advice to students.
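Ofsted’s report doesn’t name the marking tools involved, but the core idea of rubric-based marking with targeted feedback can be shown in miniature. The sketch below is a deliberately toy stand-in: the rubric, cue phrases, and scoring are invented, and a real AI marker would use a language model rather than keyword matching.

```python
# Toy sketch of automated marking with per-criterion feedback. The
# rubric and cue-phrase checks are invented and far simpler than any
# real AI marking tool.
RUBRIC = {
    "names a cause": ["because", "due to", "caused by"],
    "gives an example": ["for example", "such as", "for instance"],
    "draws a conclusion": ["therefore", "overall", "in conclusion"],
}

def mark(answer: str) -> tuple[int, list[str]]:
    """Score an answer against the rubric and collect targeted feedback."""
    text = answer.lower()
    score, feedback = 0, []
    for criterion, cues in RUBRIC.items():
        if any(cue in text for cue in cues):
            score += 1
        else:
            feedback.append(f"Try to include something that {criterion}.")
    return score, feedback

score, tips = mark("Flooding increased because rainfall rose. Overall, towns must adapt.")
print(f"Score: {score}/{len(RUBRIC)}")
print("\n".join(tips))
```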
Government push
In early 2025, the UK’s education secretary, Bridget Phillipson, told The Bett Show the current government plans to use AI to save teachers time, ensure children get the best education possible, and grow the connection between students and teachers.
But research conducted by the Department for Education to gauge teachers’ attitudes to the technology found many are wary. Half of teachers are already using generative artificial intelligence (GenAI), according to the research, but 64% of those not yet using it are unsure how to apply it in their roles, and 35% are concerned about the risks it can pose.
Regardless of teacher attitudes, the government is leaning heavily into using AI to make teachers’ lives easier, making plans to invest £4m into developing AI tools “for different ages and subjects to help manage the burden on teachers for marking and assessment”, among many other projects and investments.
The Department for Education (DfE), which also commissioned Ofsted’s research into the matter, has stated: “If used safely, effectively and with the right infrastructure in place, AI can ensure that every child and young person, regardless of their background, is able to achieve at school or college and develop the knowledge and skills they need for life.”
Use cases and cautions
Early in 2025, the government launched its AI opportunities action plan, which includes how the Department for Science, Innovation and Technology (DSIT) aims to use AI to improve the delivery of education in the UK, with DSIT flagging potential uses such as lesson planning and making admin easier.
In some cases, this is exactly what schools and colleges were using it for, according to Ofsted’s research – many were automating common teaching tasks such as lesson planning, marking and creating classroom resources to make time for other tasks; others were using AI in lessons and letting children interact with it.
Other schools had already started developing their own AI chatbots, and though no solid plans were yet in place, there were hopes of integrating the technology into the curriculum in the future.
But implementing AI has required careful consideration, with the report highlighting: “AI requires skills and knowledge across more than one department.”
The schools and colleges Ofsted spoke to were at different stages of AI adoption, and their teachers and students had varying levels of understanding of how best to use the technology.
The pace of adoption also varied, though most schools seemed to be taking an incremental approach, changing bit by bit as teachers and students experimented with and accepted new ways of working with AI. The report noted there didn’t seem to be a “prescriptive” approach to which tools could be used.
Most had an “AI champion”: someone responsible for implementing the technology and getting others on board with adoption, usually a person with some prior knowledge of it.
The principal of one of the colleges Ofsted spoke to said: “I think anybody who’s telling you they’ve got a strategy is lying to you because the truth of the matter is AI is moving so quickly that any plan wouldn’t survive first contact with the enemy. So, I think a strategy is overbaking it. Our approach is to be pragmatic: what works for the problems we’ve got and what might be interesting to play with for problems that might arise.”
When children are involved, safeguarding should be at the forefront of any plans to implement new technologies, which is one of the reasons those running pilots and introducing AI are being so cautious.
Those Ofsted spoke to already displayed knowledge about the risks of using the technology, such as “bias, personal data, misinformation and safety”, and many had already developed or were adding to AI policies and best practices.
The report said: “A further concern is the risk of AI perpetuating or even amplifying existing biases. AI systems rely on algorithms trained on historical data, which may reflect stereotypical or outdated attitudes…
“However, some of the specific aspects of AI, such as its ability to predict and hallucinate, and the safeguarding issues it raises, create an urgent need to assess whether intended benefits outweigh any potential risks.”
There were other, less commonly mentioned concerns at some schools. For example, where AI is used for student brainstorming or individualised marking, there is a risk of narrowing what counts as correct, taking away some of the “nuance and creativity” in how students answer questions and tackle problems.
“Deskill[ing]” teachers and making it harder for children to learn certain skills because of a reliance on AI was also mentioned as something education providers are worried about.
Getting it right
Ultimately, AI adoption will be an ongoing process for education providers, and it’s important senior leaders are on board, with someone in charge of introducing and monitoring the technology’s impact on teaching and education delivery.
The most vital piece of the puzzle, according to Ofsted, is ensuring teachers are guided and supported rather than put under pressure, as well as guaranteeing transparency surrounding anything AI is used for in schools.
“There is a lack of evidence about the impact of AI on educational outcomes or a clear understanding of what type of outcome to consider as evidence of successful AI adoption,” the report said. “Not knowing what to measure and/or what evidence to collect makes it hard to identify any direct impact of AI on outcomes.
“Our study also indicates that these journeys are far from complete,” it continued. “The leaders we spoke to are aware that developing an overarching strategy for AI and providing effective means for evaluating the impact of AI are still works in progress. The findings show how leaders have built and developed their use of AI. However, they also highlight gaps in knowledge that may act as barriers to an effective, safe or responsible use of AI.”
Tools & Platforms
State AI leaders gather at Princeton to consider how the technology can improve public services
Much of the news about artificial intelligence has focused on how it will change the private sector. But all around the country, public officials are experimenting with how AI can also transform the way governments provide essential services to citizens while avoiding pitfalls.
State AI leaders, including Gov. Phil Murphy of New Jersey, gathered at Princeton University in June to discuss how AI offers ways for government to be more efficient, effective, and transparent, especially at a time when budgets are strapped and economic uncertainty has slowed down hiring.
Hosted by Princeton’s Center for Information Technology (CITP), the NJ AI Hub, the State of New Jersey, the National Governors Association, the Center for Public Sector AI, GovLab, and InnovateUS, the conference brought together more than 100 AI leaders from 25 states to share ideas and collaborate. The meeting was conducted under an agreement of confidentiality to allow participants to discuss progress and concerns openly. Quotations in this story are used by permission.
What emerged was enthusiasm about AI’s potential to reduce the time government employees spend on manual tasks and improve their ability to engage citizens, as well as concerns about how best to use public data to innovate and increase equity rather than undermine it.
The gathering is just one of the ways that CITP – which is a joint center of the Princeton School of Public and International Affairs and Princeton Engineering – is leading on AI. The center also holds policy precepts to engage policymakers in AI governance at the SPIA in DC Center, and several affiliated faculty teach courses on AI policy at Princeton SPIA.
At the conference, CITP Director Arvind Narayanan noted that attendees were focused on practical implementation of AI tools rather than the “polarizing conversations around AI that dominate the media.” He also explained why public-facing deployments of AI by state governments have been slower than internal ones.
“There’s a clear recognition of the need for thinking about public accountability and equity. At the same time, I think there’s also recognition of the potential for governments if we get this right,” said Narayanan, who is also a professor of computer science and co-author of “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.”
Speakers shared big and small ways that AI is improving government. Some noted saving an hour or two a week per employee by leveraging AI to help draft grant applications, assess legislation, or review procurement policies while ensuring oversight and accuracy. One city automated the summarization of council oral votes, a task that was previously completed by a city clerk, creating summaries of 20 years of council books in a short period of time at nearly zero cost. As a result, voters have a simpler way to access information and hold elected officials accountable.
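The article doesn’t identify the city’s tooling, but a batch job of this shape typically walks the archive and writes one summary per meeting record. A minimal sketch under that assumption follows; the file layout is invented, and `summarize()` is stubbed where a real pipeline would call an LLM.

```python
# Illustrative batch-summarization loop over an archive of council
# minutes. The directory layout and the stubbed summarize() are
# assumptions, not the city's actual pipeline.
from pathlib import Path

def summarize(text: str) -> str:
    """Stub: a real pipeline would call an LLM summarization endpoint here."""
    first_line = text.strip().splitlines()[0] if text.strip() else ""
    return f"SUMMARY: {first_line[:80]}"

def summarize_archive(archive_dir: str, out_dir: str) -> int:
    """Summarize every minutes file in the archive; returns the count processed."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for path in sorted(Path(archive_dir).glob("*.txt")):
        summary = summarize(path.read_text(encoding="utf-8"))
        (out / f"{path.stem}.summary.txt").write_text(summary, encoding="utf-8")
        count += 1
    return count

if __name__ == "__main__":
    n = summarize_archive("council_minutes", "summaries")
    print(f"summarized {n} meeting records")
```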
In his remarks, Gov. Phil Murphy laid out how New Jersey is approaching the technology, including its partnership with Princeton on the NJ AI hub.
“We held hands and jumped into the AI space,” Murphy said of the state’s partnership with the University. Together with Microsoft and New Jersey-based AI company CoreWeave, the state and University launched the NJ AI Hub earlier this year to foster AI innovation. “I don’t think we’d be all in if we didn’t think that the probabilities were very high that a lot of good things could go right with AI, but I think we also have to acknowledge some of the tensions that are still playing themselves out.”
Murphy highlighted concerns about AI’s potential to empower bad actors, as well as its impact on human creativity, jobs, and equity.
“Is this going to be something that is a huge wealth generator for the few, or are we going to be able to give access to this realm to everybody,” he said.
One of the ideas attendees considered at the conference was building public AI infrastructure that would keep the technology open source rather than privately controlled by a few companies. Bringing AI into the public domain would also present an opportunity to build in controls and mechanisms for accountability, speakers noted. They argued that AI is foundational infrastructure, not unlike roads, bridges, and broadband.
At the end of the two-day gathering, Anne-Marie Slaughter, chief executive of the New America Foundation and former Princeton SPIA dean, reflected on the conference. She emphasized what others had said about needing to be transparent in how AI is used and ensuring that public trust in government is strengthened.
“[AI] doesn’t just transform how government does things better, faster, cheaper. It can transform what government does and, even more importantly, what government in a democracy is,” Slaughter said. “You can start to co-create and you can start to co-govern.”
Posing with Gov. Phil Murphy at the conference are (left to right) Cassandra Madison of the Center for Public Sector AI, CITP Director Arvind Narayanan, New Jersey Chief AI Strategist Beth Simone Noveck, Timothy Blute of the National Governors Association and Jeffrey Oakman, senior strategic AI Hub project manager at Princeton.