AI Research

Jus Mundi Launches Jus AI 2: ‘Breakthrough’ Legal AI Combines Agentic Reasoning with Research Control

Jus Mundi, the AI-powered research platform for international law and arbitration, today announced Jus AI 2, the second generation of the AI assistant it launched last year. Calling the release a breakthrough that “sets a new standard for AI-powered legal research quality,” the company says the product resolves one of legal AI’s persistent challenges: the trade-off between speed and research quality control.

The Paris-based company claims its second-generation AI assistant eliminates the need for legal professionals to choose between fast AI responses and methodical research oversight — a compromise, it says, that has plagued many legal AI tools since their emergence.

Jus AI 2’s architecture centers on what the company calls an “AI planning agent” that creates multi-step research strategies tailored to specific queries. The system can analyze up to 75,000 documents per minute, drawing on Jus Mundi’s database of awards, treaties, and rules, built through partnerships with major arbitral institutions including the ICC, AAA-ICDR, and HKIAC.

The platform integrates what Jus Mundi terms “Fusion Technology” — combining probabilistic agentic AI reasoning with deterministic search functions. This hybrid approach aims to provide deeper research capabilities while maintaining the precision that legal professionals require.
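Jus Mundi has not published implementation details for Fusion Technology, but the general pattern it describes — strict, deterministic filtering followed by probabilistic reranking — can be sketched. Everything below is a hypothetical illustration: the two-document corpus, the `overlap_score` stub (standing in for an LLM or embedding-based relevance model), and all names are assumptions, not Jus Mundi’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    institution: str

# Hypothetical two-document corpus standing in for a legal database.
CORPUS = [
    Document("award-001", "tribunal jurisdiction under the ICC rules", "ICC"),
    Document("award-002", "provisional measures in investment arbitration", "HKIAC"),
]

def deterministic_search(corpus, required_terms, institution=None):
    """Deterministic step: strict filtering. Every required term must
    appear verbatim, and the institution filter is applied exactly."""
    hits = []
    for doc in corpus:
        if institution is not None and doc.institution != institution:
            continue
        if all(term in doc.text for term in required_terms):
            hits.append(doc)
    return hits

def overlap_score(query, text):
    """Stub relevance scorer standing in for the probabilistic step;
    a real system would call an LLM or embedding model here."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)

def hybrid_search(corpus, required_terms, query, institution=None):
    """Fusion-style pipeline: deterministic filtering first, then
    probabilistic reranking of the surviving candidates."""
    candidates = deterministic_search(corpus, required_terms, institution)
    return sorted(candidates,
                  key=lambda d: overlap_score(query, d.text),
                  reverse=True)

results = hybrid_search(CORPUS, ["jurisdiction"],
                        "tribunal jurisdiction challenge",
                        institution="ICC")
```

The design point is that the probabilistic model only ranks documents that have already passed exact-match constraints, so a hallucinated relevance judgment can reorder results but never introduce a document outside the filtered set.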

According to the company, Jus AI 2 achieves a 125% improvement in retrieval relevancy compared to traditional legal research tools.

“The solution’s new AI planning agent creates a multi-step research plan tailored to each query,” the company said in an announcement. “It then analyses up to 75,000 documents per minute, identifying the most relevant legislations, precedents, and publications from all over the world. Finally, it delivers a structured, actionable answer, with transparent reasoning and specific source documents.”

Emphasis On Transparency

Jean-Rémi de Maistre, Jus Mundi’s CEO and co-founder, says that Jus AI 2 addresses the legal profession’s continuing concerns about AI reliability.

“While AI is probabilistic by nature, arbitration professionals need certainty when million-dollar disputes hang on procedural nuances,” he said. “Jus AI 2 eliminates the compromise between speed and control. For the first time, practitioners can leverage agentic AI reasoning while maintaining complete oversight of their research sources.”

The platform emphasizes transparency through detailed documentation of its reasoning steps and clear citations, the company says, making it easy to verify and trust the insights delivered.

Jus Mundi has built broad acceptance within the arbitration community, serving over 650 arbitration teams, including at major firms such as Freshfields, A&O Shearman, White & Case, and Quinn Emanuel.

The company says it has invested heavily in security certifications, operating within ISO 27001 and SOC 2 frameworks. Jus AI 2 itself carries ISO 42001 certification, an international standard for responsible AI governance.

Beyond Arbitration?

You can see Jus AI 2 in action during a live webinar on Sept. 11, 2025, at 10 a.m. ET, presented by CEO de Maistre and Ayushman Dash, the company’s head of data and AI. To join the webinar, register here.

Meanwhile, the company seems to be hinting at ambitions beyond the sphere of international arbitration.

“This breakthrough goes beyond arbitration,” de Maistre said in the announcement. “We’ve proven that Agentic AI-powered legal research can meet the quality standards legal professionals actually need. What started as arbitration intelligence is now the foundation for legal intelligence across every practice area.”





Microsoft and OpenAI reach agreement on… something – Computerworld

“OpenAI’s decision to recast its for-profit arm as a public benefit corporation while keeping control in the hands of its nonprofit parent is without precedent at this scale,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “It is a structure that fuses two competing logics: the relentless need to raise capital for models that cost billions to train, and the equally strong pressure from regulators, investors, and the public to demonstrate accountability.”

The proposed structure enables OpenAI to attract traditional investors who expect potential returns, while the nonprofit parent can ensure safety considerations aren’t sacrificed for profit. However, Gogia warned the hybrid model is “as much a gamble as it is an innovation,” noting concerns about “who carries liability if something goes wrong, whether fiduciary duty to investors will override social commitments when revenues are threatened.”

Charlie Dai, VP and principal analyst at Forrester, said the structure “could influence others because it balances capital access with mission-driven oversight” but warned that “regulatory scrutiny, lawsuits, and governance complexity introduce uncertainty around decision-making speed and long-term stability.”





How Artificial Intelligence Is Revolutionizing Workplace Ergonomics and Safety

Ergonomic assessments can help organizations create safer work environments. The use of artificial intelligence in this field is poised to create even better results, if managed properly.

Workplace injuries continue to impose significant costs on businesses, with ergonomic-related injuries representing a substantial portion of workers’ compensation claims. As companies seek more efficient ways to protect their workforce, artificial intelligence is emerging as a transformative tool that can accelerate assessments while maintaining the human expertise that makes ergonomic programs effective.

“As an ergonomist, doing job task assessments took a long time. We saw AI as a tool that could help us speed up the back end, specifically the measuring and quantifying risk of awkward postures and forceful exertions,” said Michael White, Managing Director of Ergonomics at The Hartford.

“I think any ergonomist would agree that using these technologies now makes our lives way more efficient to screen for and visualize ergonomics-related exposures, and allows us to focus on what is important: helping customers or organizations improve work environments to make people healthier and less susceptible to injury.”

White’s team has integrated various AI tools into their assessment processes, discovering both significant opportunities and important limitations along the way.

The integration of artificial intelligence into ergonomic assessments varies significantly depending on the work environment. White explained that The Hartford takes different approaches for office versus industrial settings, with each presenting unique opportunities for AI implementation.

“At The Hartford, we’re working on different approaches for office ergonomics versus industrial ergonomics. We’ve focused more on the industrial space since that’s where the higher costs and exposures typically occur. However, for office environments, AI can be particularly effective,” White said.

Office environments offer more controlled conditions that make them ideal candidates for AI-powered assessments. Most office workstations feature adjustable chairs and standardized equipment, creating consistent parameters that AI systems can more easily evaluate.

“Office settings are more controllable compared to factories, warehouses, or distribution centers because most workstations have adjustable chairs and standardized equipment. We’re currently developing an AI-based office assessment tool with one of our vendors that captures images of the workspace,” White said.

This office assessment tool guides users through a series of questions to identify the positioning of keyboards, mice, monitors, and chair height. The AI then analyzes workspace photos to provide specific ergonomic recommendations, such as lowering monitors, raising chair height, or adjusting desk elevation.
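The recommendation step described here amounts to comparing measured workstation dimensions against ergonomic rules of thumb. A minimal sketch of that logic, assuming illustrative thresholds — the function name, parameters, and 3 cm tolerance are hypothetical, not The Hartford’s actual tool:

```python
def office_recommendations(monitor_top_cm, eye_height_cm,
                           desk_height_cm, elbow_height_cm,
                           tolerance_cm=3):
    """Compare measured workstation dimensions (e.g., extracted from
    workspace photos) against simple ergonomic rules of thumb and
    return adjustment suggestions. Thresholds are illustrative only."""
    recs = []
    # Rule of thumb: the top of the monitor should sit at about eye level.
    if monitor_top_cm > eye_height_cm + tolerance_cm:
        recs.append("Lower the monitor so its top edge is at eye level.")
    elif monitor_top_cm < eye_height_cm - tolerance_cm:
        recs.append("Raise the monitor so its top edge is at eye level.")
    # Rule of thumb: elbows should rest roughly level with the desk surface.
    if abs(desk_height_cm - elbow_height_cm) > tolerance_cm:
        recs.append("Adjust chair height so elbows rest level with the desk.")
    return recs
```

In a real pipeline, the measurements would come from a computer-vision model analyzing the uploaded photos; the rule-checking stage itself stays deterministic and auditable.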

“The goal is to create a hands-off approach where AI and machine learning can handle the assessment process independently, which is currently in development for office environments,” White said.

Carriers Crave Efficiency

For insurance companies like The Hartford, AI offers particular efficiency advantages. With hundreds of customers interested in ergonomic services and three to four ergonomists on staff, reaching every client through traditional on-site visits would be impossible.


“Virtual interactions have been a great way to be efficient, allowing us to reach as many people as possible. We can send a link to a customer, who then takes a couple of videos. These are sent back to our AI platform, enabling us to work with customers remotely without being on-site,” White said.

While AI offers significant efficiency gains, White emphasized that human oversight remains essential for effective ergonomic programs. Current AI technology cannot replace the critical thinking and problem-solving capabilities that experienced ergonomists bring to workplace assessments.

“You definitely cannot put 100% trust in AI at this point, and you need a second set of eyes or even a third set of eyes,” White said.

AI performs best in controlled, predictable environments where patterns repeat consistently. White noted that shipping and receiving areas represent ideal applications for AI-powered assessments because they involve repeatable processes that AI systems can learn to evaluate effectively.

“In more controlled environments, such as a shipping receiving area, AI can be useful. Most companies have a loading dock where products are boxed and loaded onto trucks for shipping—a very repeatable process. For scenarios like this, AI could be a valuable tool because it sees the same patterns repeatedly and can generate relevant solutions or recommendations,” White said.

However, in unpredictable or highly variable work environments, human expertise becomes even more critical. Professional ergonomists must review AI outputs and provide on-site or virtual guidance for complex situations that fall outside typical patterns.

Perhaps most importantly, AI cannot replicate the human connection that drives successful ergonomic programs. White emphasized that employee feedback and buy-in remain crucial elements that technology cannot replace.

“A good consultant always considers the person doing the job. In many cases, if I’m on-site, I will physically do the job that a person is doing to understand and relate to them,” White said. “We always seek to ask those folks doing the job for solutions because if you’re doing something day in and day out, you likely have ideas about what would make this easier on your body.”

“That’s where I see a gap with AI that may never be filled—that person-to-person interaction. I think this direct human connection is very critical in what we do, both in getting stakeholder buy-in to make workplace improvements and more simply relating to the average worker’s day-to-day,” White said.

Expanding Safety Impact Beyond Ergonomics

The computer vision technology that powers ergonomic assessments can identify a much broader range of workplace safety issues, making AI a valuable tool for comprehensive safety programs. Companies with existing CCTV systems can leverage this infrastructure to monitor various hazards beyond ergonomic risks.

“The AI tools we’re using, particularly computer vision tools, aren’t just specific to ergonomics; they tie into broader safety programs. If a company has CCTV cameras monitoring their shop floor or warehouse, which most do now, the technology can identify various safety issues beyond ergonomics,” White said.

These systems can detect forklift incidents, slip-trip-falls, and other dangerous behaviors that might otherwise go unnoticed until an accident occurs. White’s team has worked with companies whose AI systems identified employees performing unsafe behaviors such as doing donuts in forklifts, climbing on energized machinery, ducking under prohibited conveyor systems, and coming into contact with hot electrical components.

“There’s a wide range of safety elements that AI can help monitor and improve, with ergonomics being just one piece of that broader safety picture,” White said.

Wearable technology adds another layer of real-time safety coaching through haptic feedback systems. These devices can provide immediate alerts when workers assume potentially harmful postures.

“An interesting benefit we’ve seen with wearables, based on our own anecdotal evidence, is postural improvement through haptic feedback. When someone wears a sensor like a belt clip and bends too far, the device vibrates to alert them they’re bending improperly and should consider lifting differently,” White said.

The coaching capability of these devices creates autonomous safety guidance without requiring constant human supervision. White’s team has documented measurable improvements in lifting behaviors after implementing wearables with haptic feedback features.

“We’ve seen pre- and post-implementation results showing that when the belt clip vibrates, users don’t bend at the waist as far. It’s coaching them without having someone there telling them all the time. This autonomous coaching capability makes the technology particularly valuable,” White said.
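The mechanism White describes — a sensor that vibrates when trunk flexion exceeds a safe angle — reduces to a threshold check over a stream of posture readings. A minimal sketch, assuming a 60-degree flexion threshold and a sample-count cooldown to avoid continuous buzzing; both values are hypothetical, not vendor specifications:

```python
def haptic_alerts(trunk_angles_deg, threshold_deg=60, cooldown=5):
    """Scan a stream of trunk-flexion angles (degrees from upright)
    and return the sample indices where a vibration alert would fire.
    The cooldown suppresses repeat alerts within `cooldown` samples."""
    alerts = []
    last_alert = -cooldown  # allow an alert on the first sample
    for i, angle in enumerate(trunk_angles_deg):
        if angle >= threshold_deg and i - last_alert >= cooldown:
            alerts.append(i)
            last_alert = i
    return alerts
```

The cooldown is the design choice that turns raw detection into coaching: the wearer gets one prompt per bending episode rather than a constant vibration that would quickly be ignored.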

The personalization capabilities of AI represent perhaps its greatest long-term potential for workplace safety. Rather than applying one-size-fits-all approaches, AI can identify individual workers who may be at higher risk and provide targeted interventions.

“I think it’s really just allowing more personalization. It goes away from the one-size-fits-all approach, which is still needed,” White said. “Traditional safety elements remain essential—everyone needs their steel toes, safety glasses, height-adjustable chairs and surfaces, and safe lift training. But AI is just harnessing data to help safety professionals make better decisions.”

“This technology might allow you to hone in on a specific worker’s risk profile and provide a coaching opportunity to say, ‘I noticed your assembly station is a little too low. Did you know your desk was adjustable? Let’s raise this up so that you’re not hunching forward and extending your elbows,’” White said.

As AI continues to evolve, White expects increasing adoption across industries, with policies likely emerging to address privacy concerns while maximizing safety benefits. Some of The Hartford’s customers have already purchased AI safety systems for independent use after successful pilots, demonstrating growing confidence in the technology.

“We’ve had some customers who, after piloting these technologies with us, have independently purchased them for in-house use. This gives us better opportunities to collaborate with them and helps us manage our time more effectively. Empowering our customers with tools to better manage risk in-house not only benefits the customer and The Hartford, but society also benefits with a healthier workforce,” White said.





Navigating the concerns of AI

Ethical concerns

The ethical concerns that many professionals express focus on biased outputs and data privacy. 

Bias. AI generates outputs based on algorithms that human beings develop and on information that human beings provide. But if an AI development team isn’t careful about how it trains its machine-learning protocols, the tool may generate outputs that favor one outcome over another. Those biases can render an AI system unreliable. This is, of course, a significant worry for law and tax professionals, who require access to utterly trustworthy legal and regulatory information.

Hallucinations. AI models that aren’t carefully developed may be vulnerable to hallucinations — outputs that deliver misinformation. This is probably one of the reasons why 50% of report respondents said that a lack of “demonstrable accuracy of AI-powered technologies” was a major barrier to their organization’s investment in these tools.

Data security. To provide reliable outputs, AI systems require access to large amounts of data, including sensitive personal information. But this understandably raises concerns about privacy violations, as security vulnerabilities can expose a company to financial penalties, legal difficulties, and reputational damage. 

Among those surveyed in the report, 42% cited a lack of demonstrable security as a barrier to AI investment in their organizations. Many professionals worry that AI systems might compromise sensitive data and make it publicly available. Professional organizations must ensure that they are conforming to data protection regulations and search for AI tools that prioritize data security.

Strategic concerns

Nearly two-thirds (65%) of respondents who have personal AI goals say they aren’t aware of their organization having an AI strategy. Across all respondents, only 22% report that their organization has a visible AI strategy. This disconnect can leave individual professionals without guidance, making their use of AI inconsistent, inefficient, and even unintentionally unethical.

It’s also worth noting that 38% of professionals working for organizations that do have an AI strategy also reported that they don’t have any personal goals for AI adoption. Lacking objectives increases the risk that the organization won’t effectively implement its AI strategy.

Yet another concern respondents noted in the report is that AI tools may work so well that organizations come to rely too heavily on them. Respondents fear that this overreliance could hinder professional development, particularly the skills needed to use AI effectively and ethically as the technology continues to evolve.

Building AI literacy and professional resilience

These ethical and strategic concerns about AI are reasonable. But by addressing them, professionals and the organizations they serve can develop competitive advantages in their markets.

Strategic training leads to more successful adoption

Professionals are well aware that rapid technological advancements, evolving business needs, and shifting workforce demographics are constantly changing the way they conduct their practices. Professionals who can adopt AI systems effectively will gain a competitive edge, boosting both their personal impact and their organization’s long-term value.


