

Bloomberg Law Hosts AI Symposium Exploring the Future of Legal Technology

ARLINGTON, Va., July 8, 2025 /PRNewswire/ — Bloomberg Law hosted its inaugural “Law, Language, and AI Symposium” on June 9, 2025. This event convened thought leaders from academia, the legal profession, and the technology sector to explore the impact of AI on legal practice and research, underscoring Bloomberg Law’s mission to enhance the practice of law through cutting-edge, customer-centric technology. 

Designed to bring together the converging worlds of legal scholarship and artificial intelligence, the symposium aimed to bridge the gap between theory and practical application by offering a platform for dialogue on the evolving role of AI in the legal profession.

The event showcased research from distinguished scholars and practitioners addressing topics such as natural language processing for legal analytics, the integration of legal theory with AI, and practical approaches to ethics in legal technology. Selected researchers presented 20-minute talks and engaged in collaborative discussions, setting the foundation for future developments in AI-powered legal solutions and ethical AI practices. Presentations covered a diverse range of topics, highlighting both theoretical and applied advances in legal AI, organized into three categories: AI for Transactional Practice and Document Automation; Reasoning Frameworks and Explainability in Legal AI; and Evaluation, Benchmarks, and Access to Justice.

  • At the Clause-Level: Bringing Predictive Analytics to Transactional Lawyering by Rebecca B. Pasternak (Mayer Brown)
  • Multi-Agent LLM Platform for Legal Document Preparation by Tong Liang (Dynosaur Technology)
  • Access to Justice for All using AI: A prototyping Working Group at Stanford University Center for Legal Informatics (CodeX) by Bruce B. Cahan and Yen Kha (Urban Logic)
  • Numerical Reasoning in Legal and Financial Texts: A QA-Based Benchmark Using SEC Filings by Mark Klaisoongnoen and Claire Barale (EPCC – The University of Edinburgh)
  • Using Automated Reasoning for Legal Reasoning by Ruzica Piskac and Scott Shapiro (Yale University)
  • Neurosymbolic Argumentation Models: Semi-Structured Legal Reasoning with CRiSP by Sarah Santos, Anmol Singhal, Travis Breaux and Thomas Norton (Carnegie Mellon University)
  • Towards Smart(er) Wills: Using AI to Bridge Natural Language and Smart Contracts by Alice Saebom Kwak, Muaz Ali, Mihai Surdeanu, Clayton T. Morrison, Saumya Debray and Derek E. Bambauer (University of Arizona)
  • RASOR: Contextual Legal Intelligence via Rationalized Selection and Refinement in RAG by Yash Saxena, Ankur Padia, Swati Padhee, Manas Gaur and Srinivasan Parthasarathy (University of Maryland Baltimore County)
  • LLMs as Legal Knowledge Bases: Evaluating LLM Knowledge in Overruling Legal Tasks by Larissa Mori and Mario Ventresca (Purdue University)

Several broad themes emerged from the presentations and discussions, ranging from how AI tools can enable practical applications in the legal field to the need to address the problems AI models still face, especially when working with complex legal data.

Megan Ma, executive director of the Stanford Legal Innovation through Frontier Technology Lab at Stanford Law School, delivered a compelling keynote exploring groundbreaking research on multi-agent personas within the legal profession. She highlighted how this approach could transform legal AI tools by enabling greater customization and utility. Ma discussed innovative ways to shape AI agents to mirror the expertise of specific legal professionals, allowing for the creation of documents in a particular author’s style or tailored to reflect the perspective of a plaintiff or defendant.

“Bloomberg Law has long been focused on applying AI in ways that deliver real value to legal professionals,” said Bobby Puglia, chief product officer, Bloomberg Industry Group. “We were deeply impressed by the exceptional quality of work submitted for this symposium and are honored to convene a group that shares our commitment to leveraging AI to improve the accuracy, efficiency, and overall effectiveness of the legal process.”

For more information on the symposium, please visit https://aboutblaw.com/biNJ.

About Bloomberg Law
Bloomberg Law combines the latest in legal technology with workflow tools, comprehensive primary and secondary sources, trusted news, expert analysis, and business intelligence. For more than a decade, Bloomberg Law has been a trailblazer in its application of AI and machine learning. Bloomberg Law’s deep expertise and commitment to innovation provide a competitive edge to help improve attorney productivity and efficiency. For more information, visit Bloomberg Law.

SOURCE Bloomberg Law





UN AI summit accused of censoring criticism of Israel and big tech over Gaza war

A prominent AI scientist says she was pressured by the organisers of the UN’s flagship conference on AI to censor parts of her presentation that criticised Israel over its war in Gaza and the role of tech giants – hours before taking the stage.

Abeba Birhane, named one of Time magazine’s 100 most influential people in AI in 2023, had been invited to speak at the opening ceremony of the AI for Good Summit, hosted this week in Geneva by the International Telecommunication Union (ITU). But two hours before her address, she was summoned to what she described as an “intense negotiation” during the rehearsal session with the event organisers, the Ethiopian-born researcher told Geneva Solutions.

Over the course of more than an hour, she says she was instructed to remove slides and change words referencing “Israel”, “Palestine” and “genocide”, as well as any mention of technology companies – particularly Microsoft, Meta and Google – in connection with war crimes or illegal activities. A review of the original presentation by Geneva Solutions confirms that the version she used on the main stage at Palexpo is significantly different from the one initially submitted.

“I had uploaded my slides a week in advance,” said Birhane. “They had plenty of time to raise concerns. But instead, they waited until right before my keynote. It felt extremely stressful to experience censorship in real time,” she added.  Meredith Whittaker, the president of Signal, was in the room at the time of the incident and confirmed Birhane’s version of events.

The ITU didn’t confirm or deny what happened behind the scenes, but in an email to Geneva Solutions it explained that “all speakers are welcome to share their personal viewpoints about the role of technology in society”.

The annual summit has become the UN’s main forum for showcasing how artificial intelligence could be a force for good. But Birhane, who leads a team of researchers at the AI Accountability Lab at Trinity College Dublin and has served on the UN secretary general’s AI advisory body, does not share this view and came to Geneva to challenge it.

She had planned to use her keynote to question the growing use of “AI for social good” as a rhetorical shield by major tech firms. Her original presentation included publicly available documentation about Google and Microsoft’s contracts with the Israeli Ministry of Defence, as well as reports alleging that Meta trained its AI systems on pirated books. These examples, she argued, highlight the contradictions at the heart of the “AI for Good” premise. Microsoft has been a regular sponsor of the event, while Meta has a booth to present its smart glasses.

“It’s a way of laundering accountability,” she said. “While companies claim to advance human rights and sustainability, they’re also supplying the technological infrastructure that is powering oppression, surveillance and, in some cases, atrocities.” Her comments come after a report last week by the UN special rapporteur on the rights situation in the Occupied Palestinian Territories, Francesca Albanese, named dozens of companies implicated in supporting Israeli settlements and in the war in Gaza.

The term “genocide,” Birhane was told, had to be replaced with the more diplomatic “war crimes.” One slide, titled “No AI for War Crimes”, drew particular concern from the organisers.

“It was where I drew the line,” she said. “I had already removed so much, but that slide was the core of my message. I refused to remove it.”

ITU told Geneva Solutions that it provided guidelines encouraging speakers to focus on meaningful issues and thoughtful, solution-oriented dialogue, and that all speakers were invited to rehearsal sessions to discuss their speeches with programme representatives prior to presentations.

“In the end,” wrote Birhane on BlueSky, “it was either remove everything that names names (big tech particularly) and remove logos or cancel the talk or turn it into a fireside chat without visuals.”

Accusations by researchers and NGOs that the UN makes certain topics taboo have become more common in recent years. As reported by Geneva Solutions, UNFCCC, the UN climate body, faced accusations of censorship during Cop28 in the United Arab Emirates after it revoked the event badges of NGOs protesting over the situation in Palestine.

Birhane was the only speaker from an African country out of a long list of prominent tech entrepreneurs and political figures speaking at the opening event on Tuesday, including Amazon vice president Werner Vogels and Swiss federal councillor Guy Parmelin. She says her critical positions on AI as well as on the war in Gaza are well known. “I wish people would do their due diligence on my research and values before inviting me and getting surprised when they discover what I plan to talk about.”




“The distinction between AI startups and non-AI startups will disappear entirely”

“At Magenta, we see AI not as a passing trend but as a foundational layer that will underpin the next generation of category-defining companies,” explained Ran Levitzky, General Partner at Magenta Venture Partners. “While the initial wave focused on core models and horizontal capabilities, we believe the next phase will be led by applied AI companies that embed intelligence deeply into products, solve specific and valuable problems, and show clear paths to monetization and defensibility.”

The firm joined CTech for its VC AI Survey, in which venture capital firms are invited to share insights on artificial intelligence and its expected impact on every aspect of the sector and industry. Magenta focuses on teams that treat AI as a strategic enabler, not just a feature, and that ‘combine technical excellence with sharp execution and commercial discipline’.

Ran Levitzky (Photo: Magenta Venture Partners)

“In the coming years, the distinction between AI startups and non-AI startups will disappear entirely,” he added. “The winners will be those who know how to build AI-native products that scale, deliver measurable value, and adapt fast in a rapidly evolving ecosystem. Israel, with its unique mix of talent, resilience, and global ambition, is well-positioned to lead in this transformation.”

Fund ID
Name and Title: Ran Levitzky, General Partner
Fund Name: Magenta Venture Partners
Founding Team: Ran Levitzky, Ori Israely, Mitsui & Co.
Founding Year: 2019
Investment Stage: Series A
Investment Sectors: AI, FinTech, Cyber, Mobility, Healthcare, Supply Chain, Vertical SaaS, Enterprise Software

On a scale of 1 to 10, how has AI impacted your fund’s operations over the past year – specifically in terms of the day-to-day work of the fund’s partners and team members?

7 – We leverage AI across our entire workflow. Our custom GPT acts as a virtual agentic associate, helping assess companies in our dealflow and evaluate potential investments. We apply AI to analyze the environments surrounding our portfolio companies, enabling us to deliver deeper strategic value. AI copilots assist in identifying trends across industry benchmarks, business models, and other relevant signals. We also use generative AI for content creation, including social media, investor updates, and broader communications.

Have you already had any significant exits from AI companies? If so, what were the key characteristics of those companies?

It’s still early for us to see a full exit from a pure AI company, but many of our portfolio companies have already embedded AI into their core strategy and are demonstrating clear business impact. AI capabilities are driving new monetization opportunities through enhanced product tiers and improving margin profiles across several sectors. We’re also seeing stronger sales efficiency, shorter sales cycles, and improved customer retention. Notably, companies leveraging AI effectively are showing a meaningful increase in ARR per employee, reflecting both operational leverage and disciplined execution. These companies share a strong alignment between AI use cases and real customer needs, coupled with product-led teams that move quickly and prioritize measurable business outcomes.

Is identifying promising AI startups different from evaluating companies in your more traditional investment domains? If so, how does that difference manifest?

Yes, evaluating AI startups is meaningfully different, especially when considering the product, competitive positioning, and the founding team’s ability to turn AI into a lasting advantage. We look at whether AI is core to the product’s differentiation and if it creates a moat through performance, user impact, or speed of execution that cannot be easily copied. We assess how well the team can design and evolve AI-driven features that are deeply integrated into the product experience, not just layered on top. There is also a clear distinction between evaluating foundational AI infrastructure companies and AI-enabled vertical SaaS companies as each demands a different lens in terms of scalability, go-to-market, and defensibility.

Over time, we believe the term “AI startup” will become irrelevant, as every successful company will need to be AI-native at its core. The real question will shift from whether a startup uses AI, to how intelligently and strategically it does so.

What specific financial performance indicators (KPIs) do you examine when assessing a potential AI company? Are there any AI-specific metrics you consider particularly important?

When assessing a potential AI company at the Series A stage, we focus on core financial indicators like revenue growth, gross margin potential, customer retention, and sales efficiency, while recognizing that many of these may still be in early stages. What matters most is how AI is expected to influence these metrics over time. We pay close attention to the assumptions around how AI will drive monetization, support pricing strategy, or create stickiness through differentiated outcomes.

For AI-specific considerations, we look at early signals such as adoption and usage rates of AI features, and how those are projected to impact conversion, expansion, or retention. We also examine the cost and scalability of delivering AI-driven value, including inference or infrastructure costs relative to the unit economics. While some data may still be directional at this stage, we look for a clear, credible path showing how AI moves the business forward in ways that are both measurable and defensible.

How do you approach the valuation of early-stage AI startups, which often lack significant revenues but possess strong technological potential?

When we evaluate early-stage AI startups, they typically have less than $1 million in ARR, so we place strong emphasis on team quality, product differentiation, and the strategic role AI plays in creating long-term value. We look for early signs of customer traction, whether through paid pilots, strong engagement, or clear willingness to pay, and assess how AI contributes to pricing power, retention, and overall business scalability.

Unlike in earlier hype cycles, we believe disciplined investors should still anchor valuation in reasonable multiples on actual or near-term revenue. While we recognize the long-term potential of breakthrough AI technology, we avoid inflated valuations that are unlikely to be justified by business performance. Our approach balances ambition with pragmatism, focusing on companies where strong technology is matched by clear commercial thinking and a realistic path to scale.

What financial risks do you associate with investing in AI companies, beyond the usual technological risks?

Beyond core technological risks, we see several financial risks that are particularly relevant to AI companies. One key area is infrastructure cost – AI workloads can be compute-intensive, and without careful architecture and optimization, high inference or training costs can erode margins as the business scales. Another risk is dependency on third-party models or platforms, where pricing changes, access restrictions, or policy shifts can materially impact unit economics and roadmap execution.

We also pay close attention to regulatory risk, especially in sectors like healthcare, finance, and defense, where AI-driven products may face long and uncertain validation cycles or compliance hurdles that delay revenue. In some cases, uncertainty around IP ownership or the use of third-party training data introduces legal exposure that could translate into financial liabilities. We underwrite these risks carefully, especially at the Series A stage, and prioritize companies that demonstrate a clear understanding of how to build AI-native products with sound business foundations.

Do you focus on particular subdomains within AI?

We focus on applied AI opportunities where the technology delivers tangible product and business value. Our interest spans generative AI in vertical domains, natural language interfaces that simplify complex workflows, and computer vision for industrial, security, and automation use cases. We also actively look at AI solutions in supply chains, where predictive and optimization tools can drive operational efficiency, as well as horizontal platforms that empower developers, analysts, or non-technical users across industries. In parallel, we are increasingly drawn to startups addressing the new challenges that AI adoption creates for enterprises – such as model governance, compliance, observability, and responsible deployment at scale. Across all these areas, we prioritize teams that pair deep technical expertise with experienced operators who can translate innovation into scalable, commercially viable products.

How do you view AI’s impact on traditional industries? Are there specific AI technologies you believe will be especially transformative in certain sectors?

We see AI driving fundamental change across traditional industries by rethinking core workflows, improving efficiency, and enabling new business models. This is already evident across our portfolio. At Workiz, AI powers “Jessica,” a virtual voice dispatcher that automates scheduling and customer interaction for field service teams, boosting efficiency and professionalism in a high-friction operational environment. Onebeat applies AI in retail to optimize inventory allocation and real-time merchandising, helping retailers respond dynamically to demand and increase margins. Sensos brings intelligence to logistics and supply chains, using AI to enable predictive tracking, risk monitoring, and real-time visibility for global operations.

We believe technologies like generative AI, computer vision, and domain-specific natural language models will continue to be especially transformative in industries such as logistics, retail, healthcare, and financial services. The most impactful solutions are those that embed AI deeply into existing workflows and deliver measurable ROI in complex, real-world environments.

What specific AI trends in Israel do you see as having strong exit potential in the next five years? Are there niches where you believe Israeli startups particularly excel?

We see strong exit potential across a broad range of AI-driven sectors in Israel, combining deep technical capabilities with strong commercial execution. Core areas like cybersecurity and developer tools continue to perform well, with AI used to solve clear enterprise pain points. Physical AI is an area where Israeli startups are particularly well positioned, building systems that combine perception, decision making, and interaction with the physical world. These companies are creating real value in complex environments that require precision, speed, and adaptability.

Beyond these core strengths, we are also seeing increasing activity in emerging white spaces where AI adoption is still early but accelerating. These include areas where workflows are data-intensive, manual, or fragmented, and where AI can deliver measurable improvements in efficiency, cost, and decision quality. We also see growing demand for AI infrastructure around governance, observability, and safe deployment at scale. Israeli startups excel at building in these conditions, with teams that combine strong technical depth, entrepreneurial agility, and a global mindset—creating a foundation for meaningful exits in the years ahead.

Are there gaps or missing segments in the Israeli AI landscape that you’ve identified? What types of AI founders are you especially looking to back right now in Israel?

We see strong exit potential across a wide spectrum of AI-driven sectors in Israel, supported by a combination of deep technical expertise and strong execution. Cybersecurity continues to be a standout area, where AI is enabling more adaptive and proactive threat detection, creating real differentiation in a crowded global market. Fintech is another domain seeing strong momentum, with AI powering smarter decision making, automation of complex workflows, and better risk management.

Physical AI is emerging as a compelling opportunity, where Israeli startups are building systems that combine perception, reasoning, and real-world interaction. These technologies are gaining traction in environments that demand high levels of autonomy, precision, and reliability.

In parallel, we see increasing activity in emerging white spaces where AI can transform legacy processes and bring step changes in productivity and insight. There is also growing demand for tools that support AI governance, monitoring, and responsible deployment at scale. Israeli teams are particularly strong at executing in these areas, combining technical depth with a global, product-driven mindset that positions them well for meaningful outcomes.




Beyond Hype and Fluff: Lessons for AI from 25 Years of EdTech

  • This blog is by Rod Bristow, CEO of College Online, which provides access to lifelong learning; Chair of Council at the University of Bradford; Visiting Professor at the UCL Institute of Education; Chair of the Kortext Academic Advisory Board; and former President at Pearson.

I am an advocate for education technology. It is a growing force for good, providing great solutions to real problems:

  • Reducing teacher workload through lesson planning, curriculum development, homework submission and marking, formative assessments, course management systems and more;
  • Improving learning outcomes through engaging, immersive experiences, adaptive assessments and the generation of rich data about learning;
  • Widening access to content and tools through aggregation platforms across thousands of publishers and millions of textbooks; and
  • Widening access to courses and qualifications for the purpose of lifelong learning using online and blended modes of delivery.

Products and services that solve these problems will continue to take root.

All that said, we have not seen the widespread transformation in education that technology promised to deliver, and investors have had their fingers burned. We could argue this results from unrealistic expectations rather than poor achievement, but there are lessons to be learned.

According to HolonIQ:

2024 saw $2.4 billion of EdTech Venture Capital, representing the lowest level of investment since 2015. The hype of 2021 is well and truly over, with investors seeking fundamentals over ‘fluff’.

[Chart: EdTech venture capital investment, 2015–2024, from HolonIQ]

The chart says it all. Steady growth in investment over the last decade culminated in a huge peak during Covid. Hype and ‘fluff’ overtook rational thinking, and several superficially attractive businesses spiked and then plummeted in value. In education, details and evidence of impact (or efficacy) matter. Without them, lasting scale is much harder to achieve.

The pendulum has now swung the other way, with investors harder to convince. Investors and entrepreneurs need to ask the question, ‘Does it work?’ before considering how it scales. If they do, they will see plenty of applications that both work and scale, and better-educated investors will be good for the sector.

One of the biggest barriers to scale is the complexity of implementation with teachers, without whom there is little impact. Without getting into the debate about teacher autonomy, most teachers like to do their own thing. And products which bypass teachers, marketed directly to consumers, often struggle to show as much impact and financial return.

Will things be different with AI? The technology, being many times more powerful, will handle much greater flexibility of implementation for teachers than we have seen so far. AI has even greater potential to solve real problems: widening access to learning, saving time for teachers and engaging learners through adaptive digital formative assessment and deeply immersive learning experiences through augmented reality.

But the risks of ‘over-selling’ the benefits of AI technologies are potentially heightened by its very power. AI can generate mind-boggling ‘solutions’ for learners which dramatically reduce workload. Some of these are good at making learning more efficient, but questions of efficacy remain. Learning intrinsically requires work: it is done by you, not to you. Technology should not try to make learning easy, but to make hard work stimulating and productive, if learning is to be sustained over the long term.

There is a clear and present danger that AI will undermine learning if high-stakes assessments relying on coursework do not keep pace with the reality of AI. This is a risk yet to be gripped by regulators. There is also little evidence that, for example, AI will ever replace the inspiration of human teachers, and those saying their solutions will do so must make a very strong case. Technology companies can help, but they can also do harm.

New technologies must be grounded in what improves learning, especially when unleashing the power of AI. This is entirely possible.

There are many areas of great promise, but none more so than the enormous expansion in online access to lifelong learning for working people who are otherwise denied the education they need. There are now eight million people (mainly adults) studying for degrees online and tens of millions of people taking shorter online skills courses. Opening access to lifelong learning to everyone remains education’s biggest unmet need and opportunity. Education technologies can be ‘designed in’ to the entire learning experience from the beginning, rather than retrofitted by overworked teachers. Widening access to lifelong learning could deliver a greater transformation to the economy and society than we have seen in 100 years.

Learning tools and platforms are one thing, but what do people need to learn in a world changed by AI? Much has been written about the potential for technology and especially AI to change what people need to learn. A popular narrative is that skills will be more important than knowledge; that knowledge can be so easily searched through the internet or created with AI, there is no need for it to be learned.

Skills do matter, but these statements are wrong. We should not choose between skills and knowledge. Skills are a representation of knowledge. With no knowledge or expertise, there is no skill. More than that, in a world in which AI will have an unimaginable impact on society, we should remember that knowledge provides the very basis of our ability to think and that human memory is the residue of thought.

Only a deeper understanding of learning and the real problems we need to solve will unleash the huge potential for technology to unlock wider access, a better learning experience and higher outcomes. To simultaneously hold the benefits and the risks of AI in a firm embrace, we will need courage, imagination and clarity about the problems to be solved before we get swept up in the hype and fluff. The opportunity is too big to put at risk.


