AI Research
How generative artificial intelligence is affecting demand for legal services and need for ‘AI fluency’

Ari Kaplan recently spoke with Subroto Mukerji, the CEO of Integreon, an alternative legal and global managed services provider.
They discussed how generative artificial intelligence is affecting the demand for legal services, the value of “AI fluency” in the legal field, and how the legal operations discipline is evolving as generative AI becomes increasingly embedded in legal workflows.
Ari Kaplan: Tell us about your background and your role at Integreon.
Subroto Mukerji: I grew up in India and spent about 30 years working with large technology companies, including 20 years with HP and its predecessor and successor companies, Compaq and HPE, among others. I then moved to Rackspace, initially as its COO and later as president of the company’s $2.5 billion Americas business. About two-and-a-half years ago, I joined Integreon. It was the same week that ChatGPT launched, and it has been an exciting experience ever since.
Ari Kaplan: How should legal teams think about generative AI today and adapt to its continued development?
Subroto Mukerji: It’s time for legal teams to shift their use of generative AI from an experimentation mindset to an intentionality mindset, because adoption is no longer about pilot projects or novelty use cases. It is about strategically embedding these tools into core workflows. Legal teams should ask where generative AI can meaningfully reduce their burdens without introducing new risks, and then deploy it there. They also need to build “AI fluency” across their teams. Not everyone needs to be an AI engineer, but legal professionals must understand where generative AI adds value, what its limitations are and how to govern its use. Leaders must establish an appropriate governance framework before they can scale, and to do that, they should find a partner that can help them generate the outcomes they want without engaging in complicated tool evaluations that lead to tool fatigue.
Ari Kaplan: How is generative AI affecting the demand for legal services?
Subroto Mukerji: Legal has traditionally been a supply-constrained field. The potential demand for legal services greatly exceeds the resources available to meet it at certain price levels, and generative AI is increasing supply to address that demand, which is a positive development for our industry.
Ari Kaplan: What is AI fluency in legal?
Subroto Mukerji: AI fluency involves understanding the capabilities and limitations of AI tools and helping legal professionals effectively incorporate them into their workflows. First, learn the basics: listeners need to build a fundamental understanding of AI technologies and their significance in legal work. Second, practical experience is essential. I encourage teams to use AI tools and to support that use with training and feedback. It’s important to remember that the AI available today is the worst it will ever be. Future versions will keep improving with each update, so don’t let perfect be the enemy of good. You can’t wait for tools to be perfect before trying them. It’s crucial to start using generative AI now, recognizing its limitations and knowing how to address them.
Ari Kaplan: What do corporate legal departments need to do to achieve AI fluency?
Subroto Mukerji: Law department leaders should identify individuals who are genuinely excited about deploying AI and give them the support to use it safely and practically. They should also educate those individuals on the risks and on mitigation strategies. Partnering with an external provider that has thoroughly evaluated the available tools is also critical, especially in large corporate organizations. In-house teams should avoid rushing into an AI application because internal evaluation and decision-making typically take a long time, while the development cycle for new technology is short, which means better tools may become available by the time you finalize a selection. To avoid long-term commitments to a single tool, consider purchasing solutions from third-party providers as a service, which lets you switch seamlessly, without wasting your investment, if a better product enters the market.
Ari Kaplan: What separates the legal departments that are thriving with AI tools from those that are still struggling?
Subroto Mukerji: In any adoption cycle, there are early adopters, late adopters and laggards. What typically sets early adopters apart is their education and fluency, along with a clear understanding that those who adopt technology early keep compounding their gains. Those using generative AI today will widen the gap between themselves and nonusers, and it will become harder for fast followers to catch up as those advantages grow.
Ari Kaplan: What are the benefits of being a technology-agnostic organization in the age of generative AI?
Subroto Mukerji: At the foundational level, there must be an understanding of what this technology is, what it can do, its limitations and its potential. Once you have that basic understanding, being technology agnostic often means prioritizing client outcomes over vendor loyalty. Although Integreon has partnered with many technology providers, we are very transparent with our partners and customers that we do not promote a single platform or product. We evaluate technology based on how well it addresses a client’s specific problem and then propose that solution. Most enterprise clients already have an existing technology installed base, so it’s important to work with a knowledgeable partner who understands how to navigate the existing infrastructure for a seamless deployment. There’s an old joke that God could create the universe in seven days because there was no installed base. Once you understand the installed base and the client’s problem, you can look at the available solutions and recommend the appropriate application. Building strong partnerships with many technology providers allows us to carefully assess each unique situation and suggest the best solution for our client.
Ari Kaplan: How does Integreon formally evaluate technology vendors?
Subroto Mukerji: Integreon has always been a highly tech-enabled company and employs a core team of professionals who monitor the market to track how technology is evolving. In recent years, we established a chief technology officer role within the company and hired a strong leader for that position. The team combines its own experience and market insight with feedback from our large enterprise customers, including their selection processes and the practical benefits they report, to continuously evaluate what’s available, pilot new solutions and assess their functionality.
Ari Kaplan: How do you see law department operations evolving as generative AI becomes embedded into legal workflows?
Subroto Mukerji: Law departments will begin benefiting from a combination of legal advice and advanced technology to significantly increase the speed and reduce the cost of supporting their business units. They will also reevaluate the billable hour model offered by law firms and adopt more cost-effective resources powered by technology, enabling ALSPs to play a larger role in helping companies strike the right balance of in-house talent, technical expertise and external support.
Listen to the complete interview at Reinventing Professionals.
Ari Kaplan regularly interviews leaders in the legal industry and in the broader professional services community to share perspective, highlight transformative change and introduce new technology at his blog and on Apple Podcasts.
This column reflects the opinions of the author and not necessarily the views of the ABA Journal—or the American Bar Association.
AI Research
OpenAI business to burn $115 billion through 2029 – The Information

OpenAI CEO Sam Altman walks on the day of a meeting of the White House Task Force on Artificial Intelligence (AI) Education in the East Room at the White House in Washington, D.C., U.S., September 4, 2025. (Brian Snyder | Reuters)
OpenAI has sharply raised its projected cash burn through 2029 to $115 billion as it ramps up spending to power the artificial intelligence behind its popular ChatGPT chatbot, The Information reported on Friday.
The new forecast is $80 billion higher than the company previously expected, the news outlet said, without citing a source for the report.
OpenAI, which has become one of the world’s biggest renters of cloud servers, projects it will burn more than $8 billion this year, some $1.5 billion higher than its projection from earlier this year, the report said.
The company did not immediately respond to Reuters’ request for comment.
To control its soaring costs, OpenAI will seek to develop its own data center server chips and facilities to power its technology, The Information said.
OpenAI is set to produce its first artificial intelligence chip next year in partnership with U.S. semiconductor giant Broadcom, the Financial Times reported on Thursday, saying OpenAI plans to use the chip internally rather than make it available to customers.
The company deepened its tie-up with Oracle in July with a planned 4.5 gigawatts of data center capacity, building on its Stargate initiative, a project of up to $500 billion and 10 gigawatts that includes Japanese technology investor SoftBank. OpenAI has also added Alphabet’s Google Cloud among its suppliers for computing capacity.
The company’s cash burn will more than double to over $17 billion next year, $10 billion higher than OpenAI’s earlier projection, with a burn of $35 billion in 2027 and $45 billion in 2028, The Information said.
AI Research
Who is Shawn Shen? The Cambridge alumnus and ex-Meta scientist offering $2M to poach AI researchers

Shawn Shen, co-founder and Chief Executive Officer of the artificial intelligence (AI) startup Memories.ai, has made headlines for offering compensation packages worth up to $2 million to attract researchers from top technology companies. In a recent interview with Business Insider, Shen explained that many scientists are leaving Meta, the parent company of Facebook, due to constant reorganisations and shifting priorities.

“Meta is constantly doing reorganizations. Your manager and your goals can change every few months. For some researchers, it can be really frustrating and feel like a waste of time,” Shen told Business Insider, adding that this is a key reason why researchers are seeking roles at startups. He also cited Meta Chief Executive Officer Mark Zuckerberg’s philosophy that “the biggest risk is not taking any risks” as a motivation for his own move into entrepreneurship.

With Memories.ai, a company developing AI capable of understanding and remembering visual data, Shen is aiming to build a niche team of elite researchers. His company has already recruited Chi-Hao Wu, a former Meta research scientist, as Chief AI Officer, and is in talks with other researchers from Meta’s Superintelligence Lab as well as Google DeepMind.
From full scholarships to Cambridge classrooms
Shen’s academic journey is rooted in engineering, supported consistently by merit-based scholarships. He studied at Dulwich College from 2013 to 2016 on a full scholarship, completing his A-Level qualifications. He then pursued higher education at the University of Cambridge, where he was awarded full scholarships throughout. Shen earned a Bachelor of Arts (BA) in Engineering (2016–2019), followed by a Master of Engineering (MEng) at Trinity College (2019–2020). He later continued at Cambridge as a Meta PhD Fellow, completing his Doctor of Philosophy (PhD) in Engineering between 2020 and 2023.
Early career: Internships in finance and research
Alongside his academic pursuits, Shen gained early experience through internships and analyst roles in finance. He worked as a Quantitative Research Summer Analyst at Killik & Co in London (2017) and as an Investment Banking Summer Analyst at Morgan Stanley in Shanghai (2018). Shen also interned as a Research Scientist at the Computational and Biological Learning Lab at the University of Cambridge (2019), building the foundations for his transition into advanced AI research.
From Meta’s Reality Labs to academia
After completing his PhD, Shen joined Meta (Reality Labs Research) in Redmond, Washington, as a Research Scientist (2022–2024). His time at Meta exposed him to cutting-edge work in generative AI, but also to the frustrations of frequent corporate restructuring. This experience eventually drove him toward building his own company. In April 2024, Shen began his academic career as an Assistant Professor at the University of Bristol, before launching Memories.ai in October 2024.
Betting on talent with $2M offers
Explaining his company’s aggressive hiring packages, Shen told Business Insider: “It’s because of the talent war that was started by Mark Zuckerberg. I used to work at Meta, and I speak with my former colleagues often about this. When I heard about their compensation packages, I was shocked — it’s really in the tens of millions range. But it shows that in this age, AI researchers who make the best models and stand at the frontier of technology are really worth this amount of money.”

Shen noted that Memories.ai is looking to recruit three to five researchers in the next six months, followed by up to ten more within a year. The company is prioritising individuals willing to take a mix of equity and cash, with Shen emphasising that these recruits would be treated as founding members rather than employees.

By betting heavily on talent, Shen believes Memories.ai will be in a strong position to secure additional funding and establish itself in the competitive AI landscape. His bold $2 million offers may raise eyebrows, but they also underline a larger truth: in today’s technology race, the fiercest competition is not for customers or capital but for talent.