
How generative artificial intelligence is affecting demand for legal services and the need for ‘AI fluency’



Ari Kaplan recently spoke with Subroto Mukerji, the CEO of Integreon, an alternative legal and global managed services provider.

They discussed how generative artificial intelligence is affecting the demand for legal services, the value of “AI fluency” in the legal field, and how the legal operations discipline is evolving as generative AI becomes increasingly embedded in legal workflows.

Ari Kaplan: Tell us about your background and your role at Integreon.

Subroto Mukerji: I grew up in India and spent about 30 years working with large technology companies, including 20 years with HP and affiliated companies such as Compaq and HPE. I then moved to Rackspace, initially as its COO and later as the president of the company’s $2.5 billion Americas business. About two-and-a-half years ago, I joined Integreon. It was the same week that ChatGPT launched, and it has been an exciting experience ever since.

Ari Kaplan: How should legal teams think about generative AI today and adapt to its continued development?


Subroto Mukerji: It’s time for legal teams to shift their use of generative AI from an experimentation mindset to an intentionality mindset, because generative AI adoption is no longer about pilot projects or novelty use cases. It is about strategically embedding these tools into core workflows. Legal teams should ask where generative AI can meaningfully reduce their burdens without introducing new risks, and then deploy it there. They also need to build “AI fluency” across their teams. You don’t need everyone to be an AI engineer, but legal professionals must understand where generative AI adds value, what its limitations are and how to govern its use. Leaders must establish an appropriate governance framework before they can scale, and to do that, they should find a partner that can help them generate the outcomes they want without engaging in complicated tool evaluations that lead to tool fatigue.

Ari Kaplan: How is generative AI affecting the demand for legal services?

Subroto Mukerji: Legal has traditionally been a field with limited supply. The potential demand for legal services greatly exceeds the resources available at certain price levels to meet it, and generative AI is increasing supply to address that demand, which is a positive development for our industry.

Ari Kaplan: What is AI fluency in legal?

Subroto Mukerji: AI fluency involves understanding the capabilities and limitations of AI tools and helping legal professionals effectively incorporate them into their workflows. First, learn the basics of AI: listeners need to build a fundamental understanding of AI technologies and their significance in legal work. Second, practical experience is essential. I encourage teams to use AI tools and to support that with training and feedback. It’s important to remember that the AI available today is the worst it will ever be. Future versions will keep improving through incremental updates, so don’t let perfect be the enemy of good. You can’t wait for tools to be perfect before trying them. It’s crucial to start using generative AI now, recognizing its limitations and knowing how to address them.

Ari Kaplan: What do corporate legal departments need to do to achieve AI fluency?

Subroto Mukerji: Law department leaders should identify individuals who are genuinely excited about deploying AI and provide them with support to use it safely and practically. They should also educate these individuals on the risks and mitigation strategies. Additionally, partnering with an external provider that has thoroughly evaluated the available tools is critical, especially in large corporate organizations. In-house teams should avoid rushing into an AI application because internal evaluation and decision-making typically take a long time, while the development cycle for new technology is usually short. This means better tools may become available by the time you finalize a previous selection. To prevent long-term commitments to a single tool, consider purchasing solutions from third-party providers as a service, allowing you to switch seamlessly if a better product enters the market without wasting your investment.

Ari Kaplan: What separates legal departments that are thriving with AI tools from those that are still struggling?

Subroto Mukerji: In any adoption cycle, there are early adopters, late adopters and laggards. What typically sets early adopters apart is their education and fluency, along with a clear understanding that those who adopt technology early will keep compounding their gains. Those using generative AI today will widen the gap between themselves and nonusers, and it will become harder for fast followers to catch up as those advantages grow.

Ari Kaplan: What are the benefits of being a technology-agnostic organization in the age of generative AI?

Subroto Mukerji: At the foundational level, there must be an understanding of what this technology is, what it can do, its limitations and its potential. Once you have that basic understanding, being technology agnostic often means prioritizing client outcomes over vendor loyalty. Although Integreon has partnered with many technology providers, we are very transparent with our partners and customers that we do not promote a single platform or product. We evaluate technology based on how well it addresses a client’s specific problem and then propose that solution. Most enterprise clients already have an existing technology installed base, so it’s important to work with a knowledgeable partner who understands how to navigate the existing infrastructure for a seamless deployment. There’s an old joke that God could create the universe in seven days because there was no installed base. Once you understand the installed base and the client’s problem, you can look at the available solutions and recommend the appropriate application. Building strong partnerships with many technology providers allows us to carefully assess each unique situation and suggest the best solution for our client.

Ari Kaplan: How does Integreon formally evaluate technology vendors?

Subroto Mukerji: Integreon has always been a highly tech-enabled company and employs a core team of professionals who monitor the market to track how technology is evolving. In recent years, we established a chief technology officer role within the company and hired a strong leader for that position. The team combines its own experience and market insight with feedback, selection processes and practical benefits from our large enterprise customers to continuously evaluate what’s available, pilot new solutions and assess their functionality.

Ari Kaplan: How do you see law department operations evolving as generative AI becomes embedded into legal workflows?

Subroto Mukerji: Law departments will begin benefiting from a combination of legal advice and advanced technology to significantly increase the speed and reduce the cost of supporting their business units. They will also reevaluate the billable hour model offered by law firms and adopt more cost-effective resources powered by technology, enabling ALSPs to play a larger role in helping companies strike the right balance of in-house talent, technical expertise and external support.


Listen to the complete interview at Reinventing Professionals.

Ari Kaplan regularly interviews leaders in the legal industry and in the broader professional services community to share perspective, highlight transformative change and introduce new technology at his blog and on Apple Podcasts.


This column reflects the opinions of the author and not necessarily the views of the ABA Journal—or the American Bar Association.






Nvidia says ‘We never deprive American customers in order to serve the rest of the world’ — company says GAIN AI Act addresses a problem that doesn’t exist



The bill, proposed by U.S. senators earlier this week, aims to regulate shipments of AI GPUs to adversaries and prioritize U.S. buyers, and it made quite a splash in America. So much so that Nvidia issued a statement asserting that the U.S. was, is, and will remain its primary market, implying that no regulation is needed to ensure the company serves American customers first.

“The U.S. has always been and will continue to be our largest market,” a statement sent to Tom’s Hardware reads. “We never deprive American customers in order to serve the rest of the world. In trying to solve a problem that does not exist, the proposed bill would restrict competition worldwide in any industry that uses mainstream computing chips. While it may have good intentions, this bill is just another variation of the AI Diffusion Rule and would have similar effects on American leadership and the U.S. economy.”




OpenAI Projects $115 Billion Cash Burn by 2029



OpenAI has sharply raised its projected cash burn through 2029 to $115 billion, according to The Information. This marks an $80 billion increase from previous estimates, as the company ramps up spending to fuel the AI behind its ChatGPT chatbot.

The company, which has become one of the world’s biggest renters of cloud servers, projects it will burn more than $8 billion this year, about $1.5 billion higher than its earlier forecast. The surge in spending comes as OpenAI seeks to maintain its lead in the rapidly growing artificial intelligence market.


To control these soaring costs, OpenAI plans to develop its own data center server chips and facilities to power its technology.


The company is partnering with U.S. semiconductor giant Broadcom to produce its first AI chip, which will be used internally rather than made available to customers, as reported by The Information.


In addition to this initiative, OpenAI has expanded its partnership with Oracle, committing to 4.5 gigawatts of data center capacity to support its growing operations.


This is part of OpenAI’s larger plan, the Stargate initiative, which includes a $500 billion investment and is also supported by Japan’s SoftBank Group. Google Cloud has also joined the group of suppliers supporting OpenAI’s infrastructure.


OpenAI’s projected cash burn will more than double in 2026, reaching over $17 billion. It will continue to rise, with estimates of $35 billion in 2027 and $45 billion in 2028, according to The Information.


PromptLocker scared ESET, but it was an experiment



The PromptLocker malware, which was considered the world’s first ransomware created using artificial intelligence, turned out not to be a real attack at all, but a research project at New York University.

On August 26, ESET announced that it had detected the first sample of ransomware with integrated artificial intelligence. The program was called PromptLocker. However, it was not a genuine attack: researchers from the Tandon School of Engineering at New York University were responsible for creating the code.

The university explained that PromptLocker is actually part of an experiment called Ransomware 3.0, conducted by a team from the Tandon School of Engineering. A representative of the school told the publication that a sample of the experimental code had been uploaded to the VirusTotal malware-analysis platform, where ESET specialists discovered it and mistook it for a real threat.

According to ESET, the program used Lua scripts generated from strictly defined instructions. These scripts allowed the malware to scan the file system, analyze file contents, exfiltrate selected data and perform encryption. The sample did not, however, implement destructive capabilities, a logical step given that it was a controlled experiment.

Nevertheless, the malicious code did function. New York University confirmed that its AI-based simulation system was able to complete all four classic stages of a ransomware attack: mapping the system, identifying valuable files, stealing or encrypting data, and generating a ransom note. Moreover, it was able to do this on various types of systems, from personal computers and corporate servers to industrial controllers.

Should you be concerned? Yes, but with an important caveat: there is a big difference between an academic proof-of-concept demonstration and a real attack carried out by malicious actors. Still, such research can serve as a reference point for cybercriminals, since it demonstrates not only how the approach works but also the real cost of implementing it.


