AI Insights
How companies can begin to implement artificial intelligence

More than 71% of companies use generative artificial intelligence (AI) in at least one business function, according to the latest global research by the consulting firm McKinsey. AI is most commonly applied in marketing and sales, product and service development, customer support, and software engineering, the areas where it can deliver the greatest value. Although Serbian companies say they do not want to fall behind in the AI revolution, experience shows that many abandon the process halfway, most often because of a lack of knowledge, lengthy implementation, and the slow arrival of tangible results. To avoid this, experts suggest entering the world of artificial intelligence "through small doors": introducing AI at the most basic level and integrating smart tools into individual departments first.
This, of course, depends on the size of the company, the industry in which it operates, and the need for applying smart technology. While larger organizations and those in technology-intensive sectors adopt AI more quickly and comprehensively, smaller firms encounter numerous challenges from the outset—such as employee readiness, the availability of skilled staff, and the company’s strategic goals—which is why AI use often remains limited to pilot projects or tools for handling specific tasks.
One of the biggest concerns for all companies is the security of data shared with artificial intelligence. As generative models become more sophisticated, risks are also increasing. Many real-world cases show that adversarial attacks are no longer just hypothetical but occur daily, even targeting the largest companies, which is why this area requires serious oversight.
Still, while some companies wait for the right moment, others are already reaping the benefits. The adoption of artificial intelligence across industries is growing by up to 20% annually, and from 2023 to 2024 alone the use of generative AI jumped from 55% to 75%. According to Max Belov, chief technology officer of the IT company Coherent Solutions, artificial intelligence does not need to be revolutionary; it first needs to be practical. Too often, companies rush to adopt the latest shiny tools as part of their transformation plans, only to realize they have spent too much money, time, and energy on systems and processes without clear goals or execution paths.
Nebojša Matić, owner of Mikroelektronika, a company that develops hardware and software solutions and has a department dedicated to artificial intelligence, told NIN that the easiest way to apply AI is by using ChatGPT. He emphasizes that this application should not be seen as a tool to finish the job for you, but rather as one that facilitates your work and helps complete tasks more quickly.
The ChatGPT mobile application has generated two billion dollars in revenue since its launch in May 2023, with average revenue per installation of $2.91, according to the latest report by the app-analytics firm Appfigures. In the first seven months of 2025 alone, ChatGPT earned $1.35 billion from installations, an increase of 673% compared with the same period last year. Its closest rival, Grok, earned only $25.6 million from installations over the same period, meaning ChatGPT's revenue was 53 times greater.
Matić notes that ChatGPT primarily takes over routine clerical tasks, such as drafting a professional email, preparing a rental contract between two parties, or comparing sets of data. Still, he stresses that its outputs should not be taken at face value, since the technology is not yet fully mature, and users must ultimately rely on their own expertise to ensure accuracy.
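For teams that want to take this kind of routine drafting one small step beyond the chat window, the same work can also be scripted against a model API. The sketch below is a minimal, illustrative example using the OpenAI Python SDK; the model name, prompt, and helper function are assumptions for illustration, not something recommended by Matić or described in the article.

```python
# Minimal sketch: scripting a routine clerical task (drafting a professional email)
# against a model API. The model name and prompts are illustrative assumptions; the
# OPENAI_API_KEY environment variable must be set for the client to authenticate.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def draft_email(topic: str, tone: str = "professional") -> str:
    """Ask the model for a short email draft; a human still reviews the result."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whichever model you use
        messages=[
            {"role": "system", "content": f"You draft concise, {tone} business emails."},
            {"role": "user", "content": f"Draft an email about: {topic}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(draft_email("rescheduling Thursday's supplier meeting to next week"))
```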
AI usage by sector within organizations:
- Marketing and sales: 42%
- Product and service development: 28%
- IT: 23%
- Software engineering: 18%
- Human resources: 13%
- Legal affairs: 11%
- Supply chain/inventory management: 9%
- Manufacturing: 5%
Source: McKinsey
This caution is echoed by McKinsey, whose research shows that, despite the enthusiasm surrounding the potential of artificial intelligence and the impressive capabilities of smart tools, the first generation of large language models faced limitations that significantly restricted their use at the enterprise level. Large language models (LLMs) can produce inaccurate outputs, making them difficult to trust in environments where accuracy and consistency are critical. They are inherently passive, unable to act independently or initiate workflows and decisions without human input. Moreover, they struggle to manage complex processes that require multiple steps, decision points, and logical reasoning.
AI poses the greatest challenge for small and medium enterprises
According to a domestic survey published earlier this year by ICT Hub, conducted in collaboration with the Serbian Association of Managers (SAM) and Reprezent Communications, only 34% of companies in Serbia use artificial intelligence (AI) in their operations, while the majority have yet to integrate the technology into their workflows. The highest adoption has been recorded in the information, communication, and professional services sectors, where 60% of firms employ AI.
Market observers note that Serbian companies with 200 to 500 employees, those that have managed to advance digitalization to a certain degree, now feel the need to ride the "AI wave", driven largely by fear of being overtaken by the competition. Still, their main obstacles remain limited knowledge of the field, underdeveloped infrastructure, and, most importantly, a lack of clarity about how AI can actually support their business processes.
Filip Karaičić, CEO of the domestic company Quantox Technology, says that in practice the initial process of implementing AI is usually quite "painful", lengthy, and often ends unsuccessfully. At the very start, he explains, a significant amount of consulting work is needed to understand what the company does, how its processes run, where the most time is lost, where errors occur most often, and where there is the most room for improvement. Only once this is established can a strategy for AI implementation be considered.
A frequent obstacle in the next step, however, is disorganized data, which is often fragmented, scattered, or duplicated across various services, servers, and vendors. Cleaning it up, consolidating it, and adapting it for use in AI solutions takes additional work and time. When all of this is "put on paper", it becomes clear why the whole process looks like a major cost for the firm, in both money and time, and it can be an even greater strain on employees.
Organizations using AI in at least one function, by sector:
- Technology: 88%
- Professional services: 80%
- Advanced industry: 79%
- Media and telecommunications: 79%
- Consumer goods and retail: 68%
- Finance: 65%
- Healthcare and pharmaceuticals: 63%
- Energy: 59%
AI is easiest to introduce in repetitive jobs
Although the largest investments in artificial intelligence come from the most valuable companies in the world, AI-driven business transformation is no longer reserved for global giants and the technology sector. Companies of all sizes and industries are finding ways to use AI to improve their operations, introduce innovations, and understand the needs of their customers.
Prof. Dr. Nebojša Bačanin Džakula, Deputy Dean for Teaching and Development in Artificial Intelligence at Singidunum University, says that, depending on the nature of a company's business, the easiest way to enter the world of AI is to start with advanced chatbots and virtual assistants. In many companies, whether in banking, telecommunications, or e-commerce, these tools now handle the first contact with customers, all with the aim of providing better service.
The same is true for applications in call centers, where advanced deep learning models can automatically recognize the user’s intent and classify their request. They help to understand why the user is calling, and can even resolve certain issues without involving an operator. This significantly reduces pressure on employees, and customers receive quicker responses.
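To illustrate how such intent recognition can work in practice, the sketch below wires a zero-shot text classifier from the open-source Hugging Face Transformers library into a simple routing function. The intent labels, confidence threshold, and example utterance are illustrative assumptions, not a description of any particular call-center system mentioned above.

```python
# Minimal sketch: routing call-center requests by intent with a zero-shot classifier.
# The labels, threshold, and example utterance are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

INTENTS = ["billing question", "technical problem", "cancel subscription", "speak to an agent"]


def route_request(utterance: str, threshold: float = 0.5) -> str:
    """Return the most likely intent, or escalate to a human operator if unsure."""
    result = classifier(utterance, candidate_labels=INTENTS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    # Low confidence: hand the call to an operator instead of guessing.
    return top_label if top_score >= threshold else "speak to an agent"


print(route_request("I was charged twice for last month's invoice"))
# Expected: "billing question" (handled automatically, no operator needed)
```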
“AI can also be easily utilized in translation and proofreading tasks, as AI-based tools provide excellent real-time translations that are contextually appropriate. Text proofreading programs can assist in correcting grammatical, stylistic, and other errors, offering significant support in journalism, education, and administration in general. AI can be smoothly integrated into administrative tasks, such as sorting work documents, and can be applied in calculations and task assignments. Additionally, its support in marketing is already evident, where AI easily creates personalized messages and develops specific content,” explains the professor.
AI Insights
Artificial Intelligence Legal Holds: Preserving Prompts & Outputs

You are your company’s in-house legal counsel. It’s 3 PM on a Friday (because of course it is), and you’ve just received notice of impending litigation. Your first thought? “Time to issue a legal hold.” Your second thought, as you watch your colleague casually chatting with Claude about contract drafting? “Oh no… what about all the AI stuff?”
Welcome to 2025, where your legal hold obligations just got an AI-powered upgrade you never signed up for. This isn’t just theoretical hand-wringing. Companies are already being held accountable for incomplete AI-related preservation, and the costs are real — both in terms of litigation exposure and the scramble to retrofit compliance systems that never anticipated chatbots.
The Plot Twist Nobody Saw Coming
Remember when legal holds meant telling people not to delete their emails? The foundational duty to preserve electronically stored information (ESI) when litigation is “reasonably anticipated” remains the cornerstone of legal hold obligations. However, generative AI’s emergence has significantly complicated this well-established framework. Courts are increasingly making clear that AI-generated content, including prompts and outputs, constitutes ESI subject to traditional preservation obligations.
Those were simpler times. Now, every prompt your team types into ChatGPT, every piece of AI-generated marketing copy, and yes, even that time someone asked Perplexity for restaurant recommendations during a business trip: it's all potentially discoverable ESI.
Or so say several recent court decisions:
- In the In re OpenAI, Inc. Copyright Infringement Litigation MDL (SDNY), Magistrate Judge Ona T. Wang ordered OpenAI to preserve and segregate all output log data that would otherwise be deleted (whether deletion would occur by user choice or to satisfy privacy laws). Judge Sidney H. Stein later denied OpenAI’s objection and left the order standing (now on appeal to the Second Circuit). This is the clearest signal yet that courts will prioritize litigation preservation over default deletion settings.
- In Tremblay v. OpenAI (N.D. Cal.), the district court issued a sweeping order requiring OpenAI “to preserve and segregate all output log data that would otherwise be deleted on a going forward basis.” The Tremblay court dropped a truth bomb on us: AI inputs — prompts — can be discoverable.
- And although not AI-specific, recent chat-spoliation rulings (e.g., Google’s chat auto-delete practices) show that judges expect parties to suspend auto-delete once litigation is reasonably anticipated. These cases serve as analogs for AI chat tools.
Your New Reality Check: What Actually Needs Preserving?
Let’s break down what’s now on your preservation radar:
The Obvious Stuff:
- Every prompt typed into AI tools (yes, even the embarrassing ones)
- All AI-generated outputs used for business purposes
- The metadata showing who, what, when, and which AI model
The Not-So-Obvious Stuff:
- Failed queries and abandoned outputs (they still count!)
- Conversations in AI-powered Slack bots and Teams integrations
- That “quick question” someone asked Claude about a competitor
The “Are You Kidding Me?” Stuff:
- Deleted conversations (spoiler alert: they’re often not really deleted)
- Personal AI accounts used for work purposes
- AI-assisted research that never made it into final documents
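To make the "who, what, when, and which AI model" idea concrete, here is a minimal, purely hypothetical sketch of what one preserved interaction could look like as a structured record. The field names, example values, and export file are assumptions for illustration, not a court-mandated schema or any e-discovery vendor's actual format.

```python
# Hypothetical sketch of a preservation record for a single AI interaction under a
# legal hold. Field names and the export file are illustrative assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class AIInteractionRecord:
    custodian: str    # who: the employee whose account produced the interaction
    tool: str         # which service: e.g. "ChatGPT", "Claude", a Slack or Teams bot
    model: str        # which model answered, if the tool exposes a version
    prompt: str       # the exact input, including failed or abandoned queries
    output: str       # the generated output, even if it never reached a final document
    captured_at: str  # when: ISO-8601 timestamp of collection


record = AIInteractionRecord(
    custodian="j.doe@example.com",
    tool="ChatGPT",
    model="unknown",  # preserve the interaction even when the version is not exposed
    prompt="Draft a termination clause for the Acme supply contract",
    output="(full generated text preserved verbatim)",
    captured_at=datetime.now(timezone.utc).isoformat(),
)

# Append-only export: user deletions and retention policies cannot touch the hold copy.
with open("legal_hold_ai_records.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")
```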
Of course, knowing what to preserve is only half the battle. The real challenge? Actually implementing AI-aware legal holds when your IT department is still figuring out how to monitor these tools, your employees are using personal accounts for work-related AI, and new AI integrations appear in your tech stack on a weekly basis.
Next week, we’ll dive into the practical playbook for AI preservation — including the compliance frameworks that actually work, the vendor questions you should be asking, and why your current legal hold software might be more helpful than you think (or more useless than you fear).
P.S. – Yes, this blog post was ideated, outlined, and brooded over with the assistance of AI. Yes, we preserved the prompts. Yes, we’re practicing what we preach. No, we’re not perfect at it yet either.
AI Insights
AI firm Anthropic agrees to pay authors $1.5bn for pirating work

Artificial intelligence (AI) firm Anthropic has agreed to pay $1.5bn (£1.11bn) to settle a class action lawsuit filed by authors who said the company stole their work to train its AI models.
The deal, which requires the approval of US District Judge William Alsup, would be the largest publicly-reported copyright recovery in history, according to lawyers for the authors.
It comes two months after Judge Alsup found that using books to train AI did not violate US copyright law, but ordered Anthropic to stand trial over its use of pirated material.
Anthropic said on Friday that the settlement would “resolve the plaintiffs’ remaining legacy claims.”
The settlement comes as other big tech companies including ChatGPT-maker OpenAI, Microsoft, and Instagram-parent Meta face lawsuits over similar alleged copyright violations.
Anthropic, maker of the Claude chatbot, has long pitched itself as the ethical alternative to its competitors.
“We remain committed to developing safe AI systems that help people and organisations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, Deputy General Counsel at Anthropic which is backed by both Amazon and Google-parent Alphabet.
The lawsuit was filed against Anthropic last year by best-selling mystery thriller writer Andrea Bartz, whose novels include We Were Never Here, along with The Good Nurse author Charles Graeber and The Feather Thief author Kirk Wallace Johnson.
They accused the company of stealing their work to train its Claude AI chatbot in order to build a multi-billion dollar business.
The company holds more than seven million pirated books in a central library, according to Judge Alsup’s June decision, and faced up to $150,000 in damages per copyrighted work.
His ruling was among the first to weigh in on how Large Language Models (LLMs) can legitimately learn from existing material.
It found that Anthropic’s use of the authors’ books was “exceedingly transformative” and therefore allowed under US law.
But he rejected Anthropic’s request to dismiss the case.
Anthropic was set to stand trial in December over its use of pirated copies to build its library of material.
The plaintiffs' lawyers called the settlement, announced on Friday, "the first of its kind in the AI era."
"It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners," said Justin Nelson, a lawyer representing the authors. "This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong."
The settlement could encourage more cooperation between AI developers and creators, according to Alex Yang, Professor of Management Science and Operations at London Business School.
“You need that fresh training data from human beings,” Mr Yang said. “If you want to grant more copyright to AI-created content, you must also strengthen mechanisms that compensate humans for their original contributions.”
AI Insights
Duke University pilot project examining pros and cons of using artificial intelligence in college
