Business
Soft2Bet: Addressing Privacy Concerns of Using AI in Business

Nowadays, artificial intelligence is no longer just an interesting novelty from the world of technology, it is firmly embedded in our daily lives. Just think about it: chatbots answer any of your questions around the clock, and smart algorithms literally read your mind, selecting content that perfectly suits your tastes and preferences. But here comes a rather important question: how permissible is it to allow AI to learn from copyrighted information? Agreed, the question is far from simple!
This ethical dilemma makes us wonder where the fine line between innovation and respect for intellectual property lies. After all, companies supply artificial intelligence with a huge amount of data every day, among which may well be copyrighted materials. And while disputes and legal nuances have not gone away, many businesses are already actively moving forward by integrating AI into their processes.
And here, of course, Soft2Bet is a great example! The company is actively applying artificial intelligence to create the most personalized user experience and improve customer interaction. This approach not only speeds up internal processes, but also opens up completely new opportunities for optimization and innovation. Are you ready to trust smart algorithms for personalization and customer service?
Modern AI Approaches
Imagine using artificial intelligence to optimize literally every business function, reduce costs, and open the door to a host of innovations. Sounds exciting, doesn’t it? By the way, according to McKinsey, 78% of companies surveyed already use AI in at least one business process, the numbers speak for themselves and show how deeply AI is penetrating our everyday lives. However, it’s no surprise that AI is taking over more and more areas of business!
But what does it look like in practice? Here are some clear and vivid examples:
- Content and media: Streaming platforms, news sites and social networks everywhere are using AI algorithms to make personalized recommendations, thereby improving user experience and driving increased subscriptions and sales. For example, have you ever wondered why Spotify so accurately picks a new album similar to your favorite music? It’s simple: it’s the work of smart algorithms based on your preferences! In turn, companies like Soft2Bet are using AI to create content that not only hooks audiences, but keeps their attention.
- Retail and e-commerce: Online retailers have learned to literally “guess” what customers want, thanks to AI. When your favorite online store offers you a product that perfectly matches your interests, it’s no longer magic, but the result of intelligent systems processing huge amounts of data about your behavior.
- Customer service: AI-powered chatbots are seriously offloading the work of call centers. They answer the most common questions, allowing employees to focus on more complex and important tasks. This approach saves time, reduces stress and dramatically improves customer interactions, positively impacting brand reputation.
These examples clearly demonstrate that artificial intelligence has already become a key element of a successful business. The question is no longer whether or not to use it, the question is now another: how to do it as efficiently and ethically as possible? After all, technology does not stand still, and each innovation opens up new and new opportunities.
Soft2Bet, for example, is actively implementing artificial intelligence to create the most personalized user experience possible, strengthening its market leadership. Now more than ever it is important to understand how exactly to apply such innovative tools to achieve real results and create a sustainable advantage over competitors. You must agree that the future has already arrived, all that’s left is to capitalize on its opportunities!
Ethical Concerns
There is no doubt that artificial intelligence is opening up new horizons for business and accelerating innovation, but behind every technological breakthrough there are also important ethical questions. After all, how often do we wonder: who authorizes AI to learn from copyrighted content? Today, algorithms train on books, web pages, blogs, videos, and a host of other sources – and much of this information is protected by law. Isn’t that alarming?
But that’s not all. The question remains: how exactly will AI use this data? If companies use AI to analyze customer behavior or preferences without their explicit consent, aren’t they risking a privacy breach? Furthermore, who is responsible for the data, and what might happen to it if it falls into the wrong hands?
Gradually, as regulators and governments become accustomed to the widespread use of AI, we are seeing stricter regulations to control the collection, processing and analysis of personal information. Some companies are already committing to implementing ethical practices around AI, becoming more transparent about how they use data. Soft2Bet, for example, is committed to combining innovation with a responsible approach, protecting its customers’ information and adhering to high ethical standards.
How Soft2Bet Cares for Ethics
Soft2Bet successfully integrates AI into its processes, providing a personalized experience that really works. Not only does the company strictly comply with GDPR requirements by allowing users to choose what data to collect, but it also analyzes user behavior in real-time to instantly adjust its offerings. Isn’t it great when every interaction is turned into a unique experience and recommendations are customized to personal preferences and interaction history?
All that said, one-size-fits-all approaches are a thing of the past. Thanks to AI, businesses can process massive amounts of data and turn it into accurate predictions, improving efficiency and customer satisfaction. According to the “Generative AI Market Report 2025-2030” report from IoT Analytics, the generative AI market will exceed $25.6 billion in 2024, clearly demonstrating the rapid growth of this technology. And a McKinsey study shows that 71% of respondents said their organizations regularly use AI in at least one business function, up significantly from previous years. Don’t you think this is the kind of innovation that makes businesses more agile and adaptive? Companies that can not only keep up with changing norms, but also proactively adopt AI to personalize services, are guaranteed to be leaders in the digital age.
Business
Aha moments, the ‘first ten hours’, and other pro tips from business leaders building AI-ready workforces

As businesses face pressure to bring new AI tools on board, they have the dual challenge of effectively incorporating the technology into their operations and of helping their workforce make the best use of the technology.
Longstanding methods for assessing the skills and performance of an employee, as well as hiring practices, are being upended and re-imagined, according to business leaders who spoke at the Fortune Brainstorm Tech conference on Tuesday in Park City, Utah.
Technical skills, contrary to what you might think, are not paramount in the age of AI. In fact, for many employers, technical skills are becoming less important.
“For the first time this summer on our platform we saw a shift,” said Hayden Brown, CEO of Upwork, an online jobs marketplace for freelancers. In the past, when Upwork asked employers on its platform about the most important skills they were hiring for, the answer invariably involved deep expertise in certain technical areas, Brown said. “For the first time this summer, it’s now soft skills. It’s human skills; it’s things like problem solving, judgement, creativity, taste.”
Jim Rowan, the head of AI at consulting firm Deloitte, which sponsored the Brainstorm discussion, said an employee’s “fluency” should not be an end goal in itself. More important is intellectual curiosity around new tools and technology.
And that’s something that needs to start at the top.
“We’ve done a lot of work with executive teams to make sure the top levels of the organization and the boards are actually familiar with AI,” said Rowan. “That helps because then they can communicate better with their teams and see what they’re doing.”
For Toni Vanwinkle, VP of Digital Employee Experience at Adobe, it’s critical for employees at all levels of an organization to have an “aha moment” with AI technology. And the best way to bring that about is for each employee to get their “first ten hours” in.
“Go play with it,” Vanwinkle says. “Sort your email box, take the notes in your meeting, create a marketing campaign, whatever it is that you do.” Through that initial process of personal exploration, you start to understand the potential of the technology, she says.
The next step, Vanwinkle says, is collaboration, discussions, and experimentation among colleagues within the same departments or functionalities.
“This whole spirit of experiment, learn fast. That twitch muscle can turn into something of value when people talk openly,” Vanwinkle says.
The importance of embracing experimentation, and fostering it as a value within the organization, was echoed by Indeed chief information officer Anthony Moisant.
“I think about the pilots we run, most of them fail. And I’m not embarrassed at all to say that,” Moisant says. It all comes down to what a particular organization is optimizing for, and in the case of Indeed, Moisant says, “what we go for is fast twitch muscle. Can we move faster?”
By encouraging more low stakes experiments with AI, companies can gain valuable insights and experience that employees can leverage quickly when it counts. “The only way to move faster is to take a few bets early on, without real long term strategic ROI,” says Moisant.
Workday Vice President of AI Kathy Pham emphasizes that with new tools like AI, getting a full picture of an employee’s value and performance may take a bit longer than some people are used to. “Part of the measurement is better understanding what the return is and over what period of time,” she said.
Business
AI agents in business create new risks & urgent security needs

Radware has released new research analysing the cybersecurity risks linked to the growing adoption of agentic artificial intelligence (AI) systems in enterprise environments.
The report, titled ‘The Internet of Agents: The Next Threat Surface,’ examines how AI agents driven by large language models (LLMs) are being integrated into business operations. These systems differ from standard chatbots by acting autonomously, executing tasks, and collaborating with other digital agents using protocols such as the Model Context Protocol (MCP) and Agent-to-Agent (A2A).
Expanded attack surface
Organisations are increasingly deploying LLM-powered AI agents into customer service, development, and operational workflows. Unlike traditional software, these agents are capable of reasoning, executing commands, and initiating actions autonomously across enterprise networks.
The report notes that as these agents interact with business systems, they establish complex transitive chains of access to sensitive resources. This can complicate tracking and securing business processes with existing cybersecurity measures. According to Radware, these pathways present “complex pathways into sensitive enterprise resources that are difficult to track or secure with existing defences.”
New protocols and exposures
The adoption of protocols such as MCP and A2A enables enhanced interoperability and scalability for AI agents across different business processes, but this also introduces new risks. The report highlights threats such as prompt injection, tool poisoning, lateral compromise, and malicious handshakes, which take advantage of these emerging protocols.
Prompt injection attacks, in particular, are identified as a growing risk. By embedding covert instructions in content like emails or web pages, attackers can manipulate AI agents to exfiltrate data or initiate unauthorised actions – often without any indication to the end user. The research states: “Adversaries can embed hidden instructions in emails, web pages, or documents. When an AI agent processes that content, it may unwittingly exfiltrate data or trigger unauthorized actions – without the user ever clicking a link or approving a request.”
Lower barriers to cybercrime
The report observes that new “dark AI ecosystems” are emerging, which lower the technical barriers to cybercrime. Black-hat platforms such as XanthoroxAI provide access to offensive AI tools that automate previously manual attacks, including malware creation and phishing campaigns. These tools, offered on a subscription basis, enable less-experienced attackers to develop and deploy exploits more easily.
Radware’s analysis also shows that AI accelerates the weaponisation of new vulnerabilities. The report references research demonstrating that GPT-4 can develop working exploits for recently disclosed vulnerabilities more rapidly than experienced human researchers, reducing the window for IT teams to patch vulnerable systems before attackers strike.
Changing digital landscape
The emergence of the so-called ‘Internet of Agents’ is likened to previous digital shifts, such as the rise of the Internet of Things. In this new ecosystem, autonomous digital actors with memory, reasoning, and action capabilities are increasingly interconnected, resulting in greater operational efficiency but also expanded risk exposure.
Radware’s report argues that organisations must adjust their security models to account for the new roles played by AI agents in the enterprise. With these systems acting as decision-makers, intermediaries, and operational partners, the need for effective governance and security oversight is heightened.
“We are not entering an AI future – we are already living in it,” said [Insert Radware Spokesperson]. “The agentic ecosystem is expanding rapidly across industries, but without strong security and oversight, these systems risk becoming conduits for cybercrime. The businesses that succeed will be those capable of delivering trustworthy, secure AI experiences.”
Security recommendations
The report sets out a series of recommendations for enterprises to protect against the unique risks posed by autonomous AI agents. These include:
- Treating LLMs and AI agents as privileged actors, subject to strict governance and access controls.
- Integrating red-teaming and prompt evaluation exercises into software development lifecycles.
- Evaluating protocols such as MCP and A2A as security-critical interfaces, rather than mere productivity tools.
- Monitoring dark AI ecosystems to stay aware of how adversaries are adapting and exploiting new tools.
- Investing in detection, sandboxing, and behavioural monitoring technologies tailored for autonomous AI systems.
- Recognising that AI-powered defensive capabilities will play an increasingly important role in combating AI-driven threats.
The report concludes by noting that AI agents represent a significant technology shift for businesses. Although these systems hold potential for efficiency and economic growth, they also introduce risks that enterprises must urgently address as the boundaries between helpful tools and security threats continue to blur.
Business
San Francisco Is Investigating Scale AI Over Its Labor Practices

The city of San Francisco is investigating Scale AI over its labor practices, a Scale AI spokesperson confirmed to Business Insider.
Scale AI, which is based in San Francisco, relies heavily on a vast army of people it considers contractors to train tech companies’ latest artificial intelligence models. Meta bought almost half of Scale AI for $14 billion in a blockbuster AI deal this summer.
The city’s investigation is being led by its Office of Labor Standards Enforcement (OLSE), which oversees sick leave, minimum wage, overtime pay, and other regulations for San Francisco workers.
Scale AI spokesperson Natalia Montalvo told Business Insider that the startup is cooperating with the OLSE to provide the information they need and that Scale AI is fully compliant with local laws and regulations.
San Francisco’s investigation into Scale AI is limited to city residents who worked for the startup — including remotely — over the last three years, according to a now-deleted notice posted by Maura Prendiville, a compliance officer at the OLSE, in a subreddit for Outlier AI, a gig work platform run by Scale AI.
While the notice didn’t specify what types of labor practices the city is investigating, it did mention that investigators are looking to speak to people who worked for Scale AI as “taskers” and “freelancers” rather than the startup’s full-time employees.
The investigation’s existence doesn’t mean Scale AI has broken the law, and the city could find it in favor of Scale AI — or drop its probe altogether.
The OLSE declined to answer further questions about the probe, citing its policy of not commenting on ongoing investigations. The agency has the authority to levy fines for labor violations.
In the Reddit post, Prendiville specified that San Francisco is seeking to speak to people who worked for Scale AI through Outlier AI and Smart Ecosystem, another Scale AI platform. She also wrote that she seeks to speak with people who worked for Scale AI through HireArt, a third-party hiring agency, and the gig work marketplace Upwork.
Upwork said it has not been contacted by OLSE.
“Worker classification and compliance with labor regulations are ultimately the responsibility of the hiring business,” an Upwork spokesperson said. “As a general matter, Upwork does not play a role in those determinations.”
Montalvo, the Scale AI spokesperson, said that the feedback the company gets from its contributors is “overwhelmingly positive.”
“Our dedicated teams work hard to ensure contributors are paid fairly, feel supported, and can access the flexible earning opportunities they value,” she said.
It’s not the first time Scale AI has been investigated by labor regulators. It was also the subject of a federal Department of Labor investigation that was dropped this summer, TechCrunch reported.
Some Scale AI workers have previously alleged that the company illegally underpaid them, denied them benefits like sick leave, and misclassified them as contractors in two lawsuits filed over the past year in San Francisco’s superior court.
Meta declined to comment. HireArt didn’t respond to a request for comment.
Have a tip? Contact this reporter via email at crollet@insider.com or on Signal and WhatsApp at 628-282-2811. Use a personal email address, a nonwork WiFi network, and a nonwork device; here’s our guide to sharing information securely.
-
Business2 weeks ago
The Guardian view on Trump and the Fed: independence is no substitute for accountability | Editorial
-
Tools & Platforms4 weeks ago
Building Trust in Military AI Starts with Opening the Black Box – War on the Rocks
-
Ethics & Policy2 months ago
SDAIA Supports Saudi Arabia’s Leadership in Shaping Global AI Ethics, Policy, and Research – وكالة الأنباء السعودية
-
Events & Conferences4 months ago
Journey to 1000 models: Scaling Instagram’s recommendation system
-
Jobs & Careers2 months ago
Mumbai-based Perplexity Alternative Has 60k+ Users Without Funding
-
Podcasts & Talks2 months ago
Happy 4th of July! 🎆 Made with Veo 3 in Gemini
-
Education2 months ago
Macron says UK and France have duty to tackle illegal migration ‘with humanity, solidarity and firmness’ – UK politics live | Politics
-
Education2 months ago
VEX Robotics launches AI-powered classroom robotics system
-
Funding & Business2 months ago
Kayak and Expedia race to build AI travel agents that turn social posts into itineraries
-
Podcasts & Talks2 months ago
OpenAI 🤝 @teamganassi