Business
AI/R Company Launches Synsig, a Business Unit Specialized in ServiceNow

SAN FRANCISCO, July 30, 2025 (GLOBE NEWSWIRE) — AI Revolution Company (AI/R), a global leader in Artificial Intelligence (AI) transformation services, has announced the launch of Synsig, a brand dedicated exclusively to the implementation of Digital Platforms and Agentic AI using ServiceNow products.
ServiceNow is a globally recognized cloud-based software platform that offers solutions for workflow automation and service management in companies across various sectors, such as IT, HR, and customer service, enabling process optimization and a better experience for both customers and employees.
With a well-established methodology for implementing industry solutions, Synsig enters the market with a unique approach: bringing digital transformation to strategic areas, with specialized solutions in CRM, HR, AI agents, and IT services. “Synsig emerges at a strategic moment for ServiceNow, which earlier this year officially entered the CRM space. We have extensive expertise in the segment and a base of over 700 active clients, which enables us to pursue an effective cross-selling strategy and increase market penetration, with a portfolio carefully designed to drive digital transformation and help companies move beyond the BAU (Business As Usual) mindset,” says Rodrigo Rosa, Head of Sales and Business at Synsig.
Another key differentiator is Synsig’s team of certified and highly skilled specialists, which combines industry-leading ServiceNow capabilities with deep expertise in AI, data, and platform engineering. “Synsig is born with a culture rooted in ‘Human-AI Engineering,’ fostered by AI/R Company, one of the world’s most recognized firms for boosting human talent exponentially through AI platforms and tools,” highlights Alexis Rockenbach, Global CEO of AI/R Company. Driven by this culture, the new Synsig brand aims to deliver efficiency and scale the management of corporate and IT services, fueling continuous innovation across businesses.
About Synsig
Synsig is a global strategic consultancy specialized in ServiceNow that helps companies accelerate digital transformation through intelligent automation, enabling greater operational efficiency and improved business performance. As the ServiceNow powerhouse of the AI/R group, Synsig brings together advanced AI, data, and platform expertise to unlock smarter workflows and drive continuous innovation. With a unique, integrated approach, Synsig builds smart connections, linking people, processes, and data like synapses, to create seamless operations and faster outcomes, connecting intelligence to performance.
About AI/R
AI/R, headquartered in California, is an Agentic AI Software Engineering company that combines its ecosystem of highly specialized technology brands, proprietary AI platforms, and strategic partner platforms to amplify human intelligence and drive a revolution across industries, setting efficient standards for innovation and business productivity. By embedding AI into every aspect of its operations, AI/R’s mission is to make the AI revolution a revolution for everyone, empowering human talent while raising the bar for digital transformation. Let’s breathe in the future.
Milena Buarque Lopes Bandeira
Business
Trump asks US Supreme Court to uphold his tariffs after lower court defeat

President Donald Trump has asked the US Supreme Court to overturn a lower court decision that found many of his sweeping tariffs were illegal.
In a petition filed late on Wednesday, the administration asked the justices to quickly intervene to rule that the president has the power to impose such import taxes on foreign nations.
A divided US Court of Appeals for the Federal Circuit last week ruled 7-4 that the tariffs Trump brought in through an emergency economic powers act did not fall within the president’s mandate and that setting levies was “a core Congressional power”.
The case could upend Trump’s economic and foreign policy agenda and force the US to refund billions in tariffs.
Trump had justified the tariffs under the International Emergency Economic Powers Act (IEEPA), which gives the president the power to act against “unusual and extraordinary” threats.
In April, Trump declared an economic emergency, arguing that a trade imbalance had undermined domestic manufacturing and was harmful to national security.
While the appellate court ruled against the president, it postponed its decision from taking effect, allowing the Trump administration time to file an appeal.
In Wednesday night’s filing, Solicitor General John Sauer wrote that the lower court’s “erroneous decision has disrupted highly impactful, sensitive, ongoing diplomatic trade negotiations, and cast a pall of legal uncertainty over the President’s efforts to protect our country by preventing an unprecedented economic and foreign policy crisis”.
If the Supreme Court justices decline to hear the case, the ruling could take effect on 14 October.
In May, the New York-based Court of International Trade declared the tariffs were unlawful. That decision was also put on hold during the appeal process.
The rulings came in response to lawsuits filed by small businesses and a coalition of US states opposing the tariffs.
In April, Trump signed executive orders imposing a baseline 10% tariff as well as “reciprocal” tariffs intended to correct trade imbalances on more than 90 countries.
In addition to those tariffs, the appellate court ruling also strikes down levies on Canada, Mexico and China, which Trump argues are necessary to stop the importation of drugs.
The decision does not apply to some other US duties, like those imposed on steel and aluminium, which were brought in under a different presidential authority.
Business
Google told to pay $425m in privacy lawsuit

A US federal court has told Google to pay $425m (£316.3m) for breaching users’ privacy by collecting data from millions of users even after they had turned off a tracking feature in their Google accounts.
The verdict comes after a group of users brought the case claiming Google accessed users’ mobile devices to collect, save and use their data, in violation of privacy assurances in its Web & App Activity setting.
They had been seeking more than $31bn in damages.
“This decision misunderstands how our products work, and we will appeal it. Our privacy tools give people control over their data, and when they turn off personalisation, we honour that choice,” a Google spokesperson told the BBC.
The jury in the case found the internet search giant liable on two of three claims of privacy violations but said the firm had not acted with malice.
The class action lawsuit, covering about 98 million Google users and 174 million devices, was filed in July 2020.
Google says that when users turn off Web & App Activity in their account, businesses using Google Analytics may still collect data about their use of sites and apps but that this information does not identify individual users and respects their privacy choices.
Separately this week, shares in Google’s parent company Alphabet jumped by more than 9% on Wednesday after a US federal judge ruled that it would not have to sell its Chrome web browser but must share information with competitors.
The remedies decided by District Judge Amit Mehta emerged after a years-long court battle over Google’s dominance in online search.
The case centred on Google’s position as the default search engine on a range of its own products such as Android and Chrome as well as others made by the likes of Apple.
The US Department of Justice had demanded that Google sell Chrome – Tuesday’s decision means the tech giant can keep it but it will be barred from having exclusive contracts and must share search data with rivals.
Business
AI FOMO, Shadow AI, and Other Business Problems

I have been encountering some interesting news about how the AI industry is progressing. It feels like a slowdown in this space is definitely on the horizon, if it hasn’t already started. (Not being an economist, I won’t say bubble, but there are lots of opinions out there.) GPT-5 came out last month and disappointed everyone, apparently even OpenAI executives. Meta made a very sudden pivot and is reorganizing its entire AI function, ceasing all hiring, immediately after pouring apparently unlimited funds into recruiting and wooing talent in the space. Microsoft appears to be slowing its investment in AI hardware (paywall).
This isn’t to say that any of the major players are going to stop investing in AI, of course. The technology isn’t demonstrating spectacular results or approaching anything even remotely like AGI, just as many analysts and writers (including me) had predicted, but usage among businesses and individuals is persisting, so there’s still some incentive to keep pushing forward.
The 5% Success Rate
In this vein, I read the new report from MIT about AI in business with great interest this week. I recommend it to anyone looking for actual information about how AI adoption is going, from regular workers as well as the C-suite. The report has some headline takeaways, including an assertion that only 5% of AI initiatives in the business setting generate meaningful value, which I can certainly believe. (Also, AI is not actually taking people’s jobs in most industries, and in several industries AI isn’t having much of an impact at all.) A lot of businesses, it seems, have dived into adopting AI without a strategic plan for what it’s supposed to do or how that adoption will actually help them achieve their objectives.
I see this a lot, actually: executives who are significantly removed from the day-to-day work of their organization get gripped by FOMO about AI and decide AI must become part of their business, without stepping back to consider how it fits with the business they already have and the work they already do.
Screwdriver or Magic Wand?
Regular readers will know I’m not arguing AI can’t or shouldn’t be used when it can serve a purpose, of course. Far from it! I build AI-based solutions to business problems at my own organization every day. However, I firmly believe AI is a tool, not magic. It gives us ways to do tasks that are infeasible for human workers and can accelerate the speed of tasks we would otherwise have to do manually. It can make information clearer and help us better understand lengthy documents and texts.
What it doesn’t do, however, is create business success by itself. To be part of the 5% rather than the 95%, any application of AI needs to be founded on strategic thinking and planning, and, most importantly, clear-eyed expectations about what AI is and isn’t capable of. Small projects that improve particular processes can deliver huge returns without betting on a massive upheaval or “revolutionizing” of the business, even though they aren’t as glamorous or headline-producing as the hype. The MIT report discusses how vast numbers of projects start as pilots or experiments but never reach production, and I would argue that much of this is because either the planning or the clear-eyed expectations were missing.
The authors spend a significant amount of time noting that many AI tools are regarded as inflexible and/or incompatible with existing processes, resulting in failure to adopt among the rank and file. If you build or buy an AI solution that can’t work with your business as it exists today, you’re throwing away your money. Either the solution should have been designed with your business in mind and it wasn’t, meaning a failure of strategic planning, or it can’t be flexible or compatible in the way you need, and AI simply wasn’t the right solution in the first place.
Trading Security for Versatility
On the subject of flexibility, I had an additional thought as I was reading. The MIT authors emphasize that the internal tools companies offer their teams often “don’t work” in one way or another, but in reality much of the rigidity and many of the limits placed on in-house LLM tools exist because of safety and risk prevention. Developers don’t build non-functional tools on purpose; they have limitations and requirements to comply with. In short, there’s an unavoidable tradeoff: when your LLM is extremely open and has few or no guardrails, it will feel like it lets the user do more, or will answer more questions, because it does just that. But it does so at significant potential cost: liability, false or inappropriate information, or worse.
Of course, regular users are likely not thinking about this angle when they pull up the ChatGPT app on their phone with their personal account during the work day; they’re just trying to get their jobs done. InfoSec communities are rightly alarmed by this kind of thing, which some circles are calling “Shadow AI” rather than shadow IT. The risks from this behavior can be catastrophic: proprietary company data handed over to an AI service freely, without oversight, to say nothing of how the output may be used inside the company. This problem is really, really hard to solve. Employee education, at all levels of the organization, is an obvious step, but some degree of shadow AI is likely to persist, and security teams are struggling with it as we speak.
Conclusion
I think this leaves us in an interesting moment. I believe the winners in the AI rat race are going to be those who were thoughtful and careful, applying AI solutions conservatively rather than overturning a model of success that has worked up to now just to chase a shiny new thing. A slow and steady approach can help hedge against risks, including customer backlash against AI, as well as many others.
Before I close, I just want to remind everyone that these attempts to build the equivalent of a palace when a condo would do fine have tangible consequences. We know that Elon Musk is polluting the Memphis suburbs with impunity by running illegal gas-generator-powered data centers. Data centers are consuming double-digit percentages of all power generated in some US states. Water supplies are being exhausted or polluted by these same data centers that serve AI applications to users. Let’s remember that the choices we make are not abstract, and be conscientious about when we use AI and why. The 95% of failed AI projects weren’t just expensive in terms of the time and money businesses spent: they cost us all something.
Read more of my work at www.stephaniekirmer.com.
Further Reading
https://garymarcus.substack.com/p/gpt-5-overdue-overhyped-and-underwhelming
https://fortune.com/2025/08/18/sam-altman-openai-chatgpt5-launch-data-centers-investments
https://www.theinformation.com/articles/microsoft-scales-back-ambitions-ai-chips-overcome-delays
https://builtin.com/artificial-intelligence/meta-superintelligence-reorg
https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
https://www.ibm.com/think/topics/shadow-ai
https://futurism.com/elon-musk-memphis-illegal-generators
https://www.visualcapitalist.com/mapped-data-center-electricity-consumption-by-state
https://www.eesi.org/articles/view/data-centers-and-water-consumption