Tools & Platforms
This smart home tech is another way Apple is falling behind in AI

Amazon, Google, and Samsung are all working on an exciting way to bring AI to smart homes – and Apple risks being left behind.
Samsung is first to launch the new feature: the ability to simply tell your smart home app, in natural language, what you want it to do …
Samsung SmartThings is effectively the Korean company’s equivalent of HomeKit. All compatible devices can be controlled through a single app on the company’s smartphones, in exactly the same way the Home app can be used on iPhones.
Currently, configuring a new automation in Apple’s Home app isn’t a very user-friendly experience for non-techy users. What Samsung has just announced, and which The Verge reports is available now in its app, is a Routine Creation Assistant that automates scene creation.
This lets you type a phrase describing what you want your home to do in the SmartThings app — like “turn off all the lights whenever I leave the house” — and it will set it up without you needing to configure each device or setting.
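Under the hood, a feature like this amounts to translating free-form text into a structured routine the platform can execute. Here is a minimal Python sketch of that idea; the schema, device names, trigger names, and the toy phrase matching are all hypothetical stand-ins rather than the actual SmartThings API, and a real assistant would use a language model constrained to emit this kind of structure.

```python
from dataclasses import dataclass, field

# Hypothetical schema for the structured routine a natural-language
# assistant might generate from a typed phrase. Names here are
# illustrative, not the real SmartThings API.

@dataclass
class Action:
    device: str   # e.g. "all_lights"
    command: str  # e.g. "off"

@dataclass
class Routine:
    name: str
    trigger: str  # e.g. "last_person_leaves"
    actions: list[Action] = field(default_factory=list)

def parse_phrase(phrase: str) -> Routine:
    """Toy stand-in for the assistant's language model."""
    if "leave the house" in phrase and "lights" in phrase:
        return Routine(
            name="Away lights off",
            trigger="last_person_leaves",
            actions=[Action(device="all_lights", command="off")],
        )
    raise ValueError("phrase not understood")

print(parse_phrase("turn off all the lights whenever I leave the house"))
```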
While that particular example is easy enough to do in Apple’s Home app, as there’s a specific “when the last person leaves home” trigger, other routines can be trickier for normal people.
For example, I have a timed automation for when I start work. This closes my office blind, switches on lighting to a cool color temperature for concentration, and switches off lights in other rooms.
Configuring this required me to create a scene, add accessories, specify their state, and then create an automation to activate that scene at a certain time on certain days (I do it this way so that I also have the option of manually activating the scene). For someone who isn’t used to the kind of flow and logic involved, creating this kind of thing can definitely be intimidating.
If Samsung’s app lets you create arbitrary automations as easily as telling the AI what you want, that’s a huge step forward in making smart home tech appealing to mass-market consumers.
And it’s not just Samsung: both Amazon and Google are already beta-testing exactly the same type of natural-language functionality. So pretty soon, Apple – once the leader in making smart home tech friendlier – could be the only major platform not to offer this.
Another area where Samsung is pulling ahead is time delays.
Another update to SmartThings routines is the option to schedule multiple timed steps using a Delay Actions feature. For example, Samsung says, “Users can now create a ‘Good Morning’ routine that turns on bedroom lights at 7:00 a.m. [and] starts the coffee maker 15 minutes later.”
I’ve often wanted that ability: for example, a goodnight routine that switches on the bedroom lights and turns off the rest, but waits 30 seconds before switching off the hallway lighting to show the way to the bedroom.
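For the curious, that logic is simple to express in code. A minimal sketch, assuming a hypothetical set_light() helper in place of a real device API:

```python
import asyncio

# A minimal sketch of a routine with a delayed step, assuming a
# hypothetical set_light() helper in place of a real device API.

HALLWAY_DELAY = 30  # seconds; shorten when trying this out

async def set_light(room: str, on: bool) -> None:
    print(f"{room}: {'on' if on else 'off'}")  # a real API call would go here

async def goodnight() -> None:
    await set_light("bedroom", True)       # light the destination first
    await set_light("living room", False)  # the rest go off immediately
    await asyncio.sleep(HALLWAY_DELAY)     # delayed step: keep the way lit
    await set_light("hallway", False)      # then darken the hallway

asyncio.run(goodnight())
```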
Finally, Samsung also lets you opt for a confirmation notification: you tap it to confirm you want a routine to run. This could be useful where you can anticipate potential clashes between timed automations and manually activated scenes, depending on things like when people get up in the morning.
Shortcuts would be one way of doing this kind of thing, but that’s a lot clunkier than being able to do everything in one simple app. Apple has some catching up to do here.
Tools & Platforms
How to Prioritize the Ethical, Responsible Use of AI

Though it may seem like AI has been around for years and that we already have a good understanding of its capabilities, the reality is more complex. The security industry has long used AI in the form of video analytics, but other industries are just beginning their AI journeys, enticed by the promise of new efficiencies and advanced capabilities.
Every organization, regardless of industry or customer base, appears to be pursuing AI in some form. But many are still grappling with a fundamental question: What does AI actually do for organizations today? What are the real benefits and, perhaps more importantly, what potential long-term risks are organizations taking on?
Indeed, customer concerns are rising. One survey found that 63 percent of customers are concerned about potential ethics issues with AI tools, including bias and discrimination, and more than 75 percent are concerned about AI producing inaccurate or misleading information.
The AI technology sector is still maturing, and that evolution is likely to continue for years to come. But that doesn’t mean organizations should wait on the sidelines for the ethical dust to settle. In fact, now is the time to thoughtfully engage with AI. The priority should be to assess opportunities, evaluate risks and ensure that when AI is used, it is built upon a solid ethical foundation — one that supports responsible innovation and assuages customer concerns. At the same time, the speed of AI development can bring those ethical challenges to the forefront, making it more important than ever to choose the right technology partners to navigate the journey with you.
How to Implement AI Responsibly
- Define clear business use cases.
- Assess risks to operations, compliance and customers.
- Prioritize fairness, transparency and privacy.
- Establish governance and ethical frameworks early.
- Choose technology partners who share your values.
AI Means New Opportunities – and New Risks
One widely accepted truth is that AI has enormous potential to create new business opportunities. With these opportunities come new kinds of risk, however, and organizations must move forward with intention and care.
To tap into AI’s full potential, organizations first need to understand the exact problem they’re trying to solve. Is the goal to optimize workflows through automation? Improve customer service? Enhance data analysis? Once you’ve clearly defined the use case, the next step is to assess what could go wrong. What happens if an AI-automated process fails? How would that impact operations, customers or compliance? Are the risks external, internal or both? By conducting this thorough, nuanced analysis, organizations can make informed decisions about which AI tools to deploy and with which vendors or partners.
A good example of this is facial recognition technology. Although early discussions of facial recognition often centered around ethical concerns, the technology has evolved over time to become a useful and accepted tool when deployed responsibly and in the proper context. This shift didn’t happen by chance — it occurred because developers, regulators, and end-users began to approach it with greater nuance. Privacy laws have also helped to create clear boundaries, and the video surveillance market has shifted to place a greater emphasis on responsible use. Transparency and human oversight are important, and today’s providers increasingly recognize that.
Building on a Regulated and Responsible Foundation
For responsible AI deployment to succeed, it must rest on a solid ethical and technological foundation. Like the AI technologies themselves, ethical frameworks and regulations represent both an opportunity and a challenge.
The broader conversation around responsible AI is still evolving, and society has yet to reach consensus on what ethical AI should look like. But that doesn’t mean individual organizations can afford to wait. Internal discussions should start now, defining what ethical AI means for your team, what your limits are and how you plan to ensure compliance and transparency.
Ethical challenges range from biased decision-making and unreliable predictions to privacy violations and legal risks. Technologies like facial recognition, behavioral monitoring and predictive analytics can all raise complex questions about consent, data use and fairness. These concerns can’t be fully solved with one regulation or policy. But by facing them head-on, organizations can turn potential pitfalls into opportunities for leadership and innovation.
For instance, AI-enabled facial recognition is becoming more common across the globe, particularly in access control applications. The leaders in this space are those that are communicative and transparent about how these sensitive technologies work and how privacy is protected, with many offering opt-in options for these solutions to foster trust and maintain ethical technology use.
Organizations that begin considering responsible AI practices early in the development process are better positioned to manage concerns proactively. By aiming to prioritize fairness, transparency and data privacy from the start, rather than reacting after the fact, they create stronger foundations for long-term success. In my own experience, this also lays helpful groundwork for later steps, such as creating governance practices and review boards to address new AI developments.
One example is the introduction of the AI Act in Europe. By engaging with it early and using the Act as a guideline to shape the way forward even before all of its provisions become mandatory, organizations will be better prepared to align product roadmaps with the coming legislation. Additionally, by establishing their framework and positioning early, organizations stand out as proactive AI leaders, able to guide other organizations and customers through what comes next.
Partnering With Purpose
Once your organization has taken the time to look inward, the next step is to project that clarity outward. Today’s businesses can benefit from having a clear point of view on AI, ideally supported by thoughtful reflection and planning around use cases and ethics. Not every organization needs a fully documented ethical framework, but it’s important to be comfortable discussing the topic with potential partners and customers.
Armed with this, you can evaluate potential partners like developers, integrators and vendors, not only on technological merit but on shared values. If a partner aligns with your stance on ethics, it becomes much easier to build a trusted, long-term relationship.
Transparency is at the heart of this process. Organizations that are open about their AI ethics not only attract better-aligned partners, but they also gain internal and external trust. This isn’t just about compliance. It’s about building credibility, mitigating future issues and fostering innovation on a reliable, values-driven platform. The AI ecosystem is moving fast, but speed doesn’t need to come at the cost of responsibility. In fact, the best organizations will be those that balance both.
Turning Excitement Into Responsible Action
AI continues to develop as a dynamic, evolving field still very much in its hype cycle, creating opportunities for organizations, especially those ready to move quickly and carefully. Organizations shouldn’t be afraid to deploy AI, but they should do so thoughtfully, strategically and ethically. That means knowing your goals, understanding your risks, building a strong internal point-of-view and selecting partners who share your values.
The challenges are real, but so are the opportunities. And for organizations that choose to engage responsibly, AI offers not just a competitive advantage, but a chance to lead the way toward a smarter, more ethical digital future.
Tools & Platforms
Common Pitfalls That Keep Projects From Taking Off

The promise of AI in the world of tax is compelling: streamlined compliance, predictive insights, and newfound efficiency. Yet for all the enthusiasm, many tax departments find their ambitious AI projects grounded before they ever reach cruising altitude. The reasons for this often have less to do with the technology itself and more to do with the realities of data, people, and processes.
Starting Smart, Not Big
The journey from understanding AI concepts to actually implementing them is where the first stumbles often occur. A common misstep is starting too big. Tax leaders sometimes try to redesign entire processes at once, hoping to deliver an end-to-end transformation right out of the gate. The result is usually the opposite: projects drag on, resources are stretched thin, and momentum is lost.
Another common trap is picking the wrong first project, jumping straight into high-stakes initiatives that require heavy integrations, while ignoring smaller wins like data extraction. The safer bet is to start with a narrow, low-risk pilot like automating some spreadsheet workflows. It’s the kind of pilot you can complete in a month or two, and if it doesn’t work out, nothing’s lost and you simply fall back on your manual process.
There’s also a tendency to focus on the tool instead of the outcome. AI gets a lot of attention, and some teams feel pressure to use it even when a simpler automation approach would do the job. The label “AI-powered” shouldn’t matter as much as whether the solution solves the problem effectively.
In short, the common mistakes are clear: trying to boil the ocean, chasing perfection too soon, or letting the hype around AI dictate decisions. The smarter path is to start small and scale thoughtfully from there.
Too Many Projects, Not Enough Progress
With all the buzz around generative AI, many tax teams fall into the trap of running pilot after pilot. For example, a tax team might launch pilots for AI-driven invoice scanning, chatbot support for tax queries, and predictive analytics for audit risks. Each pilot sounds promising, but with limited staff and budget, none of them gets the attention needed to succeed. Six months later, the team has three unfinished projects, no live solution, and a frustrated leadership asking why AI hasn’t delivered. This flurry of activity creates the illusion of progress but results in a trail of half-finished experiments.
This “pilot fatigue” often comes from top-down pressure to be seen as innovating with AI. Leaders want momentum, but without focus, the energy gets diluted. Instead of proving value, the department ends up with scattered efforts and no clear win to point to.
The way forward is prioritization. Not every idea deserves a pilot, and not every pilot should move ahead at the same time. The most successful teams pick a few feasible projects, give them proper resources, and see them through beyond the prototype stage. In the end, it’s better to have one working solution in production than a stack of unfinished experiments.
From Prototype to Production
A common stumbling block for tax teams is underestimating the leap from prototype to production. Some estimates place the AI project failure rate as high as 80%, which is almost double the rate of corporate IT project failures. Building a proof of concept in a few weeks is one thing, but turning it into a tool people rely on every day is something else entirely. This is where many AI projects stall and why so many never make it beyond the pilot stage.
The problem usually isn’t the technology itself. It’s the messy reality of moving from a controlled demo into a live environment. A prototype might run smoothly on a clean sample dataset, but in production the AI has to handle the company’s actual data that may be incomplete, inconsistent, or scattered across systems. Cleaning, organizing, and integrating that information is often most of the work, yet it’s rarely factored into early pilots.
Integration poses another challenge. A model that runs neatly in a Jupyter notebook isn’t enough. To be production-ready, it must plug into existing workflows, interact with legacy systems, and be supported with monitoring and error handling. That typically requires a broader team of engineers, operations specialists, even designers. These are roles many tax departments don’t have readily available. Without them, promising pilots get stuck in limbo.
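To make the gap concrete, here is a minimal sketch of the kind of plumbing production demands that a notebook prototype skips: logging, error handling, and a fallback path. The extract_fields() function and the manual-review queue are hypothetical placeholders, not any particular product’s API.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("extraction")

def extract_fields(document: str) -> dict:
    """Stand-in for the pilot's model call."""
    if not document.strip():
        raise ValueError("empty document")
    return {"vendor": "...", "amount": "..."}

def process(document: str, manual_queue: list[str]) -> dict | None:
    try:
        result = extract_fields(document)
        log.info("extracted %d fields", len(result))
        return result
    except Exception:
        log.exception("extraction failed; routing to manual review")
        manual_queue.append(document)  # humans keep the process moving
        return None

queue: list[str] = []
process("Invoice #123 ...", queue)
process("", queue)  # a bad input is logged and queued, not silently dropped
print(f"{len(queue)} document(s) awaiting manual review")
```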
The lesson is simple: tax teams need to plan from day one for data readiness, system integration, and long-term ownership. Without that preparation, pilots risk becoming one-off experiments that never make it past the demo stage.
Building on a Shaky Data Foundation
AI projects succeed or fail on the quality of their data. For tax teams, that’s often the first and toughest hurdle. Information is spread across different systems, stored in inconsistent formats, and sometimes incomplete. In many cases, key details are still buried in PDFs or email threads instead of structured databases. When an AI model has to work with that kind of patchy input, the results are bound to be flawed.
The unglamorous but essential part of AI is cleaning data and building reliable pipelines to feed information into the system. It’s rarely the exciting part, but it’s the foundation; without it, no model will perform consistently in production. The challenge is that, in the middle of all the AI hype, executives are often more willing to fund the “flashy” AI projects than the “boring” data cleanup work that actually makes them possible.
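What does that cleanup work look like in practice? A minimal sketch using pandas (2.x), with hypothetical invoice records standing in for data pulled from several systems in inconsistent formats:

```python
import pandas as pd

# Hypothetical invoice records from two systems: inconsistent date
# formats, a duplicate record, and a missing value.
raw = pd.DataFrame({
    "invoice_id": ["A-1", "A-1", "B-7", "C-3"],
    "date": ["2024-01-31", "2024-01-31", "31/01/2024", None],
    "amount": ["1,000.00", "1,000.00", "250", "99.50"],
})

clean = (
    raw.drop_duplicates(subset="invoice_id")  # same record from two systems
       .assign(
           date=lambda d: pd.to_datetime(d["date"], format="mixed", errors="coerce"),
           amount=lambda d: pd.to_numeric(d["amount"].str.replace(",", "")),
       )
)
print(clean[clean["date"].isna()])  # rows that still need human follow-up
```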
The takeaway is simple: treat data readiness as a core step in your AI journey, not an afterthought. A few weeks spent getting the data right can save months of wasted effort later.
Automating a Broken Process
A common pitfall for tax teams is dropping AI into processes that are already complex or inefficient. Automating a clunky workflow doesn’t fix the problems; it just makes them harder to manage.
AI adoption isn’t about layering a shiny new tool on top of old habits. It’s about rethinking the process as a whole. If AI takes over Task A, then Tasks B and C may need to change too. Reviewing the process upfront makes it easier to spot redundancies and cut steps that no longer add value.
The takeaway is simple: don’t just automate what you already do. Use AI as a chance to simplify and modernize. Otherwise, you risk hard-wiring inefficiency into the future of your operations.
The Trap of 100% Accuracy
Tax professionals are trained to value precision, so it’s no surprise many are reluctant to trust an AI tool unless it delivers flawless answers. The problem is, that bar is unrealistic with generative AI. These systems don’t “know” facts the way a database does. They predict words that are statistically likely to follow each other, which makes them great at generating fluent text but prone to confident-sounding mistakes, often called hallucinations.
Tax leaders need to understand this isn’t a bug that will soon be patched. It’s the nature of how these models work today. That doesn’t mean they’re unusable, but it does mean the goal shouldn’t be perfection. Instead, the focus should be on managing the risks and setting up safeguards that make AI outputs reliable enough for practical use.
On the technical side, tools like retrieval-augmented generation (RAG) can help by grounding AI answers in trusted documents instead of letting the model make things up. On the process side, though, there’s no way around human review. If the output involves regulations, case law, or financial figures, a qualified professional still needs to check it.
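To illustrate the grounding idea, here is a stripped-down sketch of a RAG-style flow in Python. The trusted snippets and the naive word-overlap scoring are purely illustrative; a production system would use embeddings, a vector store, and a real model call.

```python
# Illustrative trusted snippets; real systems retrieve from a
# document store using embeddings rather than word overlap.
TRUSTED_DOCS = [
    "Standard VAT rate is 20%. A reduced rate of 5% applies to ...",
    "Corporate returns are due 12 months after the accounting period ends ...",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        TRUSTED_DOCS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Constrain the model to answer only from retrieved context."""
    context = "\n".join(retrieve(question))
    return (
        "Answer ONLY from the context below; reply 'not found' otherwise.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("What is the standard VAT rate?"))
```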
The real shift is in how we think about AI. Waiting for a system that’s 100% accurate isn’t realistic. The smarter approach is to design workflows where AI handles the heavy lifting and humans handle the judgment calls. When you set it up that way, AI doesn’t have to be perfect, just reliable enough to speed things up without taking control out of human hands.
The Human Side of AI
For all the talk about data and algorithms, one of the biggest obstacles to AI adoption in tax departments may be people. Employees often view new technology as a threat, either to their jobs or to the way they’ve always worked. Fear of being replaced, or simple distrust in an unfamiliar tool, can stall an AI initiative before it even begins.
AI projects are often pitched as a way to save time and reclaim capacity by shifting people from repetitive, low-value tasks to higher-impact “strategic” work. In theory, that sounds ideal. But here’s the reality: not everyone naturally transitions from manual tasks to strategic ones. Can every compliance specialist suddenly become an advisor? Does the company actually need five more people in strategic roles instead of five handling tax filings?
When a department frees up dozens of hours of compliance work, there has to be a clear plan for how that capacity will be redeployed. Without one, employees are more likely to see AI as a threat than as a tool that supports them. For adoption to succeed, teams need to believe the technology will make their work more valuable and not make their roles redundant.
Pragmatism Over Hype
The promise of AI in tax is real, but so are the pitfalls. Projects rarely stumble because the technology is broken. They stumble because of human, process, and data challenges that get overlooked.
Starting too big. Spreading resources across too many pilots. Ignoring data quality. Clinging to inefficient processes. Chasing perfection. Failing to bring people along. Any one of these can stall progress.
The way forward isn’t about shiny labels but about small wins that build trust and momentum. And it’s about shifting expectations. For tax departments, success won’t come from doing everything at once. It will come from doing the right things, in the right order, with the right support.
The opinions expressed in this article are those of the author and do not necessarily reflect the views of any organizations with which the author is affiliated.
Tools & Platforms
SMARTSHOOTER Wins Innovation Award for AI-Driven Precision Fire Control Solutions
SMARTSHOOTER won the Innovation Award in the Army Technology Excellence Awards 2025 for its significant advancements in enhancing small arms accuracy and operational effectiveness through the integration of artificial intelligence and modular technology.
The Army Technology Excellence Awards honor the most significant achievements and innovations in the defense industry. Powered by GlobalData’s business intelligence, the Awards recognize the people and companies leading positive change and shaping the industry’s future.
SMARTSHOOTER’s SMASH fire control technology has been recognized in the Precision Fire Control category, reflecting the company’s approach to integrating artificial intelligence (AI), computer vision, and advanced algorithms into compact, scalable fire control systems that address evolving operational challenges for ground forces.
AI-enabled precision enhances small arms accuracy
Hitting moving or distant targets has traditionally relied on a soldier’s skill and experience. SMARTSHOOTER’s SMASH system changes that equation by using real-time image processing and AI-driven tracking. For instance, when troops face fast-moving evasive threats such as small drones (sUAS), SMASH can automatically lock onto the target, calculate ballistic trajectories, and release the shot only when a hit is assured. This improves hit accuracy during intense battle situations and reduces collateral damage.

The technology has proven valuable against aerial threats that are difficult to engage with conventional optics or unaided marksmanship. Field reports from the Israel Defense Forces (IDF) and U.S. military units show that SMASH-equipped rifles have been effective in neutralizing drones that might otherwise evade traditional countermeasures. By transforming standard infantry weapons into precision platforms, SMARTSHOOTER has addressed a critical gap in dismounted force protection.
Modular and scalable solutions for different missions
The SMASH product family is designed to fit a variety of operational needs and to integrate seamlessly with existing force structures. The SMASH 2000L and 3000 models mount directly onto standard rifles without adding much weight or bulk, making them practical for soldiers on patrol. For situations where longer range or better situational awareness is needed, the SMASH X4 adds a four-times magnifying optic and a laser rangefinder to its AI targeting features.
SMARTSHOOTER’s SMASH family of fire control systems also includes the SMASH Hopper, a remote weapon station that can be mounted on vehicles, unmanned platforms, or static defensive positions. It connects with external sensors or C4I (Command, Control, Communications, Computers & Intelligence) systems and can be operated via wired or wireless links. This flexibility means units can use SMASH technology in everything from urban patrols to border security while staying connected with larger command networks.

Operational validation and international adoption
SMARTSHOOTER’s technology is deployed across multiple armed forces, and its performance has been demonstrated in live operational environments. The IDF has deployed SMASH systems across all infantry brigades to counter drone and ground threats along sensitive borders and in battle. Similarly, U.S. Army and Marine Corps units have added SMASH to their counter-drone arsenals following rigorous evaluation by organizations such as the Joint Counter-sUAS Office (JCO) and the Irregular Warfare Technical Support Directorate (IWTSD).
These deployments are not limited to controlled trials; they reflect ongoing use in active conflict zones where reliability is crucial. Users have reported higher engagement success rates against both aerial and ground targets, even when facing complex threats like drone swarms or armed quadcopters. The awarding of multi-million-dollar contracts by defense agencies further demonstrates confidence in the system’s capabilities.

Beyond Israel and the United States, SMARTSHOOTER’s solutions have been adopted by NATO partners in Europe, including Germany, the UK, and the Netherlands, as well as by security forces in Asia-Pacific. This broad uptake shows that militaries worldwide see value in this approach to modern battlefield challenges.
Human-in-the-Loop targeting supports ethical use of AI
A distinguishing aspect of SMARTSHOOTER’s systems is their focus on keeping humans in control of engagement decisions. While automation helps with tracking and aiming, operators always make the final call. The technology provides visual cues, such as target locks or shot timing indicators, but never fires autonomously.
This approach aligns with evolving international norms regarding responsible use of AI in defense applications. It ensures that accountability remains with trained personnel rather than algorithms alone, a consideration increasingly scrutinized by policymakers and military leaders alike. By embedding these safeguards into its products from inception, SMARTSHOOTER has addressed both operational needs and ethical concerns associated with next-generation fire control systems.

“We are honored to receive this recognition. This achievement reflects the proven value of our SMASH fire control systems and their ability to transform conventional small arms into precision tools against modern threats, including drones. Deployed by leading military forces worldwide, SMASH continues to enhance operational effectiveness at the squad level, and we remain committed to driving innovation that meets the evolving needs of today’s battlefield.”
– Michal Mor, CEO of SMARTSHOOTER
Company Profile
SMARTSHOOTER is a world-class designer, developer, and manufacturer of innovative fire control systems that significantly increase hit accuracy. With a rich record in designing unique solutions for the warfighter, SMARTSHOOTER technology enhances mission effectiveness through the ability to accurately engage and eliminate ground, aerial, static, or moving targets, including drones, during both day and night operations.
Designed to help military and law enforcement professionals swiftly and accurately neutralize their targets, the company’s combat-proven SMASH Family of Fire Control Systems increases assault rifle lethality while keeping friendly forces safe and reducing collateral damage. The company’s experienced engineers combine electro-optics, computer vision technologies, real-time embedded software, ergonomics, and system engineering to provide cost-effective, easy-to-use solutions for modern conflicts.
Fielded and operational by forces in the US, UK, Israel, NATO countries, and others, SMARTSHOOTER’s SMASH family of solutions provides end-users with a precise hit capability across multiple mission areas, creating a significant advantage for the infantry soldier and ultimately revolutionizing the world of small arms and optics.
SMARTSHOOTER’s headquarters are based in Yagur, Israel. The company has subsidiary companies in Europe, the US, and Australia.
Contact Details
E-mail: info@smart-shooter.com
Links
Website: www.SMART-SHOOTER.com