Business
Matt Easton reveals how to beat Apple’s AI call screening and win more sales

As Apple rolls out its new AI-based call screening features, similar to but more advanced than those Google already offers on Android devices, sales teams may find it harder than ever to get their calls answered. However, on today’s episode of The Small Business Show, Matt Easton, sales trainer and founder of Easton University, asserts that this technology doesn’t have to be a roadblock. In fact, it can be a powerful tool for sellers who know how to utilize it effectively.
Easton begins the conversation by explaining how the new Apple feature enables bi-directional conversations before anyone answers the call. Instead of just showing the caller’s name, Apple now displays both the caller’s name and the reason for the call. This gives recipients the opportunity to accept or decline, and even if the recipient initially ignores the call, the feature allows the caller to send a second and even a third message. Easton emphasized that this presents three key opportunities for salespeople to stand out if they use the right language.
“Most people will fail here by saying things like ‘just calling to follow up’ or ‘seeing if you had time for a quick chat,’” Easton warned. Instead, he offered tailored scripts for each of the four essential types of sales calls:
- True cold call. Use a hook: “Hey Jim, Matt Easton with Easton Ford. There’s something every late-model Explorer owner is dealing with, and nobody’s talking about it. Just wanted to make sure you were aware.”
- Lead follow-up. Skip “checking in.” Instead, say: “Matt Easton with Friendly Ford. Wanted to personally make sure my team got you all the info you needed.”
- Next-step follow-up. For advancing the sale: “Matt Easton with Easton University. Calling to see if it makes sense for you to come by today.”
- Follow-through prompt. When a customer hasn’t taken an expected action: “Matt Easton with Friendly Ford. Wanted to make sure you had the link to complete the credit application.”
Each of these openings is designed to be clear, purposeful, and easy to engage with, increasing the odds that the recipient will take the call or at least reply.
“If you do this the right way, your sales will go through the roof.”
Easton emphasized that skillful cold calling is more important than ever, as many businesses are abandoning the phone in favor of spammy emails and text blasts. He added that most consumers are starved for real human contact and will be relieved to speak with someone who isn’t a bot.
To help sales professionals adapt, Easton offered viewers free access to his $300 “Closing Mini Masterclass” by texting or calling his direct number (720-660-3202).
“This is not busywork,” Easton concluded. “If you’re making calls with skill and confidence, people will respect it… and you’ll get the sale.”
ASBN, from startup to success, we are your go-to resource for small business news, expert advice, information, and event coverage.
The AI Movie Factory Is Ramping Up

“Because I know the rooster.”
Those were the words of a Baghdad-based director named Hasan Hadi when asked how he was able to corral not just a host of non-actor children for his new movie but a particular kind of junglefowl.
Hadi – whose The President’s Cake will come out this fall from Sony Pictures Classics and was just chosen as the official Iraqi Oscar submission – made the comment to a pair of reporters at a dinner at the Toronto International Film Festival. While among the more colorful – and barnyardy – of the remarks uttered at the important early-September gathering, it was far from the only one emphasizing the uniquely human qualities of filmmaking.
Across the Canadian city, directors made statements that, as the algorithm rises, almost take on a political cast. Richard Linklater and Ethan Hawke stood in front of an audience and described the painstaking rehearsal for their movie about Lorenz Hart. (“Ethan and I have done our share of dialogue-intensive movies,” Linklater said, “but this was something else.”) Nia DaCosta talked about how her feelings on Ibsen animated her need to redo Hedda Gabler. Paul Greengrass left audiences breathless with his latest neo-verite adventure that has Matthew McConaughey as an embattled bus driver saving children in the 2018 Paradise wildfires.
None of them mentioned AI explicitly. They didn’t have to. Their pro-human vehemence was evident in every quote and frame.
But a different vision of Hollywood was also playing out at the industry’s big convocation, as tech entrepreneurs pitched their own vision to the entertainment decisionmakers. People from Largo, which builds models to test movies using virtual audiences. Luma AI, whose executives think studios can deploy their video-generation tool to ramp up production (and ramp down sets). Genny, which uses Google’s VEO-3 to help documentarians create re-enactment footage with the push of a button. All of them were at TIFF too, trying to enact their own vision of the entertainment future. And while they rarely crossed paths with the humanists, they clashed with them ideologically just the same. Hollywood may only be big enough for one of them.
Pull the camera back and you’ll suddenly see the same battle playing out everywhere, in boardrooms and courtrooms. Warner Bros. has just sued Midjourney, making similar allegations as Disney and Universal before it against the image-generation startup. Anthropic has just agreed to settle with three authors who sued the AI company for training its models on their books. If the settlement is approved, it could result in the company paying a total of $1.5 billion to hundreds of thousands of authors – but the judge in the case also cleared the way for tech companies to engage in such training without permission so long as they bought retail copies of the books.
Seeking to convey the stakes, two activists, Guido Reichstadter and Michael Trazzi, have gone on hunger strikes outside the San Francisco office of Anthropic and London office of Google’s DeepMind respectively. They say they won’t eat any food until the companies stop developing all new AI models, giving both a visual and historical dimension to the conflict.
Meanwhile, the startup Showrunner, with investment from Amazon, made waves when it said it would use AI for an internal experiment to restore some 43 minutes of lost footage from Orson Welles’ The Magnificent Ambersons. The announcement generated a backlash from the company managing Welles’ estate, with an official there calling the move a “purely mechanical exercise” that lacked “uniquely innovative thinking.”
And of course The Sphere just opened an AI-enabled, reformatted The Wizard of Oz, aided by Google and $80 million (a budget $15 million higher than the original’s in 2025 dollars). While eliciting rave reviews, the project also added cameos for CEOs David Zaslav and James Dolan, who were not, according to most film historians, present on the 1939 MGM set.
After years of companies building tech and raising money, the introduction of AI into the house of storytelling is finally here. And media players need to decide whether they want to make up the guest bedroom.
It would also be a mistake to think AI will only be used on classic films – on films with few stakeholders. The tools pitched and implemented would be used to create what was once done by hand on sets and in marketing departments, automating the analogue, with all the labor and cultural consequences to go with it.
At a hearing for the Anthropic settlement, one of the author plaintiffs, Kirk Wallace Johnson, said he saw the proceeding as the “beginning of a fight on behalf of humans that don’t believe we have to sacrifice everything on the altar of AI.” Johnson is the author of The Feather Thief, a critically acclaimed 2018 true-crime book about a heist that made off with scores of centuries-old historical bird skins. You could say that he, too, knows the rooster.
This story appeared in the Sept. 10 issue of The Hollywood Reporter magazine.
How To Un-Botch Predictive AI: Business Metrics

Data scientists consider business metrics more important than technical metrics – yet in practice they focus more on technical ones. This derails most projects. So, why?
Eric Siegel
Predictive AI offers tremendous potential – but it has a notoriously poor track record. Outside Big Tech and a handful of other leading companies, most initiatives fail to deploy, never realizing value. Why? Data professionals aren’t equipped to sell deployment to the business. The technical performance metrics they typically report on do not align with business goals – and mean nothing to decision makers.
For stakeholders and data scientists alike to plan, sell and greenlight predictive AI deployment, they must establish and maximize the value of each machine learning model in terms of business outcomes like profit, savings – or any KPI. Only by measuring value can the project actually pursue value. And only by getting business and data professionals onto the same value-oriented page can the initiative move forward and deploy.
Why Business Metrics Are So Rare for AI Projects
Given their importance, why are business metrics so rare? Research has shown that data scientists know better, but generally don’t abide: They rank business metrics as most important, but in practice focus more on technical metrics. Why do they usually skip past such a critical step – calculating the potential business value – much to the detriment of their own projects?
That’s a damn good question.
The industry isn’t stuck in this rut for only psychological and cultural reasons – although those are contributing factors. After all, it’s gauche and so “on the nose” to talk money. Data professionals feel compelled to stick with the traditional technical metrics that exercise and demonstrate their expertise. It’s not only that this makes them sound smarter – with jargon being a common way for any field to defend its own existence and salaries. There’s also a common but misguided belief that non-quants are incapable of truly understanding quantitative reports of predictive performance and would only be misled by reports meant to speak in their straightforward business language.
But if those were the only reasons, the “cultural inertia” would have succumbed years ago, given the enormous business win when ML models do successfully deploy.
The Credibility Challenge: Business Assumptions
Instead, the biggest reason is this: Any forecast of business value faces a credibility question because it must be based on certain assumptions. Estimating the value that a model would capture in deployment isn’t enough. The calculation still has to prove its trustworthiness, because it depends on business factors that are subject to change or uncertainty, such as:
- The monetary loss for each false positive, such as when a model flags a legitimate transaction as fraudulent. With credit card transactions, for example, this can cost around $100.
- The monetary loss for each false negative, such as when a model fails to flag a fraudulent transaction. With credit card transactions, for example, this can cost the amount of the transaction.
- Factors that influence the above two costs. For example, with credit card fraud detection, the cost for each undetected fraudulent transaction might be lessened if the bank has fraud insurance or if the bank’s enforcement activities recoup some fraud losses downstream. In that case, the cost of each FN might be only 80% or 90% of the transaction size. That percentage has wiggle room when estimating a model’s deployed value.
- The decision boundary, that is, the percentage of cases to be targeted. For example, should the top 1.5% of transactions that the model considers most likely to be fraudulent be blocked, or the top 2.5%? That percentage is the decision boundary (which in turn determines the decision threshold). Although this setting tends to receive little attention, it often has a greater impact on project value than improvements to the model or data. It is a business decision driven by business stakeholders, a fundamental choice that defines precisely how a model will be used in deployment. By turning this knob, the business can strike a balance in the tradeoff between a model’s primary bottom-line/monetary value and the number of false positives and false negatives, as well as other KPIs.
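To make these assumptions concrete, here is a minimal sketch of how a fraud model’s deployed value might be estimated from them. Every figure (transaction volume, fraud rate, recall, costs) is a hypothetical illustration for this sketch, not a number from the text.

```python
# Sketch: estimating a fraud model's deployed business value from the
# cost assumptions above. All inputs below are hypothetical illustrations.

def deployed_value(n_txns, fraud_rate, target_pct, recall,
                   fp_cost, avg_txn, fn_cost_factor):
    """Expected savings of deploying the model vs. doing nothing.

    target_pct     -- decision boundary: fraction of transactions blocked
    recall         -- share of fraud captured within that top fraction
    fp_cost        -- loss per legitimate transaction wrongly blocked
    fn_cost_factor -- e.g. 0.9 if insurance or downstream enforcement
                      recoups 10% of each undetected fraudulent transaction
    """
    n_fraud = n_txns * fraud_rate
    caught = n_fraud * recall
    missed = n_fraud - caught
    false_positives = n_txns * target_pct - caught
    loss_with_model = missed * avg_txn * fn_cost_factor + false_positives * fp_cost
    loss_without_model = n_fraud * avg_txn * fn_cost_factor
    return loss_without_model - loss_with_model

savings = deployed_value(n_txns=1_000_000, fraud_rate=0.01, target_pct=0.015,
                         recall=0.8, fp_cost=100, avg_txn=250, fn_cost_factor=0.9)
print(f"estimated savings: ${savings:,.0f}")
```

Expressing the model in profit-and-loss terms like this, rather than as AUC or F1, is what lets a stakeholder judge whether deployment is worth it.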
Establishing The Credibility of Forecasts Despite Uncertainty
The next step is to make an existential decision: Do you avoid forecasting the business value of ML deployment altogether? This would prevent the opening of a can of worms. Or do you recognize ML valuation as a challenge that must be addressed, given the dire need to calculate the potential upside of ML deployment in order to achieve it? If it isn’t already obvious, my vote is for the latter.
To address this credibility question and establish trust, the impact of uncertainty must be accounted for. Try out different values at the extreme ends of the uncertainty range. Interact in that way with the data and the reports. Find out how much the uncertainty matters and whether it must somehow be narrowed in order to establish a clear case for deployment. Only with insight and intuition into how much of a difference these factors make can your project establish a credible forecast of its potential business value – and thereby reliably achieve deployment.
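The extremes-probing described above can be sketched as a simple sweep: recompute the estimated value at the ends of each uncertain assumption’s range and see how wide the resulting spread is. All figures are hypothetical, and recall is held fixed across decision boundaries for simplicity, though in practice it would rise as more cases are targeted.

```python
# Sketch of a basic sensitivity check on the business-value forecast.
# All inputs are hypothetical illustrations.
from itertools import product

def est_value(fn_cost_factor, fp_cost, target_pct,
              n_txns=1_000_000, fraud_rate=0.01, recall=0.8, avg_txn=250):
    n_fraud = n_txns * fraud_rate
    caught = n_fraud * recall
    false_positives = n_txns * target_pct - caught
    loss_with = (n_fraud - caught) * avg_txn * fn_cost_factor + false_positives * fp_cost
    loss_without = n_fraud * avg_txn * fn_cost_factor
    return loss_without - loss_with

fn_factors = [0.8, 0.9]      # how much of each missed fraud is truly lost
fp_costs = [75, 125]         # plausible range of per-false-positive loss
boundaries = [0.015, 0.025]  # block the top 1.5% or the top 2.5%

values = [est_value(f, c, b) for f, c, b in product(fn_factors, fp_costs, boundaries)]
print(f"estimated value ranges from ${min(values):,.0f} to ${max(values):,.0f}")
```

In this hypothetical, the estimate swings from a roughly $525,000 loss to a $1.3 million gain across plausible assumptions – exactly the kind of spread that signals the uncertainty must be narrowed before the deployment case is credible.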
Leadership Shakeup Hits the XAI Team Training Grok

A leadership shakeup is brewing within xAI’s data annotation team.
At least nine high-level employees appear to no longer be with the team. Their Slack accounts were deactivated over the weekend, according to screenshots seen by Business Insider.
The employees worked on the human data management team, which oversees the AI tutors who train Grok. Previously, the managerial ranks included around a dozen people, according to a review of LinkedIn profiles and workers with knowledge of the team. Several of the managers had posted Slack messages as recently as September 5.
One of the deactivated accounts was a member of the technical staff who oversaw the team’s managers, according to a screenshot seen by Business Insider. The employee had previously worked on Tesla’s Autopilot data annotation team.
A representative for xAI did not respond to a request for comment.
The data annotation team consists of more than 1,500 contract and full-time staff, a large portion of whom are AI tutors, according to a tally of the company’s internal Slack.
The company has also been scheduling one-on-one meetings with some employees, four workers said. In the meetings, workers have been asked to present their work and how they’ve added value to the company, the people said.
One worker told Business Insider that the meetings have created a “sense of panic.”
Unlike most AI companies, which often use third-party employment agencies, xAI hires many of its US tutors directly. This can provide the company with more control and privacy; it can also be more expensive.
XAI briefly outsourced some of its work to Scale AI, but ended the partnership earlier this year, Business Insider previously reported. XAI is also working with Mercor, a recruiting and data annotation services company. Mercor confirmed the arrangement but declined to comment.
The AI tutors play a crucial role in developing the company’s chatbot. They label, categorize, and contextualize raw data to teach Grok how to better understand the world. Though the team works in tandem with engineers, it uses a separate Slack, called TeachX, and has different email domains. Within TeachX, AI tutors are broken into specialized groups, including science, coding, and translation.
Earlier this year, the company planned to hire thousands of AI tutors, Business Insider previously reported. Since February, xAI has added around 700 workers to the data annotation team, according to a tally in the company’s Slack.
Musk’s AI company has nine roles listed for the data annotation team, including six AI tutor roles, with pay ranges between $35 and $80 per hour.
Do you work for xAI or have a tip? Contact this reporter via email at gkay@businessinsider.com or Signal at 248-894-6012. Use a personal email address, a nonwork device, and nonwork WiFi; here’s our guide to sharing information securely.