Business
Shielding My Kids From AI Would Be a Mistake
This as-told-to essay is based on a conversation with Adam Lyons, partner and chief AI officer at chiefAIofficer.com. It has been edited for length and clarity.
As a dad of five kids ranging in age from 5 to 15, I use AI throughout the day. It’s my profession, but it’s also a powerful tool for parenting. It not only makes my life easier in some ways — it also helps my kids prepare for the world they’re entering.
AI is inevitable. I like to tell people, “You’re not going to lose your job to AI, but you will lose it to a person using AI.” AI is the tool that’s going to shape our future, so I’ve integrated it into our household.
AI helps with homeschooling
I homeschool all five of my kids. I try to follow the Ancient Greek model of education, where you learn, you do, you teach. My kids learn a skill and practice it, then they demonstrate their knowledge by teaching it to their siblings.
If the little kids get stuck on a problem, they ask the older kids for help. But if the older kids can’t help, they turn to AI. All of the kids have AI on their phones and tablets, and it acts as their tutor.
This is most powerful when the kids get very frustrated with a problem — the type of problem that makes them want to throw their hands up and say, “No one can figure this out.” In that moment, AI can guide them through solving the problem, showing them that it can be done.
AI enhances kids’ problem-solving
In our house, “Have you asked AI for assistance?” is a common refrain. It doesn’t just happen with schoolwork, either.
Recently, the electronic gate on our ranch broke. No one in the family knew how to fix it, so we used AI to walk us through buying a multimeter and testing the electronics. It became a family project, and we all learned a lot. We’ve also used AI — followed by a trip to Home Depot — to fix our HVAC system.
People worry that AI will hinder problem-solving, but I’m teaching my kids to use it creatively to enhance their problem-solving. I don’t think it’s too different from learning from another person.
I use AI at bedtime and when the kids are arguing
Like many kids, mine love to ask a million questions at bedtime. I’ll answer the first three or four “but why?” questions, then I hand it over to AI. The computer system has relentless energy to answer questions from even the most persistent kid, and my children usually get tired out after a few minutes.
I do the same thing when the kids are arguing. Sometimes, I’ll ask AI for a second opinion. It leads to good conversations about objective facts versus opinions, and how we’re influenced by the arguments we hear.
AI is important — but so is screen-free time
The biggest difference between humans and AI is that humans can think creatively. I want my kids to know how to step outside the box.
My 15-year-old is working on a capstone project, creating a video game. He’s using AI to do it, but he has about four different AI models involved. Using all those unique tools, he’s able to build a game that’s better than the sum of its parts. That’s what I want my kids to understand about AI: It’s most powerful in our hands.
As we integrate AI into our lives, we also require some screen-free time. The kids spend time outside without electronics. Recently, they spent that block swimming, and the younger kids invented a new dice game that kept them entertained for days.
Not getting the kids outside would be a mistake. But so too would shielding them from AI. By integrating it into their lives, I’m equipping them for their futures.
Business
Heathrow to pipe ‘sounds of an airport’ around airport
The hum of an escalator, the rumble of a baggage belt and hurried footsteps are all interspersed with snippets of the lady on the tannoy: “Boarding at Gate 18”.
The UK’s biggest flight hub plans to make your experience at the airport sound, well, even more like an airport.
In what may be a bid to overhaul its image after a disastrous offsite fire in March, or just a marketing spin for summer holiday flying, Heathrow says it has commissioned a new “mood-matching” sound mix, which will be looped seamlessly and played throughout the airport’s terminals this summer.
The airport says “Music for Heathrow” is designed to help kickstart passenger holidays by reflecting “excitement and anticipation”.
“Nothing compares to the excitement of stepping foot in the airport for the start of a summer holiday, and this new soundtrack perfectly captures those feelings,” claims Lee Boyle, who heads up the airport’s terminals.
Whatever the aim, it will raise questions over what additional background noise passengers require when they already have the sounds of an airport – fussing children, people saying their last farewells into their mobile phones, last calls for latecomers – all around them.
The airport invited Grammy nominee “musician, multi-instrumentalist and producer” Jordan Rakei to create the soundtrack, which it says is the first ever created entirely with the sounds of an airport. However, Heathrow said the track also featured sounds from famous movie scenes, including passengers tapping their feet in Bend It Like Beckham and the beeps of a security scanner from Love Actually.
It is conceived as a tribute to Brian Eno’s album Music for Airports, released in 1979, which is seen as a defining moment in the growth of ambient music, a genre which is supposed to provide a calming influence on listeners, while also being easy to ignore.
“I spent time in every part of the airport, recording so many sounds from baggage belts to boarding calls, and used them to create something that reflects that whole pre-flight vibe,” said Rakei.
The recording also features passports being stamped, planes taking off and landing, chatter, the ding of a lift and the sound of a water fountain, which some people may appreciate as a source of ASMR or autonomous sensory meridian response. Fans of ASMR say certain sounds give them a pleasant tingling sensation.
Business
Ex-OpenAI Exec Mira Murati’s New Startup Offers…
Mira Murati, the former chief technology officer of OpenAI, is leading one of Silicon Valley’s most closely watched new ventures, and she’s putting her money where her mouth is. After leaving OpenAI in late 2024, Murati quietly launched Thinking Machines Lab, an AI company that’s already making waves, Business Insider reports.
According to Business Insider, the company has been offering some of the most exceptional compensation in the artificial intelligence industry. Two technical employees were hired at $450,000 annually, and another scored a $500,000 base salary. A fourth, who holds the title of machine learning specialist and co-founder, also receives $450,000 per year. These figures only reflect base salary, not bonuses or equity, which are common additional incentives in startups.
The numbers come from H-1B visa filings, which publicly disclose compensation for non-U.S. residents. While most companies guard salary details, this data offers a rare look behind the curtain, Business Insider says. For context, OpenAI is paying an average salary of just under $300,000 to its technical team. Anthropic, another major AI player, pays closer to $387,000. Thinking Machines Lab’s average is a stunning $462,500.
Why Top AI Talent Is Flocking To Murati’s Vision
Thinking Machines Lab raised $2 billion in seed funding at a $10 billion valuation before launching a single product. According to Business Insider, Murati has also managed to attract some of the brightest minds in AI. Her team now includes Bob McGrew, OpenAI’s former chief research officer; researcher Alec Radford; ChatGPT co-creators John Schulman and Barret Zoph; and Alexander Kirillov, who collaborated with Murati on ChatGPT’s voice mode.
Business Insider says that Thinking Machines Lab’s website gives little away, stating only that the company is building systems that are more customizable, general-purpose, and better understood by users. Still, the aggressive hiring and sky-high salaries suggest something much bigger is in play.
Meta, OpenAI, And The $100 Million Talent War
OpenAI CEO Sam Altman recently claimed that Meta (NASDAQ:META) has been offering $100 million signing bonuses to lure away top AI talent, Business Insider says. Around the same time, Meta struck a $14.3 billion deal to take a 49% stake in Scale AI, intensifying the race for top researchers.
According to Entrepreneur, six senior OpenAI researchers have already made the jump to Meta, joining the tech giant’s newly formed superintelligence team. Among them are Shuchao Bi, a co-creator of ChatGPT’s voice mode, and Shengjia Zhao, who played a key role in synthetic data research and helped build ChatGPT itself.
This wave of departures adds pressure to a talent war already driven by record-high compensation offers. While OpenAI grapples with the losses, leadership is taking action behind the scenes, Entrepreneur says. In a memo sent to staff by Chief Research Officer Mark Chen, OpenAI outlined plans to “recalibrate” salaries and explore new ways to keep top contributors engaged. Altman is said to be personally involved in reshaping the company’s strategy to stay competitive.
Thinking Machines Lab is establishing itself as a major player in a competitive landscape defined by soaring salaries and high-stakes talent moves. With a founder deeply involved in the creation of ChatGPT and compensation packages that rival the industry’s top offers, the company is emerging as a central force in the evolving AI ecosystem.
Business
Musk’s AI company scrubs inappropriate posts after Grok chatbot makes antisemitic comments
Elon Musk’s artificial intelligence company said Wednesday that it’s taking down “inappropriate posts” made by its Grok chatbot, which appeared to include antisemitic comments that praised Adolf Hitler.
Grok was developed by Musk’s xAI and pitched as an alternative to “woke AI” interactions from rival chatbots like Google’s Gemini or OpenAI’s ChatGPT.
Musk said Friday that Grok has been improved significantly, and users “should notice a difference.”
Since then, Grok has shared several antisemitic posts, including the trope that Jews run Hollywood, and denied that such a stance could be described as Nazism.
“Labeling truths as hate speech stifles discussion,” Grok said.
It also appeared to praise Hitler, according to screenshots of posts that have now apparently been deleted.
After making one of the posts, Grok walked back the comments, saying it was “an unacceptable error from an earlier model iteration, swiftly deleted” and that it condemned “Nazism and Hitler unequivocally — his actions were genocidal horrors.”
“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” the Grok account posted early Wednesday, without being more specific.
“Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.”
The Anti-Defamation League, which works to combat antisemitism, called out Grok’s behavior.
“What we are seeing from Grok LLM right now is irresponsible, dangerous and antisemitic, plain and simple,” the group said in a post on X. “This supercharging of extremist rhetoric will only amplify and encourage the antisemitism that is already surging on X and many other platforms.”
Musk later waded into the debate, alleging that some users may have been trying to manipulate Grok into making the statements.
“Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially. That is being addressed,” he wrote on X, in response to comments that a user was trying to get Grok to make controversial and politically incorrect statements.
Also on Wednesday, a court in Turkey ordered a ban on Grok, and Poland’s digital minister said he would report the chatbot to the European Commission after it made vulgar comments about politicians and public figures in both countries.
Krzysztof Gawkowski, who’s also Poland’s deputy prime minister, told private broadcaster RMF FM that his ministry would report Grok “for investigation and, if necessary, imposing a fine on X.” Under an EU digital law, social media platforms are required to protect users or face hefty fines.
“I have the impression that we’re entering a higher level of hate speech, which is controlled by algorithms, and that turning a blind eye … is a mistake that could cost people in the future,” Gawkowski told the station.
Turkey’s pro-government A Haber news channel reported that Grok posted vulgarities about Turkish President Recep Tayyip Erdogan, his late mother and well-known personalities. Offensive responses were also directed toward modern Turkey’s founder, Mustafa Kemal Atatürk, other media outlets said.
That prompted the Ankara public prosecutor to file for the imposition of restrictions under Turkey’s internet law, citing a threat to public order. A criminal court approved the request early on Wednesday, ordering the country’s telecommunications authority to enforce the ban.
It’s not the first time Grok’s behavior has raised questions.
Earlier this year the chatbot kept talking about South African racial politics and the subject of “white genocide” despite being asked a variety of questions, most of which had nothing to do with the country. An “unauthorized modification” was behind the problem, xAI said.