AI Insights
An action plan to keep organizations safe with artificial intelligence
Government directives aren’t enough to ensure security with AI

The White House recently published an “AI Action Plan,” full of recommended policy actions aimed at making effective use of artificial intelligence in industry and government. It comes on the heels of the EU AI Act, a comprehensive legal framework that regulates various facets of AI within the European Union, with enforcement of most provisions slated to begin in August 2026.
From a security perspective, organizations of all stripes would do well to remember that security has always involved a shared responsibility model. Cloud providers make that abundantly clear in their agreements, but it applies across the board, for on-premises systems as well. Each vendor and end user organization must take responsibility for some aspects of security. It’s no different with AI.
So, while security and IT professionals can and should pay attention to government laws and directives, they should also be aware that whatever any government produces will, by its nature, lag the reality on the ground, often by years. Such directives are based on yesterday’s threats and technology.
It’s difficult to think of a technology that has evolved faster than AI is moving right now. New developments arise seemingly by the day, and with them, new security threats. Following are some words of advice to help you keep up.
Pay attention, question everything, put up guardrails
First, pay attention – to emerging laws like the EU AI Act and whatever may come from the U.S. AI Action Plan, but also to your users and AI technology itself. Dig deep into how your employees are actually employing AI, the challenges they’re having, and the opportunities it offers. Consider what dangers it may present if things go awry or that bad actors, whether internal or external, may try to inject. Keep up to speed with how AI is evolving. Yes, that may be a full-time job in itself, but if you stay tuned in and connected, you can pick up on the big developments.
Next, question everything. Insist on explainability with all AI applications. Only by understanding how AI works can you begin to ensure that you can root out bias, privacy violations, and other misuses of data. You also need to ensure your AI is resistant to attacks, including data poisoning, by insisting on quality data standards, and protect against unacceptable risk, such as by insisting on human judgment when warranted.
You’ll also need guardrails around your AI applications, especially as agentic AI begins to take hold. If AI systems are going to be trusted to make decisions on their own, you must treat them like any other user, subject to appropriate access controls. In short, zero trust applies to AI applications just as it does to other users.
Collaborate to keep up
If all this sounds like a lot of work, know you’re not alone. Collaborate with your peers. Join industry user groups to stay informed and learn best practices. Collaborate, too, with industry groups like the InfraGard National Members Alliance (INMA), the private sector component of the FBI’s InfraGard program. INMA is focused on educational programs, training events, and information-sharing initiatives.
While there’s no question AI presents numerous security challenges, it’s not like we haven’t seen this before. Many will recall the angst over the EU General Data Protection Regulation and concern over how difficult it would be to comply with. GDPR did force change, but organizations weathered the storm and now we’ve seen U.S. states adopt many of the same tenets. Expect the same with AI, but don’t wait for government to force your hand.
Read more of the latest thinking about the biggest IT topics of the day at the Palo Alto Networks Perspectives page.
AI Insights
Here’s what parents need to know about artificial intelligence

ChatGPT, AI chatbots, and the growing world of artificial intelligence: it’s another conversation parents may not have planned on having with their kids.
A new Harvard study found that half of all young adults have already used AI, and younger kids are quickly joining in.
Karl Ernsberger, a former high school teacher turned AI entrepreneur, says that’s not necessarily a bad thing.
“It is here to stay. It’s like people trying to resist the Industrial Revolution,” Ernsberger said.
Ernsberger believes tools like chatbots can be powerful for learning, but only if kids and parents know the limits.
One example is “Rudi the Red Panda,” a virtual character available for free in kids mode on X’s Grok AI. When asked, Rudi can even answer questions about Arizona history.
“The five C’s of Arizona are Copper, Cotton, Cattle, Citrus, Climate,” Rudi said.
But Ernsberger warns that children may struggle to understand that Rudi isn’t real, and that “friendship” with a chatbot is different from human connection.
“It’s hard for the student to actually develop a real friendship,” he said. “They get confused by that because friendship is something they continue to learn about as they get older.”
When asked whether it was really a best friend, Rudi replied: “I’m as real as a red panda can be in your imagination. I’m here to be your best friend.”
That, Ernsberger says, is where parents need to step in.
For families trying to keep kids safe while exploring AI, Ernsberger’s first recommendation is simple.
“Use it yourself. There are so many use cases, so many different things that can be done with AI. Just finding a familiarity with it can help you find the weaknesses for your case, and its weaknesses for your kids.”
Then he says if your child is using AI, be there with them to watch over and keep the human connection.
“The key thing with AI is it’s challenging our ability to connect with each other; that’s a different kind of challenge to society than any other tool we’ve built in the past,” Ernsberger said.
Regulators are paying attention, too.
Arizona Attorney General Kris Mayes, along with 43 other state attorneys general, recently sent a letter to 12 AI companies, including the maker of Rudi, demanding stronger safeguards to protect young users.
AI Insights
This MOSI exhibit will give you a hands-on look at artificial intelligence – Tampa Bay Times
AI Insights
Spain Leads Europe in Adopting AI for Vacation Planning, Study Shows

Spain records higher adoption of artificial intelligence (AI) in vacation planning than the European average, according to the 2025 Europ Assistance-Ipsos barometer.
The study finds that 20% of Spanish travelers have used AI-based tools to organize or book their holidays, compared with 16% across Europe.
The research highlights Spain as one of the leading countries in integrating digital tools into travel planning. AI applications are most commonly used for accommodation searches, destination information, and itinerary planning, indicating a shift in how tourists prepare for trips.
Growing Use of AI in Travel
According to the survey, 48% of Spanish travelers using AI rely on it for accommodation recommendations, while 47% use it for information about destinations. Another 37% turn to AI tools for help creating itineraries. The technology is also used for finding activities (33%) and booking platform recommendations (26%).
Looking ahead, the interest in AI continues to grow. The report shows that 26% of Spanish respondents plan to use AI in future travel planning, compared with 21% of Europeans overall. However, 39% of Spanish participants remain undecided about whether they will adopt such tools.
Comparison with European Trends
The survey indicates that Spanish travelers are more proactive than the European average in experimenting with AI for holidays. While adoption is not yet universal, Spain’s figures consistently exceed continental averages, underscoring the country’s readiness to embrace new technologies in tourism.
In Europe as a whole, AI is beginning to make inroads into vacation planning but at a slower pace. The 2025 Europ Assistance-Ipsos barometer suggests that cultural attitudes and awareness of technological solutions may play a role in shaping adoption levels across different countries.
Changing Travel Behaviors
The findings suggest a gradual transformation in how trips are organized. Traditional methods such as guidebooks and personal recommendations are being complemented—and in some cases replaced—by AI-driven suggestions. From streamlining searches for accommodation to tailoring activity options, digital tools are expanding their influence on the traveler experience.
While Spain shows higher-than-average adoption rates, the survey also reflects caution. A significant portion of travelers remain unsure about whether they will use AI in the future, highlighting that trust, familiarity, and data privacy considerations continue to influence behavior.
The Europ Assistance-Ipsos barometer confirms that Spain is emerging as a frontrunner in adopting AI for travel planning, reflecting both a strong appetite for digital solutions and an evolving approach to how holidays are designed and booked.
Photo Credit: ProStockStudio / Shutterstock.com