American Airlines Uses AI to Digitally Transform Travel

American Airlines is deepening its deployment of artificial intelligence (AI) to make travel more comfortable for its passengers, including predicting whether they will miss a flight and holding the plane for them.

In a recent company podcast, Chief Digital and Information Officer Ganesh Jayaram described the airline’s use of AI to transform both its consumer self-service capabilities and employee productivity efforts.

But first, American set up a governance framework to underpin its AI efforts.

“We spent quite a bit of time over the last year to set up a governance framework … ensuring that we put all the privacy controls and other protections in place to leverage these technologies responsibly,” Jayaram said.

Once the framework was in place, American deployed AI in several use cases. For customers, American embedded generative AI into its existing chatbot to supercharge its capabilities.

When adverse weather foils travel plans, customers can now use the AI chatbot to find alternate routes and rebook their flights, Jayaram said.
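American hasn’t disclosed how the chatbot is built, but the pattern described here, a generative model layered over an airline’s existing rebooking search, can be sketched in miniature. Everything below (the function names, the canned search results, the reply template) is a hypothetical illustration of that architecture, not American’s implementation:

```python
# Hypothetical sketch: a generative model as the conversational layer over an
# airline's existing, deterministic rebooking search. All names are invented.

def search_alternate_routes(origin: str, destination: str, date: str) -> list[dict]:
    """Stand-in for the airline's existing availability search."""
    # A real system would query live inventory; this returns canned data.
    return [{"flight": "XX123", "departs": f"{date}T09:15", "seats_left": 4}]

def handle_rebooking_request(message: str, booking: dict) -> str:
    """Turn a free-text request into a search, then into a conversational reply."""
    # Step 1 (not shown): a generative model would parse intent and constraints
    # from `message`, e.g. "storm cancelled my flight, get me out tonight".
    # Step 2: the structured query goes to the deterministic search.
    options = search_alternate_routes(
        booking["origin"], booking["destination"], booking["date"]
    )
    if not options:
        return "I couldn't find an alternate route; let me connect you to an agent."
    # Step 3: the model would phrase results conversationally; a template
    # stands in for that here.
    best = options[0]
    return (f"Flight {best['flight']} departs at {best['departs']} with "
            f"{best['seats_left']} seats left. Want me to rebook you?")

print(handle_rebooking_request(
    "Storm cancelled my flight, what are my options?",
    {"origin": "DFW", "destination": "ORD", "date": "2025-01-10"},
))
```

The design point is that the model handles language in and out, while availability and booking remain with the airline’s existing deterministic systems.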

In the back office, American is using AI to better predict whether its travelers are going to miss their connections to major hubs.

“That might mean that we delay some of our flights so as to accommodate our customers as needed in some of these hubs, and to ensure that they can reach their destination as close to on time as possible,” Jayaram said.
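American hasn’t published how these predictions are made or how hold decisions are taken. As a rough illustration only, the trade-off Jayaram describes, weighing how many connecting passengers would misconnect against keeping the departure close to on time, might look like the sketch below; the probability model, threshold, buffer and cap are all invented for the example.

```python
# Hypothetical sketch of a hold-the-departure decision. The probability
# model, threshold, buffer and cap are invented for illustration; this is
# not American's system.
from dataclasses import dataclass

@dataclass
class Connection:
    passengers: int          # travelers arriving on this inbound flight
    transfer_minutes: float  # estimated gate-to-gate transfer time
    layover_minutes: float   # scheduled time between arrival and departure

def miss_probability(conn: Connection, inbound_delay: float) -> float:
    """Crude stand-in for a trained model: chance the connection is missed."""
    slack = conn.layover_minutes - inbound_delay - conn.transfer_minutes
    # No remaining slack -> near-certain miss; 30+ minutes of slack -> safe.
    return min(1.0, max(0.0, 1.0 - slack / 30.0))

def hold_minutes(connections: list[Connection], inbound_delay: float,
                 threshold: float = 5.0, buffer: float = 10.0,
                 max_hold: float = 15.0) -> float:
    """How long to hold the departure while keeping it close to on time."""
    expected_missed = sum(
        c.passengers * miss_probability(c, inbound_delay) for c in connections
    )
    if expected_missed < threshold:  # too few affected travelers to justify a delay
        return 0.0
    # Hold just long enough to give every connection `buffer` minutes of
    # slack again, but never more than the cap.
    needed = max(
        inbound_delay + c.transfer_minutes - c.layover_minutes + buffer
        for c in connections
    )
    return min(max_hold, max(0.0, needed))

# Example: 40 passengers, tight 35-minute layover, inbound running 20 late.
print(hold_minutes([Connection(40, 12.0, 35.0)], inbound_delay=20.0))  # -> 7.0
```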

According to aviation data platform OAG, nearly a quarter of all commercial airline flights in the U.S. last year were delayed by at least 15 minutes.

“In an industry where tight operational schedules are planned down to the minute, even small delays have outsized consequences,” according to an OAG blog post.

Not all delays were due to inclement weather; 30% were caused by airline and airport inefficiencies, and another 30% were due to aviation system failures stemming from factors such as aging air traffic control infrastructure, OAG said.

See also: US Airlines, Hotels Lean Into Digital Tools Amid Uptick in Corporate Travel

A Tech-First Mindset

American embarked on a technology reengineering effort last year, raising its tech budget by 20%, Jayaram said in an earlier podcast.

The airline also brought in experts with experience in other industries to change workflows outside of technology, Jayaram said.

American focused on three outcomes:

  • Improving the resilience of core operations
  • Achieving excellence in engineering
  • Modernizing the technology stack to raise productivity

“First and foremost, our job is to ensure that any digital tool or application we put out there is resilient,” Jayaram said. “That it is secure by design, available whenever it is needed, and that it delivers the functionality our customers care about.” 

Resilience is not only about preventing breakdowns, but also about recovering quickly. That’s especially critical for agents at the gate.

“When things do break down, like it happens even with our frontline operations, we want to make sure that the recovery period is very quick [so] we can get the technology up and running as fast as possible,” Jayaram said.

After any outage, American Airlines reviews what happened, both within the company and with its suppliers, to see whether any processes need to change to prevent future disruptions.

Engineering excellence also means investing in modernizing legacy systems to allow for real-time analysis and updates. Internally, the company is using AI coding tools to make its developers more productive when writing code.

Recently, the airline launched a redesigned mobile app with a “much more modern look and feel — very intuitive, easy to use,” Jayaram said, giving customers more options such as self-service features and tools for researching new travel destinations.

iPhone and Apple Watch users can also turn on “live activities” on their devices to get updates on the departure gate and time, boarding time, assigned seat, landing time and other travel information, Jayaram added.

At the airport, American implemented new hardware to reduce the check-in time to a couple of minutes.

“End to end, our focus is to really improve interactivity, enable self-service for our customers, and take out friction at as many points of the customer journey as we can,” Jayaram said.

Apple AI Model Head Reportedly Leaving For Meta

Apple’s manager for artificial intelligence (AI) models is reportedly leaving for Meta. 

That’s according to a report late Monday (July 7) by Bloomberg News, which notes that this departure is another setback for Apple’s AI project.

Ruoming Pang, the executive in charge of Apple’s foundation models team, is leaving the company, sources familiar with the matter told Bloomberg. Pang, who joined Apple from Google in 2021, is the latest high-profile hire for Meta’s new superintelligence group, the sources said.

To land Pang, the sources said, Meta offered a package worth tens of millions of dollars per year. It’s part of a larger hiring spree by Meta CEO Mark Zuckerberg, who has recruited AI leaders such as Scale AI’s Alexandr Wang, startup founder Daniel Gross and former GitHub CEO Nat Friedman.

Also Monday, Meta hired Yuanzhi Li, a researcher from OpenAI, and Anton Bakhtin, who worked on Claude at Anthropic, according to additional sources with knowledge of the matter. Last month, it hired a host of other OpenAI researchers.

PYMNTS wrote about this trend last week, noting that while companies like OpenAI, Anthropic and Thinking Machines were paying large sums for technical staff, “the compensation is far from the eye-watering sums of up to $100 million from Meta.”

OpenAI, Anthropic and Thinking Machines are all paying salaries in the range of $200,000 to $690,000, according to a Business Insider report citing federal filings required to hire workers who need H-1B visas to work in the U.S.

Meta, meanwhile, paid $14.3 billion for a 49% stake in Scale AI, a deal that also saw Wang join Meta.

OpenAI CEO Sam Altman has said that Meta is promising signing bonuses of up to $100 million with even bigger yearly compensation packages. But Andrew Bosworth, Meta’s chief technology officer, has said Altman was being “dishonest” by suggesting the nine-figure offer is for “every single person.”

PYMNTS wrote about Apple’s AI struggles last month, noting that the company’s latest product showcase illustrated a philosophy focused more on “measured integration, meticulous design and a deep commitment to user privacy” than “rapid innovation in generative AI.”

“This approach stands in contrast to competitors like Amazon, Google and Microsoft, which are embracing large language models and enterprise-scale AI solutions in aggressive and sometimes experimental ways,” that report added.



60% of Teachers Used AI This Year and Saved up to 6 Hours of Work a Week


Nearly two-thirds of teachers used artificial intelligence this past school year, and weekly users saved almost six hours of work per week, according to a recently released Gallup survey. But 28% of teachers still oppose AI tools in the classroom.

The poll, published by the research firm and the Walton Family Foundation, includes perspectives from 2,232 U.S. public school teachers.

“[The results] reflect a keen understanding on the part of teachers that this is a technology that is here, and it’s here to stay,” said Zach Hrynowski, a Gallup research director. “It’s never going to mean that students are always going to be taught by artificial intelligence and teachers are going to take a backseat. But I do like that they’re testing the waters and seeing how they can start integrating it and augmenting their teaching activities rather than replacing them.”

The survey found that 37% of educators use AI tools at least once a month to prepare to teach, including creating worksheets, modifying materials to meet student needs, doing administrative work and making assessments. Less common uses include grading, providing one-on-one instruction and analyzing student data.

A 2023 study from the RAND Corp. found the most common AI tools used by teachers include virtual learning platforms, like Google Classroom, and adaptive learning systems, like i-Ready or Khan Academy. Educators also used chatbots, automated grading tools and lesson plan generators.

Most teachers who use AI tools say they help improve the quality of their work, according to the Gallup survey. About 61% said they receive better insights about student learning or achievement data, while 57% said the tools help improve their grading and student feedback.

Nearly 60% of teachers agreed that AI improves the accessibility of learning materials for students with disabilities. For example, some kids use text-to-speech devices or translators.

Teachers in the Gallup survey agreed more often with statements about AI’s risks for students than about its opportunities. Roughly a third said students using AI tools weekly would see increases in their grades, motivation, preparation for future jobs and engagement in class. But 57% said weekly use would decrease students’ independent thinking, and 52% said it would decrease critical thinking. Nearly half said it would decrease students’ persistence in solving problems, their ability to build meaningful relationships and their resilience in overcoming challenges.

In 2023, the U.S. Department of Education published a report recommending the creation of standards to govern the use of AI.

“Educators recognize that AI can automatically produce output that is inappropriate or wrong. They are well-aware of ‘teachable moments’ that a human teacher can address but are undetected or misunderstood by AI models,” the report said. “Everyone in education has a responsibility to harness the good to serve educational priorities while also protecting against the dangers that may arise as a result of AI being integrated in ed tech.”

Researchers have found that AI education tools can be incorrect and biased — even scoring academic assignments lower for Asian students than for classmates of any other race.

Hrynowski said teachers are seeking guidance from their schools about how they can use AI. While many are getting used to setting boundaries for their students, they don’t know in what capacity they can use AI tools to improve their jobs.

The survey found that just 19% of teachers are employed at schools with an AI policy. And 68% of those surveyed said they didn’t receive training on how to use AI tools during the 2024-25 school year; roughly half of that group taught themselves.

“There aren’t very many buildings or districts that are giving really clear instructions, and we kind of see that hindering the adoption and use among both students and teachers,” Hrynowski said. “We probably need to start looking at having a more systematic approach to laying down the ground rules and establishing where you can, can’t, should or should not use AI in the classroom.”

Disclosure: Walton Family Foundation provides financial support to The 74.


How terrorist groups are leveraging AI to recruit and finance their operations

Counter-terrorism authorities have, for years, characterized keeping up with terrorist organizations and their use of digital tools and social media apps as a game of Whac-a-Mole.

Jihadist terrorist groups such as Islamic State and its predecessor al-Qaida, and even the neo-Nazi group the Base, have leveraged digital tools to recruit, finance themselves covertly via crypto, download weapons for 3D printing and spread tradecraft to their followers, all while leaving law enforcement and intelligence agencies playing catch-up.

Over time, the work of thwarting attacks and maintaining a technological advantage over these groups has evolved as more and more open-source resources have become available.

Now, with artificial intelligence – both on the horizon as a rapidly developing technology and in the here and now as free, accessible apps – agencies are scrambling.

Sources familiar with the US government’s counterterrorism efforts told the Guardian that multiple security agencies are very concerned about how AI is making hostile groups more efficient in their planning and operations. The FBI declined to comment on this story.

“Our research predicted exactly what we’re observing: terrorists deploying AI to accelerate existing activities rather than revolutionise their operational capabilities,” said Adam Hadley, the founder and executive director of Tech Against Terrorism, an online counterterrorism watchdog, which is supported by the United Nations Counter-Terrorism Committee Executive Directorate (CTED).

“Future risks include terrorists leveraging AI for rapid application and website development, though fundamentally, generative AI amplifies threats posed by existing technologies rather than creating entirely new threat categories.”

So far, groups such as IS and adjacent entities have begun using AI, namely OpenAI’s chatbot ChatGPT, to amplify recruitment propaganda across multimedia in new and expansive ways. Just as AI threatens to upend modern workforces across dozens of job sectors while enriching some of the wealthiest people on earth, it will also create new public safety problems.

“You take something like [an] Islamic State news bulletin, you can now turn that into an audio piece,” said Moustafa Ayad, the executive director for Africa, the Middle East and Asia at the Institute for Strategic Dialogue. “Which we’ve seen supporters do and support groups, too, as well as photo arrays that they produce centrally.”

Ayad continued, echoing Hadley: “A lot of what AI is doing is enabling what’s already there. It’s also supporting their capacity in terms of propaganda and dissemination – it’s a key part of that.”

IS isn’t hiding its fascination with AI and has openly recognized the opportunity to capitalize on what the technology currently offers, even providing a “Guide to AI Tools and Risks” to its supporters over an encrypted channel. In one of its latest propaganda magazines, IS outlined the future of AI and how the group needs to embrace it as part of its operations.

“For every individual, regardless of their field or expertise, grasping the nuances of AI has become indispensable,” it wrote in an article. “[AI] isn’t just a technology, it’s becoming a force that shapes war.” In the same magazine, an IS author explains that AI services can be “digital advisors” and “research assistants” for any member.

In an always-active chat room that IS uses to communicate with its followers and recruits, users have begun discussing the many ways AI can be a resource, though some were wary. One user asked whether it was safe to use ChatGPT for “how to do explosives” but wasn’t sure whether agencies were keeping tabs on it, a question that touches on one of the broader privacy concerns surrounding the chatbot since its inception.

“Are there any other options?” asked an online IS supporter in the same chat room. “Safe one.”

But another user found a less obvious way to avoid setting off alarms if they were being watched: dropping the schematics and instructions for a “simple blueprint for Remote Vehicle prototype according to chatgpt”. Truck ramming has become a method of choice for IS in recent attacks involving followers and operatives alike. In March, an IS-linked account also released an AI-created bomb-making video, presented by an avatar, for a recipe that can be made with household items.

Far-right groups have also been curious about AI, with one advising followers on how to create disinformation memes, while others have turned to AI to create Adolf Hitler graphics and propaganda.

Ayad said some of these AI-driven tools have also been a “boon” to terror groups’ operational security, the techniques they use to communicate securely without prying eyes. Encrypted voice modulators that can mask audio, for example, “can assist with them further cloaking and enhancing their opsec” and day-to-day tradecraft, he said.

Terror groups have always been at the forefront of embracing and maximizing digital spaces for their growth; AI is just the latest example. In June 2014, IS, still coming into the global public consciousness, live-tweeted imagery and messages of its mass executions of more than 1,000 men as it stormed Mosul, causing soldiers in the Iraqi army to flee in fear. After the eventual establishment of the so-called caliphate and its increasing cyber operations, a concerted and coordinated effort followed across government and Silicon Valley to crack down on all IS accounts online. Since then, Western intelligence agencies have singled out crypto, encrypted texting apps and sites where 3D-printed guns can be found, among other spaces, to police and surveil.

But recent cuts to counterterrorism operations across world governments, including some by Doge in the US, have degraded efforts.

“The more pressing vulnerability lies in deteriorating counter-terrorism infrastructure,” said Hadley. “Standards have significantly declined with platforms and governments less focused on this domain.”

Hadley explained that this deterioration is coinciding with “AI-enabled content sophistication,” and he urged companies like Meta and OpenAI to “reinforce existing mechanisms including hash sharing and traditional detection capabilities” and to develop more “content moderation” around AI.

“Our vulnerability isn’t new AI capabilities but our diminished resilience against existing terrorist activities online,” he added.


