Apple Loses Top AI Models Executive to Meta’s Hiring Spree

Apple Inc.’s top executive in charge of artificial intelligence models is leaving for Meta Platforms Inc., another setback in the iPhone maker’s struggling AI efforts.




Apple AI Model Head Reportedly Leaving For Meta

Apple’s manager for artificial intelligence (AI) models is reportedly leaving for Meta. 

That’s according to a report late Monday (July 7) by Bloomberg News, which notes that the departure is another setback for Apple’s AI efforts.

Ruoming Pang, the executive in charge of Apple’s foundation models team, is leaving the company, sources familiar with the matter told Bloomberg. Pang, who joined Apple from Google in 2021, is the latest high-profile hire for Meta’s new superintelligence group, the sources said.

To land Pang, the sources said, Meta offered a package worth tens of millions of dollars per year. It’s part of a larger hiring spree by Meta CEO Mark Zuckerberg, who has recruited AI leaders such as Scale AI’s Alexandr Wang, startup founder Daniel Gross and former GitHub CEO Nat Friedman.

Also Monday, Meta hired Yuanzhi Li, a researcher from OpenAI, and Anton Bakhtin, who worked on Claude at Anthropic, according to additional sources with knowledge of the matter. Last month, it hired a host of other OpenAI researchers.

PYMNTS wrote about this trend last week, noting that while companies like OpenAI, Anthropic and Thinking Machines were paying large sums for technical staff, “the compensation is far from the eye-watering sums of up to $100 million from Meta.”

OpenAI, Anthropic and Thinking Machines all pay salaries in the range of $200,000 to $690,000, according to a report by Business Insider citing federal filings required to hire workers on H-1B visas in the U.S.

Meta, meanwhile, paid $14.3 billion for a 49% stake in Scale AI, a deal that also saw Wang join the company.

OpenAI CEO Sam Altman has said that Meta is promising signing bonuses of up to $100 million with even bigger yearly compensation packages. But Andrew Bosworth, Meta’s chief technology officer, has said Altman was being “dishonest” by suggesting the nine-figure offer is for “every single person.”

PYMNTS wrote about Apple’s AI struggles last month, noting that the company’s latest product showcase illustrated a philosophy focused more on “measured integration, meticulous design and a deep commitment to user privacy” than “rapid innovation in generative AI.”

“This approach stands in contrast to competitors like Amazon, Google and Microsoft, which are embracing large language models and enterprise-scale AI solutions in aggressive and sometimes experimental ways,” that report added.




60% of Teachers Used AI This Year and Saved up to 6 Hours of Work a Week


Nearly two-thirds of teachers used artificial intelligence this past school year, and weekly users saved almost six hours of work per week, according to a recently released Gallup survey. But 28% of teachers still oppose AI tools in the classroom.

The poll, published by the research firm and the Walton Family Foundation, includes perspectives from 2,232 U.S. public school teachers.

“[The results] reflect a keen understanding on the part of teachers that this is a technology that is here, and it’s here to stay,” said Zach Hrynowski, a Gallup research director. “It’s never going to mean that students are always going to be taught by artificial intelligence and teachers are going to take a backseat. But I do like that they’re testing the waters and seeing how they can start integrating it and augmenting their teaching activities rather than replacing them.”

The survey found that 37% of educators use AI tools at least once a month to prepare to teach, including creating worksheets, modifying materials to meet student needs, doing administrative work and making assessments. Less common uses include grading, providing one-on-one instruction and analyzing student data.

A 2023 study from the RAND Corp. found the most common AI tools used by teachers include virtual learning platforms, like Google Classroom, and adaptive learning systems, like i-Ready or Khan Academy. Educators also used chatbots, automated grading tools and lesson plan generators.

Most teachers who use AI tools say they help improve the quality of their work, according to the Gallup survey. About 61% said they receive better insights about student learning or achievement data, while 57% said the tools help improve their grading and student feedback.

Nearly 60% of teachers agreed that AI improves the accessibility of learning materials for students with disabilities. For example, some kids use text-to-speech devices or translators.

More teachers in the Gallup survey agreed on AI’s risks for students than on its opportunities. Roughly a third said students using AI tools weekly would see gains in their grades, motivation, preparation for future jobs and engagement in class. But 57% said weekly use would decrease students’ independent thinking, and 52% said it would decrease critical thinking. Nearly half said it would erode students’ persistence in solving problems, their ability to build meaningful relationships and their resilience in overcoming challenges.

In 2023, the U.S. Department of Education published a report recommending the creation of standards to govern the use of AI.

“Educators recognize that AI can automatically produce output that is inappropriate or wrong. They are well-aware of ‘teachable moments’ that a human teacher can address but are undetected or misunderstood by AI models,” the report said. “Everyone in education has a responsibility to harness the good to serve educational priorities while also protecting against the dangers that may arise as a result of AI being integrated in ed tech.”

Researchers have found that AI education tools can be incorrect and biased — even scoring academic assignments lower for Asian students than for classmates of any other race.

Hrynowski said teachers are seeking guidance from their schools about how they can use AI. While many are getting used to setting boundaries for their students, they don’t know in what capacity they can use AI tools to improve their jobs.

The survey found that 19% of teachers are employed at schools with an AI policy. During the 2024-25 school year, 68% of those surveyed said they didn’t receive training on how to use AI tools. Roughly half of them taught themselves how to use it.

“There aren’t very many buildings or districts that are giving really clear instructions, and we kind of see that hindering the adoption and use among both students and teachers,” Hrynowski said. “We probably need to start looking at having a more systematic approach to laying down the ground rules and establishing where you can, can’t, should or should not use AI in the classroom.”

Disclosure: Walton Family Foundation provides financial support to The 74.



How terrorist groups are leveraging AI to recruit and finance their operations

Counter-terrorism authorities have, for years, characterized keeping up with terrorist organizations and their use of digital tools and social media apps as a game of Whac-a-Mole.

Jihadist terrorist groups such as Islamic State and its predecessor al-Qaida, as well as the neo-Nazi group the Base, have leveraged digital tools to recruit, covertly raise funds via cryptocurrency, download weapon designs for 3D printing and spread tradecraft to their followers, all while leaving law enforcement and intelligence agencies playing catch-up.

Over time, the task of thwarting attacks and maintaining a technological advantage over these terror groups has grown more complicated as more and more open-source resources become available.

Now, with artificial intelligence – both on the horizon as a rapidly developing technology and in the here and now as free, accessible apps – agencies are scrambling.

Sources familiar with the US government’s counterterrorism efforts told the Guardian that multiple security agencies are very concerned about how AI is making hostile groups more efficient in their planning and operations. The FBI declined to comment on this story.

“Our research predicted exactly what we’re observing: terrorists deploying AI to accelerate existing activities rather than revolutionise their operational capabilities,” said Adam Hadley, the founder and executive director of Tech Against Terrorism, an online counterterrorism watchdog, which is supported by the United Nations Counter-Terrorism Committee Executive Directorate (CTED).

“Future risks include terrorists leveraging AI for rapid application and website development, though fundamentally, generative AI amplifies threats posed by existing technologies rather than creating entirely new threat categories.”

So far, groups such as IS and adjacent entities have begun using AI, notably OpenAI’s chatbot ChatGPT, to amplify recruitment propaganda across multimedia in new and expansive ways. Much as the technology threatens to upend modern workforces across dozens of job sectors while enriching some of the wealthiest people on earth, AI will also create new public safety problems.

“You take something like an Islamic State news bulletin, you can now turn that into an audio piece,” said Moustafa Ayad, the executive director for Africa, the Middle East and Asia at the Institute for Strategic Dialogue. “Which we’ve seen supporters do and support groups, too, as well as photo arrays that they produce centrally.”

Ayad continued, echoing Hadley: “A lot of what AI is doing is enabling what’s already there. It’s also supporting their capacity in terms of propaganda and dissemination – it’s a key part of that.”

IS isn’t hiding its fascination with AI and has now openly recognized the opportunity to capitalize on what it currently offers, even providing a “Guide to AI Tools and Risks” to its supporters over an encrypted channel. In one of its latest propaganda magazines, IS outlined the future of AI and how the group needs to embrace it as part of its operations.

“For every individual, regardless of their field or expertise, grasping the nuances of AI has become indispensable,” it wrote in an article. “[AI] isn’t just a technology, it’s becoming a force that shapes war.” In the same magazine, an IS author explains that AI services can be “digital advisors” and “research assistants” for any member.

In an always-active chat room that IS uses to communicate with its followers and recruits, users have begun discussing the many ways AI can be a resource, though some were wary. One user asked whether it was safe to use ChatGPT for “how to do explosives” but wasn’t sure if agencies were keeping tabs on it – one of the broader privacy concerns that have surrounded the chatbot since its inception.

“Are there any other options?” asked an online IS supporter in the same chat room. “Safe one.”

But another user found a less obvious way to avoid setting off alarms if they were being watched: dropping into the chat the schematics and instructions for a “simple blueprint for Remote Vehicle prototype according to chatgpt”. Truck ramming has become a method of choice for IS in recent attacks involving followers and operatives alike. In March, an IS-linked account also released an AI-created bomb-making video, fronted by an avatar, for a recipe that can be made with household items.

Far-right groups have also been curious about AI, with one advising followers on how to create disinformation memes, while others have looked to AI for the creation of Adolf Hitler graphics and propaganda.

Ayad said some of these AI-driven tools have also been a “boon” to terror groups’ operational security – techniques for communicating securely without prying eyes – such as encrypted voice modulators that can mask audio, which altogether “can assist with them further cloaking and enhancing their opsec” and their day-to-day tradecraft.

Terror groups have always been at the forefront of embracing and maximizing digital spaces for their growth; AI is just the latest example. In June 2014, IS, still coming into global public consciousness, live-tweeted imagery and messages of its mass executions of over 1,000 men as it stormed Mosul, prompting soldiers in the Iraqi army to flee in fear. After the eventual establishment of the so-called Caliphate and its expanding cyber operations, a concerted and coordinated effort across governments and Silicon Valley followed to crack down on IS accounts online. Since then, western intelligence agencies have singled out cryptocurrency, encrypted messaging apps and sites hosting 3D-printed gun designs, among others, as spaces to police and surveil.

But recent cuts to counterterrorism operations across world governments, including some by Doge in the US, have degraded efforts.

“The more pressing vulnerability lies in deteriorating counter-terrorism infrastructure,” said Hadley. “Standards have significantly declined with platforms and governments less focused on this domain.”

Hadley explained that this deterioration is coinciding with “AI-enabled content sophistication”, and urged companies like Meta and OpenAI to “reinforce existing mechanisms including hash sharing and traditional detection capabilities” and to develop more “content moderation” around AI.

“Our vulnerability isn’t new AI capabilities but our diminished resilience against existing terrorist activities online,” he added.


