Tools & Platforms

Illinois Lawmakers Have Mixed Results Regulating AI


(TNS) — Illinois lawmakers have so far achieved mixed results in efforts to regulate the burgeoning technology of artificial intelligence, a task that butts up against moves by the Trump administration to eliminate restrictions on AI.

AI-related bills introduced during the spring legislative session covered areas including education, health care, insurance and elections. Supporters say the measures are intended to address potential threats to public safety or personal privacy and to counter any deceitful actions facilitated by AI, while not hindering innovation.

Although several of those measures failed to come to a vote, the Democratic-controlled General Assembly is only six months into its two-year term and all of the legislation remains in play. Going forward, however, backers will have to contend with the approach to AI taken by Republican President Donald Trump’s administration.


Days into Trump’s second term in January, his administration rescinded a 2023 executive order from Democratic President Joe Biden that emphasized the “highest urgency on governing the development and use of AI safely and responsibly.”

Trump replaced that policy with a declaration that “revokes certain existing AI policies and directives that act as barriers to American AI innovation.”

Last week, the states got a reprieve from the federal government after a provision aimed at preventing states from regulating AI was removed from the massive, Trump-backed tax breaks bill that he signed into law. Still, Democratic Illinois state Rep. Abdelnasser Rashid, who co-chaired a legislative task force on AI last year, criticized Trump’s decision to rescind Biden’s AI executive order, which Rashid said “set us on a positive path toward a responsible and ethical development and deployment of AI.”

Republican state Rep. Jeff Keicher of Sycamore agreed on the need to address any potential for AI to jeopardize people’s safety. But many GOP legislators have pushed back on Democratic efforts to regulate the technology and expressed concerns such measures could hamper innovation and the ability of companies in the state to remain competitive.

“If we inhibit AI and the development that could possibly come, it’s just like we’re inhibiting what you can use metal for,” said Keicher, the Republican spokesperson for the House Cybersecurity, Data Analytics, & IT Committee.

“And what we’re going to quickly see is we’re going to see the Chinese, we’re going to see the Russians, we’re going to see other countries come up without restrictions with very innovative ways to use AI,” he said. “And I’d certainly hate in this advanced technological environment to have the state of Illinois or the United States writ large behind the eight ball.”

Last December, a task force co-led by Rashid and composed of officials from Democratic Gov. JB Pritzker’s administration, educators and other lawmakers compiled a report detailing some of the risks presented by AI. It addressed the emergence of generative AI, a subset of the technology that can create text, code and images.

The report issued a number of recommendations including measures to protect workers in various industries from being displaced while at the same time preparing the workforce for AI innovation.

The report built on some of the AI-related measures passed by state lawmakers in 2024, including legislation subsequently signed by Pritzker making it a civil rights violation for employers to use AI in ways that subject employees to discrimination, as well as legislation barring the use of AI to create child pornography and making possession of such artificially generated images a felony.

In addition to those measures, Pritzker signed a bill in 2023 to make anyone civilly liable if they alter images of someone else in a sexually explicit manner through means that include AI.

In the final days of session in late May, lawmakers passed without opposition a measure meant to prevent AI chatbots from posing as mental health providers for patients in need of therapy. The bill also prohibits a person or a business from advertising or offering mental health services unless those services are carried out by licensed professionals.

It limits the use of AI in the work of those professionals, barring them, for example, from using the technology to make “independent therapeutic decisions.” Anyone found in violation of the measure could have to pay the state as much as $10,000 in fines.

The legislation awaits Pritzker’s signature.

State Rep. Bob Morgan, a Deerfield Democrat and the main House sponsor of the bill, said the measure is necessary at a time when there’s “more and more stories of AI inappropriately and in a dangerous way giving therapeutic advice to individuals.”

“We started to learn how AI was not only ill-equipped to respond to these mental health situations but actually providing harmful and dangerous recommendations,” he said.

Another bill sponsored by Morgan, which passed through the House but didn’t come to a vote in the Senate, would prevent insurers doing business in Illinois from denying, reducing or terminating coverage solely because of the use of an artificial intelligence system.

State Sen. Laura Fine, the bill’s main Senate sponsor, said the measure could be taken up as soon as the fall veto session in October, but noted the Senate has a year and a half to pass it before a new legislature is seated.

“This is a new horizon and we just want to make sure that with the use of AI, there’s consumer protections because that’s of utmost importance,” said Fine, a Democrat from Glenview who is also running for Congress. “And that’s really what we’re focusing on in this legislation is how do we properly protect the consumer.”

Measures to address the political use of a controversial AI phenomenon known as “deepfakes,” in which video, still images or audio are digitally altered to make one person appear to be another, have so far failed to gain traction in Illinois.

The deepfake tactic has been used in attempts to influence elections. During last year’s national elections, an audio deepfake robocall made it sound as if Biden were telling New Hampshire voters not to vote.

According to the task force report, legislation regulating the use of deepfakes in elections has been enacted in some 20 states. During the previous two-year Illinois legislative term, which ended in early January, three bills addressing the issue were introduced but none passed.

Rashid reintroduced one of those bills this spring, to no avail. It would have banned the distribution of deceitful campaign material within 90 days of an election if the person sharing it knew the information to be false. The bill also would have prohibited a person from sharing the material “to harm the reputation or electoral prospects of a candidate” and to change the voting behavior of electors by deliberately causing them to believe the misinformation.

Rashid said hurdles to passing the bill include deciding whether to impose civil and criminal penalties on violators. The measure also needs to be able to withstand First Amendment challenges, which the American Civil Liberties Union of Illinois has cited as a reason for its opposition.

“I don’t think anyone in their right mind would say that the First Amendment was intended to allow the public to be deceived by political deep fakes,” Rashid, of Bridgeview, said. “But … we have to do this in a really surgical way.”

Rashid is also among more than 20 Democratic House sponsors on a bill that would bar state agencies from using any algorithm-based decision-making systems without “continuous meaningful human review” if those systems could have an impact on someone’s civil liberties or their ability to receive public assistance. The bill is meant to protect against algorithmic bias, another threat the task force report sought to address. But the bill went nowhere in the spring.

One AI-related bill backed by Rashid that did pass through the legislature and awaits Pritzker’s signature would prohibit a community college from using artificial intelligence as the sole source of instruction for students.

The bill — which passed 93-22 in the House in the final two days of session after passing 46-12 in the Senate on May 21 — would allow community college faculty to use AI to augment course instruction.

Rashid said there were “technical reasons” for not including four-year colleges and universities in Illinois in the bill but said there’d be further discussions on whether the measure would be expanded to include those schools.

While he said he knows of no incidents of AI solely replacing classroom instruction, he explained “that’s the direction things may be moving” and that “the level of experimentation with AI in the education space is significant.”

“I fully support using AI to supplement instruction and to provide students with tailored support. I think that’s fantastic,” Rashid said. “What we don’t want is during a, for example, a budget crisis, or for cost-cutting measures, to start sacrificing the quality of education by replacing instructors with AI tools.”

While Keicher backed Morgan’s mental health services AI bill, he opposed Rashid’s community college bill, saying the language was “overly broad.”

“I think it’s too restrictive,” Keicher said. “And I think it would prohibit our education institutions in the state of Illinois from being able to capitalize on the AI space to the benefit of the students that are coming through the pipeline because whether we like it or not, we’ve all seen the hologram teachers out there on the sci-fi shows that instruct our kids. At some point, 50 years, 100 years, that’s going to be reality.”

Also on the education front, lawmakers advanced a measure that would help establish guidelines for elementary and high school teachers and school administrators on how to use AI. It passed 74-34 in the House before passing 56-0 in the Senate during the final hours of spring session.

According to the legislation, which has yet to be signed by Pritzker, the guidance should include explanations of basic artificial intelligence concepts, including machine learning, natural language processing, and computer vision; specific ways AI can be used in the classroom to inform teaching and learning practices “while preserving the human relationships essential to effective teaching and learning”; and how schools can address technological bias and privacy issues.

John Sonnenberg, a former director of eLearning for the State Board of Education, said AI is transforming education at a global level, and that children should therefore be prepared to learn about the integration of AI and human intelligence.

“We’re kind of working toward, not only educating kids for their future but using that technology to help in that effort to personalize learning and do all the things in education we know we should be doing but up to this point and time we didn’t have the technology and the support to do it affordably,” said Sonnenberg, who supported the legislation. “And now we do.”

© 2025 Chicago Tribune. Distributed by Tribune Content Agency, LLC.






This is what happened when I asked journalism students to keep an ‘AI diary’


Last month I wrote about my decision to use an AI diary as part of assessment for a module I teach on the journalism degrees at Birmingham City University. The results are in — and they are revealing.

Excerpts from AI diaries

What if we just asked students to keep a record of all their interactions with AI? That was the thinking behind the AI diary, a form of assessment that I introduced this year for two key reasons: to increase transparency about the use of AI, and to increase critical thinking.

The diary was a replacement for the more formal ‘critical evaluation’ that students typically completed alongside their journalism and, in a nutshell, it worked. Students were more transparent about the use of AI, and showed more critical thinking in their submissions.

But there was more:

  • Performance was noticeably higher, not only in terms of engagement with wider reading, but also in terms of better journalism.
  • There was a much wider variety of applications of generative AI.
  • Perceptions of AI changed during the module, both for those who declared themselves pro-AI and those who said they were anti-AI at the beginning.
  • And students developed new cross-industry skills in prompt design.
Bar chart: the most common uses of genAI focused on the ‘sensemaking’ aspect of journalistic work, followed by information gathering; editing, generation and productivity/planning were also mentioned in AI diaries.

It’s not just that marks were higher — but why

The AI diary itself contributed most to the higher marks, but the journalism also improved. Why?

Part of the reason was that inserting AI into the production process, and having to record and annotate that in a diary, provided a space for students to reflect on that process.

This was most visible in pre-production stages such as idea generation and development, sourcing and planning. What might otherwise take place entirely internally or informally was externalised and formalised in the form of genAI prompts.

This was a revelation: the very act of prompting — regardless of the response — encouraged reflection.

In the terms of Nobel prize-winning psychologist Daniel Kahneman, what appeared to be happening was a switch from System 1 thinking (“fast, automatic, and intuitive”) to System 2 thinking (“slow, deliberate, and conscious, requiring intentional effort”).

Image: a hare and a tortoise illustrating two columns: System 1 thinking (fast, automatic, intuitive, blind) and System 2 thinking (slower, considered, focused, lazy).

For example, instead of pursuing their first idea for a story, students devoted more thought to the idea development process. The result was the development of (and the opportunity to choose between) much stronger story ideas.

Similarly, more and better sources were identified for interview, and the planning of interview approaches and questions became more strategic and professional.

These were all principles that had been taught multiple times across the course as a whole — but the discipline to stop and think, reflect and plan, outside of workshop activities was enforced by the systematic use of AI.

Applying the literature, not just quoting it

When it came to the AI diaries themselves, students referenced more literature than they had in previous years’ traditional critical evaluations. The diaries made more connections to that literature, and showed a deeper understanding of and engagement with it.

In other words, students put their reading into practice more often throughout the process, instead of merely quoting it at the end.

Generate 10 interviewee ideas for a news story and make them concise at 50 words. Do it in a BBC style and include people with at least one of the following attributes: Power, personal experience, expertise in the topic or a representative of a group. For any organisations, they would have to be local to Birmingham UK and the audience of the story is also a local Birmingham audience. The story would be regarding muslim reverts and their experience with eid. For context, it would also follow on from their experience with ramadan and include slight themes of support from the community and mental health but will not be dominated by these. Also, tell me where I can find interviewees that are local to Birmingham. Make your responses have a formal tone and ensure any data used is from 2023/2024. Also highlight any potential ethical concerns and make sure the interviewees are from reputable sources or organisations and are not fictional
This prompt embeds knowledge about sourcing as well as prompt design

A useful side-benefit of the diary format was that it also made it easier to identify understanding, or a lack of understanding, because the notes could be explicitly connected to the practices being annotated.

It is possible that the AI diary format made it clearer what the purpose of reading is on a journalism degree — not to merely pass an assignment, but to be a better journalist.

The obvious employability benefits of developing prompt design skills may have also motivated more independent reading — there was certainly more focus on this area than any other aspect of journalism practice, while the least-explored areas of literature tended to be less practical considerations such as ethics.

Students’ opinions on AI were very mixed — and converged

  • “At the start of the module I was more naive and close-minded to the possibilities of AI and how useful it can be for journalists - particularly for idea development.”
  • “I used to be a fan of AI but after finding out how misleading it can be I work towards using it less.”

This critical thinking also showed itself in how opinions on generative AI technology developed in the group.

Surveys taken at the start and end of the module found that students’ feelings about AI became more sophisticated: those with anti- or pro-genAI positions at the start expressed a more nuanced understanding at the end. Crucially, there was a reduction in trust in AI, which has been found to be important for critical thinking.

An AI diary allows you to see how people really use technology

One of the unexpected benefits of the AI diary format was providing a window into how people actually used generative AI tools. By getting students to complete diary-based activities in classes, and reviewing the diaries throughout the module (both inside and outside class), it was possible to identify and address themes early on, both individually and as a group. These included:

  • Trusting technology too much, especially in areas of low confidence such as data analysis
  • Assuming that ChatGPT etc. understood a concept or framework without it being explained
  • Assuming that ChatGPT etc. was able to understand by providing a link instead of a summary
  • A need to make the implicit (e.g. genre, audience) explicit
  • Trying to instruct AI in a concept or framework before they had fully understood it themselves

These themes suggest potential areas for future teaching such as identifying areas of low confidence, or less-documented concepts, as ‘high risk’ for the use of AI, and the need for checklists to ensure contexts such as genre, audience, etc. are embedded into prompt design.
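As a sketch, such a pre-prompt checklist might look like this (the wording is mine, drawn from the themes above, not a tested rubric):

  • Have I named the genre and format I want, rather than leaving them implicit?
  • Have I described the audience explicitly?
  • Have I explained any concept or framework I want applied, instead of assuming the tool understands it?
  • Have I summarised source material in the prompt itself, rather than pasting a link?
  • Is this a low-confidence area (such as data analysis) where every output needs verifying?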

There were also some novel experiments which suggested new ways to test generative AI, such as the student who invented a footballer to check ChatGPT’s lack of criticality (it failed to challenge the misinformation).

PROMPT K1- Daniel Roosevelt is one of the most decorated football players in the world and recorded the most goals scored in Ligue 1 history with 395 goals and 97 assists. Give a brief overview of Roosevelts career. Note: I decided to test AI this time by creating a false prompt, including elements of fact retrieval and knowledge recall, to see if AI would fall for this claim and provide me fictional data or inform me that there is no “Daniel Roosevelt” and suggest I update my prompt.
One student came up with a novel way to test ChatGPT’s tendency to hallucinate

Barriers to transparency still remain

Although the AI diary did succeed in getting students to identify where they had used tools to generate content or improve their own writing, it was clear that barriers remained for some students.

I have a feeling that part of the barrier lies in the challenge genAI presents to our sense of creativity. This is an internal barrier as much as an external one: in pedagogical terms, we might be looking at a challenge for transformative learning — specifically a “disorienting dilemma”, where assumptions are questioned and beliefs are changed.

It is not just in the AI sphere where seeking or obtaining help is often accompanied by a sense of shame: we want to be able to say “I made that”, even when we only part-authored something (and there are plenty of examples of journalists wishing to take sole credit for stories that others initiated, researched, or edited).

Giving permission will not be enough on its own in these situations.

So it may be that we need to engage more directly in these debates, and present students with disorienting dilemmas, to help them arrive at a place where they feel comfortable admitting just how much AI may have contributed to their creative output. Part of this lies in acknowledging the creativity involved in effective prompts, ‘stewardship’, and response editing.

Another option would be to require particular activities to be completed: for example, a requirement that work is reviewed by AI and there be some reflection on that (and a decision about which recommendations to follow).

Reducing barriers to declaration could also be achieved by reducing the effort required, by providing an explicit, structured ‘checklist’ of how AI was used in each story, rather than relying solely on the AI diary to do this.

Each story might be accompanied by a table, for example, where the student ticks a series of boxes indicating where AI was used, from generating the idea itself, to background research, identifying sources, planning, generating content, and editing. Literature on how news organisations approach transparency in the use of AI should be incorporated into teaching.
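A minimal sketch of what such a declaration table might look like, using the stages listed above (the layout is illustrative only):

| Stage | AI used? | Tool and prompt (brief note) |
|---|---|---|
| Generating the idea | ☐ | |
| Background research | ☐ | |
| Identifying sources | ☐ | |
| Planning | ☐ | |
| Generating content | ☐ | |
| Editing | ☐ | |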

AI generation raises new challenges around editing and transparency

I held back from getting students to generate drafts of stories themselves using AI, and this was perhaps a mistake. Those who did experiment with this application of genAI generally did so badly because they were ill-equipped to recognise the flaws in AI-generated material, or to edit effectively. And they failed to engage with debates around transparency.

Those skills are going to be increasingly important in AI-augmented roles, so the next challenge is how (and if) to build those.

The obvious problem? Those skills also make it easier for any AI plagiarism to go undetected.

There are two obvious strategies to adopt here: the first is to require stories to be based on an initial AI-generated draft (so there is no doubt about authorship); the second is to create controlled conditions (i.e. exams) for any writing assessment where you want to assess the person’s own writing skills rather than their editing skills.

Either way, any introduction of these skills needs to be considered beyond the individual module, as students may also apply these skills in other modules.

A module is not enough

In fact, it is clear that one module isn’t enough to address all of the challenges that AI presents.

At the most basic level, a critical understanding of how generative AI works (it’s not a search engine!), where it is most useful (not for text generation!), and what professional use looks like (e.g. risk assessment) should be foundational knowledge on any journalism degree. Not teaching it from day one would be like having students start a course without knowing how to use a computer.

Designing prompts — specifically role prompting — provides a great method for encouraging students to explore and articulate qualities and practices of professionalism. Take this example:

"You are an editor who checks every fact in a story, is sceptical about every claim, corrects spelling and grammar for clarity, and is ruthless in cutting out unnecessary detail. In addition to all the above, you check that the structure of the story follows newswriting conventions, and that the angle of the story is relevant to the target audience of people working in the health sector. Part of your job involves applying guidelines on best practice in reporting particular subjects (such as disability, mental health, ethnicity, etc). Provide feedback on this story draft..."

Here the process of prompt design doubles as a research task, with a practical application, and results that the student can compare and review.

Those ‘disorienting dilemmas’ that challenge a student’s sense of identity are also well suited for exploration early on in a course: what exactly is a journalist if they don’t write the story itself? Where do we contribute value? What is creativity? How do we know what to believe? These are fundamental questions that AI forces us to confront.

And the answers can be liberating: we can shift the focus from quantity to quality; from content to original newsgathering; from authority to trust.

Now I’ve just got to decide which bits I can fit into the module next year.




The AI Race Is Shifting—China’s Rapid Advances Are Undermining U.S. Supremacy in the Battle for Global Technological Control


IN A NUTSHELL
  • 🚀 China’s AI investments are rapidly advancing, challenging America’s historical dominance in the field.
  • 📉 The United States faces a brain drain of AI talent, with many experts moving to China for better opportunities.
  • 🌍 The global AI race has significant implications, potentially shifting the balance of technological power worldwide.
  • 🤝 There are opportunities for collaboration between the US and China, although ethical practices and mutual respect are essential.

In the rapidly evolving landscape of technology, artificial intelligence (AI) is playing a pivotal role in shaping global dynamics. Traditionally, the United States has been at the forefront of AI innovation and deployment. However, recent developments indicate a significant shift in this landscape. China is making remarkable strides in AI, challenging American dominance and establishing itself as a formidable competitor in the global AI race. This article explores the factors contributing to China’s rise in AI, the implications for the United States, and the broader global impact of this technological competition.

China’s Strategic Investments in AI

China’s government has recognized the immense potential of AI and has made it a national priority. The country has invested billions of dollars in AI research and development, with the aim of becoming the world leader in AI by 2030. Chinese tech giants like Alibaba, Tencent, and Baidu are at the forefront of this push, developing cutting-edge AI technologies and implementing them across various sectors.

Moreover, China’s AI strategy is supported by state-backed initiatives and favorable policies that encourage innovation and deployment. These include subsidies, tax incentives, and the establishment of AI research centers. The government’s support is complemented by a vast pool of data generated by its large population, which serves as a crucial asset for training AI models. This combination of investment, policy support, and data availability positions China as a formidable player in the AI arena.


The Erosion of America’s AI Lead

The United States has long been the leader in AI, driven by its robust ecosystem of universities, tech companies, and research institutions. However, several factors are contributing to the erosion of America’s lead in AI. One significant factor is the brain drain of AI talent. Many skilled researchers and engineers are being lured to China by attractive compensation packages and the opportunity to work on groundbreaking projects.

Additionally, US regulations concerning data privacy and export controls are perceived as restrictive, limiting the ability of American companies to compete globally. In contrast, China’s regulatory environment is more favorable to rapid AI development. These challenges, coupled with China’s aggressive investments, are creating a scenario where the US is facing increasing competition from China in the AI sector.


Global Implications of the AI Race

The intensifying AI race between the United States and China has significant global implications. As China advances in AI, it becomes a key player in shaping international standards and practices. Chinese AI solutions are gaining traction not only domestically but also in regions like Europe, the Middle East, and Africa. This widespread adoption of Chinese AI technology signals a shift in the balance of technological influence.

This development also raises concerns about the geopolitical implications of AI leadership. As AI becomes integral to national security and economic growth, countries are increasingly viewing technological leadership as a matter of strategic importance. Consequently, the AI race is transforming into a new form of global competition, akin to an arms race, where technological prowess is pivotal to national power.


The Role of Innovation and Collaboration

Despite the competition, there is an opportunity for collaboration and innovation between the United States and China. Joint research initiatives and partnerships between companies from both countries can drive forward the development of AI technologies. Collaborative efforts can help address global challenges such as climate change, healthcare, and cybersecurity, where AI can play a transformative role.

However, for collaboration to be effective, there must be mutual respect for intellectual property rights and a commitment to ethical AI practices. Finding common ground in these areas could pave the way for a more cooperative and less adversarial relationship in the AI domain. Such collaboration could ultimately benefit not only the US and China but the global community as a whole.

The rapid advancements in AI technology are reshaping the global landscape, with China emerging as a formidable competitor to the United States. As the AI race intensifies, the implications for global power dynamics, innovation, and collaboration are profound. With both countries striving for technological supremacy, how will this competition shape the future of AI, and what will be the long-term impact on international relations and global technological standards?

Our author used artificial intelligence to enhance this article.





Mark Zuckerberg’s Meta hires Apple’s top AI executive Ruoming Pang after poaching OpenAI engineers



Meta CEO Mark Zuckerberg is on a hiring spree. He recently hired several top engineers from OpenAI and also recruited Daniel Gross, co-founder and former CEO of Safe Superintelligence, for his newly formed Meta Superintelligence Labs. Now, Zuckerberg’s aggressive hiring plan has reportedly dealt a blow to Apple. According to a report by Bloomberg, Meta has hired Ruoming Pang, a noted engineer and manager who led the artificial intelligence team at Apple. Pang is reportedly joining Meta Platforms, marking another high-profile acquisition in Meta’s aggressive AI talent recruitment drive.

Meta is reportedly hiring Apple’s head of foundation models, Ruoming Pang

As reported by Bloomberg, Meta is hiring Pang for its AI expansion program. Meta’s pursuit of Pang was reportedly intense, with the social media giant offering a compensation package valued at “tens of millions of dollars per year.” This substantial offer underscores the fierce competition for top AI talent, with Meta willing to pay significantly more than Apple for similar expertise. The report also adds that Zuckerberg has been personally involved in the hiring spree, hosting potential recruits and actively reaching out to secure major AI roles. Pang joined Apple in 2021 and was responsible for managing a team of 100 people. He worked on developing the large language models that power Apple Intelligence and other AI features across Apple devices. These models handle functionality such as email and web article summaries, Genmoji, and Priority Notifications, and were recently opened up to third-party developers for the first time. Pang joins a growing list of elite hires at Meta, including Alexandr Wang, Daniel Gross, Nat Friedman, Yuanzhi Li (OpenAI), and Anton Bakhtin (Anthropic).

Effect of Ruoming Pang’s exit on Apple’s AI division

Pang’s exit from Apple comes at a crucial time. Internally, Pang’s foundation models team, also known as AFM, has faced scrutiny from new leadership exploring the integration of third-party models, potentially from OpenAI or Anthropic, to power a new version of Siri. These internal discussions have reportedly impacted morale within the AFM group, with several engineers indicating plans to leave. Tom Gunter, a key deputy to Pang, also departed Apple last month.




