
Tools & Platforms

Nvidia says H20 export controls didn’t stop China’s AI progress — claims ‘they only stifled U.S. economic and technology leadership’



Nvidia shared an opinion piece by Aaron Ginn, co-founder of AI company Hydra Host, arguing that despite U.S. export controls on Nvidia’s H20 chips, China continued to achieve AI breakthroughs. Sharing Ginn’s piece on X, Nvidia said Washington’s bans only held the U.S. back from expanding its influence.

“H20 export controls didn’t slow China — they only stifled U.S. economic and technology leadership,” Nvidia said on the social media platform. “For the U.S. to win the AI race, America’s full-stack platform must remain the global standard.” It then linked to Ginn’s op-ed in the Wall Street Journal.






Doomprompting: Endless tinkering with AI outputs can cripple IT results



“Employees who don’t really understand the goal they’re after will spin in circles not knowing when they should just call it done or step away,” Farmer says. “The enemy of good is perfect, and LLMs make us feel like if we just tweak that last prompt a little bit, we’ll get there.”

Agents of doom

Observers see two versions of doomprompting. The first is an individual’s interactions with an LLM or another AI tool. This scenario can play out outside of work, but it can also happen during office hours, with an employee repeatedly tweaking the outputs of, for example, an AI-generated email, line of code, or research query.

The second type of doomprompting is emerging as organizations adopt AI agents, says Jayesh Govindarajan, executive vice president of AI at Salesforce. In this scenario, an IT team continuously tweaks an agent in search of minor improvements in its output.




A Human Development Conversation with Paul Makdissi – Arab Reform Initiative




Paul Makdissi, Professor in the Department of Economics at the University of Ottawa, has long focused on poverty, inequality, and human development. While not a technology specialist, he has reflected deeply on how artificial intelligence (AI) and digitalization interact with labor markets, freedoms, and social protection systems. In this interview, he shares his perspective on the opportunities and risks these transformations pose for the Arab region, and on how social protection systems can adapt to ensure inclusive human development.

Why approach AI and digitalization from a human development lens?

While it is important to assess the impact of policies on indicators like poverty and inequality, it is equally important to think beyond policy blueprints and expert prescriptions and instead offer a space for reflection and questions, shaped by our research, readings, and meta-analysis of human development as the central goal that should guide public policy, especially in times of rapid change. Reflecting on how AI and digitalization affect human development, the labor market, and fundamental freedoms, with a special focus on the implications for social protection systems, particularly in the Arab region, is key to a genuine understanding of the roots of current transformations and the dynamics they entail.

What is social protection meant to do in the AI and digitalization era?

Social protection policies exist to protect citizens and residents from shocks to their human development outcomes, i.e., shocks that threaten their ability to flourish and lead the lives they have reason to value. These shocks can come from illness, job loss, political unrest, or major technological transformations. AI and digitalization, while enhancing production capacities, are also a source of profound disruption. If we are serious about human development, understood not as GDP growth but as the expansion of people’s right to live the lives they have reasons to value, then we must think carefully about how the technological revolution interacts with that objective.

How will AI reshape labor markets, and what does “creative destruction,” as you call it, mean for people?

AI is not just another innovation. It is part of a broader technological shift, similar to previous industrial revolutions, that is likely to fundamentally reshape labor markets and societies. Schumpeterian models of economic growth have long taught us that innovation is a process of “creative destruction.” New technologies emerge, rendering some production processes and some labor skills obsolete. Resources are reallocated. The overall productive capacity of the economy improves, but not without casualties along the way. These models tend to present this dynamic as clean and inevitable, almost sanitized: a necessary adjustment on the path to higher growth.

Behind this sanitized macroeconomic narrative hides the experience of this “destruction” at the individual level. When entire occupations disappear, when once-valued skills lose their market relevance, people experience the collapse of their careers and lose their livelihoods. These are shocks to human development. Social protection systems must aim to protect individuals from them, not just with temporary income support but with substantive pathways to reintegrate into the transformed labor market. In addition to the social justice aspect of protecting individuals, a well-designed system that offers retraining and upskilling or reskilling can also promote economic growth by reducing the social cost of structural changes and, in turn, lowering resistance to technological change.

How could AI and digitalization deepen inequality and concentrate power?

The substitution of labor by machines and algorithms also raises additional concerns about income inequality. In a market economy, an individual’s claim on their share of national income depends mainly on the production factors they own, including their own labor. Many people only own their labor as a factor of production. When AI and digital technologies reduce labor demand, the benefits of growth become increasingly concentrated in the hands of those who own the machines, codes, and data. In this context, millions of workers may lose their source of income while a small group reaps the benefit of this technological change. This will lead to an increase in income inequality that needs to be addressed if we want to avoid the concentration of political influence and potentially social unrest. For this reason, AI is not only an economic challenge but also a political one.

Which jobs are most vulnerable to automation – what does teaching reveal?

This potential impact on the labor market leads us to reflect on who could be most impacted by this technological revolution. AI tools have fundamentally altered how we approach teaching and assessment, especially in quantitative courses. Traditional take-home assignments have become obsolete at the undergraduate level. When we test assignments using AI, the technology consistently produces near-perfect responses that would earn top marks. This means that every student now has access to what appears to be A+ work, regardless of their actual understanding of the material. The undergraduate take-home assessment landscape has essentially been flattened.

However, graduate-level take-home assignments tell a different story. Assignments generated with AI clearly reveal the students’ limitations. When graduate students rely too heavily on AI, their work reads like that of someone who has not fully grasped the complexity of the subject matter. The gap between surface-level competence and true understanding becomes immediately apparent. The teaching experience largely reflects the reality of current societies, where not only low-skilled workers are prone to job losses and displacements due to automation, but also many high-skilled workers, at least up to a certain level of expertise, who need upskilling or reskilling support that they often do not receive.

What distinguishes tasks that AI can automate from those requiring judgment?

Looking at teaching again, one important difference lies in the nature of the take-home assignments themselves. At the undergraduate level, the assignments we give tend to involve specific and well-defined problems with clear instructions on what to compute and estimate. This structure makes them especially vulnerable to automation. Graduate-level assignments, by contrast, often require students to define the problem themselves, select appropriate methods, and justify their choices. This requires judgment, interpretation, and critical thinking. These are the tasks for which the limitations of AI become evident, ones that demand more original input and a more complex understanding from the human intellect.

What are the implications for youth employment in the Arab region?

All that has been said suggests that jobs most vulnerable to automation are likely those that rely on routine cognitive tasks, often the domain of workers with intermediate levels of education. From my perspective, in many fields, AI plays the role of an assistant with good programming and quantitative skills. This fact can impact the future of work for millions of young people in the Arab region, many of whom are already struggling to find meaningful employment. While technological innovation has always destroyed some jobs while creating others, the process is rarely painless, and the benefits are not automatically shared. Our educational institutions need to prepare students not only for what the labor market looks like today, but for what it might become tomorrow. Education and training programs should focus on developing skills that complement AI rather than those that are substitutable by it, emphasizing creative thinking and complex problem-solving.

Beyond income, how does AI challenge human development and freedoms?

This question leads me to another and more profound concern that may not be captured by the usual way we define and measure human development. The traditional focus on income, health, and education has been useful and remains important, but it is not sufficient. If we are to take Amartya Sen’s capability approach seriously, we must recognize that development is not about accumulating resources but about expanding people’s real freedoms, i.e., their ability to lead lives they have reason to value.

Resources like income, health, and schooling only matter to the extent that individuals have the autonomy and empowerment to turn them into valued outcomes. Freedom to choose, to aspire, to express, and to participate are not luxuries. They are essential elements of human development. Any restriction on these freedoms, whether through poverty, social norms, political repression, or digital manipulation, undermines the expansion of human capabilities.

What risks do digital platforms pose to autonomy, voice, and public discourse?

Unfortunately, AI and digitalization raise new and serious challenges in this regard. They have enabled the rise of a digital economy dominated by a handful of powerful global actors controlling social media platforms. The owners of these platforms can incorporate their own political preferences into the algorithms that control the flow of information.  In the Arab context, the recent war in Gaza has clearly demonstrated how tech giants can effectively suppress certain narratives, either by deplatforming users or by rendering their content practically invisible. In this context, the cultural biases of digital capital owners shape what people see, believe, and discuss, with significant implications for public discourse and democratic participation.

Why are these challenges heightened in the Arab region?

The problem is particularly acute in the Arab region, where many citizens already live in highly controlled media environments. Being “at odds” with the worldview of Silicon Valley adds to this structural vulnerability. When global digital platforms filter and distort voices from Arab countries, whether intentionally or through indifference, they reduce the space for contestation and critique. This erosion of voice and agency represents a significant issue in human development. The ability to speak, to dissent, and to tell one’s story are freedoms that matter.

What has changed about surveillance, and why does it matter?

Concerns about surveillance and control are not a new phenomenon. Back in my student days, some decades ago, a few of my comrades were convinced and worried that the national police had us under surveillance. I used to laugh and say, “Sure, they probably want to spy on us, but they have a budget constraint like any government agency. Do you really think you are that important?” It was a way to tease their paranoia while poking fun at our own sense of self-importance.

Today, that joke falls flat. The economics of surveillance have undergone a fundamental shift. With the amount of information people now share voluntarily on social networks and the power of AI to process and cross-reference data, it is not only possible but also cheap for employers, governments, or private actors to monitor large populations. We are entering an era where surveillance is ambient and automatic. This shift changes the balance of power between individuals and institutions in ways we are only beginning to understand.

How does pervasive surveillance erode creativity, autonomy, and development?

Pervasive surveillance poses fundamental threats to human freedom and ultimately other dimensions of human development. When people know their ideas are being monitored, they begin to self-censor, tempering their expressions, avoiding certain associations, and retreating from activities that might invite scrutiny. This self-censorship gradually erodes individual autonomy itself and the spirit that drives creativity and growth. For all these reasons, individual freedom must be protected against this erosion by robust national and international regulatory frameworks.

How should social protection systems adapt to accommodate all these changes, and what evidence and data do we need?

This question leads us to the central role of social protection policies in countering many of the above-described repercussions. Protecting people’s livelihoods and welfare, as part of fulfilling a basic human right, irrespective of identity or other factors, minimizes the economic impact of labor market shifts, or at least delays it, allowing people more time to cope and adjust. Enjoying income security and unconditional access to social services also makes people less susceptible to the social and political effects of digitalization, including surveillance and censorship, as they feel economically safe and empowered.

If we aim to develop social protection systems that adapt to the realities of digital technology and AI, these policies should be evidence-based. They must be monitored, evaluated, and adjusted based on actual outcomes, not assumptions. In the Arab region, we often lack the data needed to do this properly. Household surveys are far too rare, and when they do occur, they are not conducted regularly enough to track meaningful change.

Why is regular, accessible data essential for equitable policy?

This represents a missed opportunity. Any serious attempt to build resilient, equitable, and forward-looking social protection systems must include a commitment to collecting better data regularly, not just on income and labor market status but also on education, health, freedom, and empowerment. Data from household surveys are essential, as they allow us to assess both the overall average in each dimension of human development and the level of socioeconomic inequalities within those dimensions. Any serious measure of human development should take into account not only average achievements but also the distribution of those achievements across different social and economic groups, thus capturing the disproportionate impact on the most vulnerable.
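The point above, that an average can mask very different distributions, can be illustrated with a minimal sketch. The snippet below computes the mean and the Gini coefficient (a standard distribution-sensitive inequality measure, not one prescribed by the interviewee) for two invented household income samples; all figures are hypothetical, chosen only to show that equal averages can hide unequal distributions.

```python
def gini(values):
    """Gini coefficient: 0 means perfect equality; values near 1 mean high inequality."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # Rank-weighted cumulative-sum formulation of the Gini index.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Two hypothetical samples with the same mean but different distributions.
equal_sample = [1000, 1000, 1000, 1000]
skewed_sample = [100, 200, 300, 3400]

for sample in (equal_sample, skewed_sample):
    mean = sum(sample) / len(sample)
    print(f"mean={mean:.0f}, gini={gini(sample):.3f}")
```

Both samples report a mean of 1000, but the Gini coefficient separates them (0.000 versus 0.625), which is exactly the kind of information a purely average-based indicator discards.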

How can qualitative and quantitative research be combined effectively?

Information about freedom and empowerment is crucial for fully capturing the potential impact of AI and digitalization on social protection and human development. This information would likely take the form of ordinal variables, such as responses to Likert-scale questions. However, these questions should not be designed arbitrarily or drawn from the imagination of statisticians or quantitative economists. Instead, they must be grounded in in-depth qualitative research, including key informant interviews and focus group discussions with all social groups. Just as important, once collected, these survey data should be made widely available to quantitative social science researchers, whose analyses can generate valuable insights to inform more effective and equitable policymaking. Only with this information can we begin to understand whether our policies are enhancing or constraining human development and for whom.

What guiding principle should steer social policy in the AI era?

This question takes me back to my starting point: the objective should be to protect and promote human development for all. This requires us to be clear about what is at stake as we enter the AI era: not just jobs and growth, but dignity, autonomy, freedom, and the possibility of a meaningful life.

What kind of policymaking mindset is needed moving forward?

To address the challenges of AI and digitalization effectively, we need social policies that are responsive and inclusive. Policies should not be shaped by the buzzwords of international organizations and colonial or mainstream agencies. We need a genuine commitment to fairness and freedom. This implies that policy evaluation should be grounded in humility and a recognition of the limits of our knowledge. This means that we remain open to constant re-evaluation of the performance of these policies. In the Arab context, this will require not only policy and institutional reform, but also a renewed investment in knowledge: in listening to people, gathering more frequent data and learning from it, and being willing to ask, again and again, whether our development efforts truly expand human capabilities or merely serve the interests of political and economic elites. The question is not whether we can adapt to AI through social protection policies, but whether we can ensure that both AI and social protection serve the cause of inclusive human development.

The views represented in this paper are those of the author(s) and do not necessarily reflect the views of the Arab Reform Initiative, its staff, or its board.






Ally CIO: Pace of tech change ‘weighs on me’



Since the July rollout of Ally’s proprietary artificial intelligence platform, the breadth of use is what’s surprised Sathish Muthukrishnan, the bank’s chief information, data and digital officer.

“We have people in the sales force that are using it, people in the operations side, customer care associates using it; obviously, folks in the technology side; marketing; our risk control partners, risk compliance; audit, privacy – they’re all big users of it,” said Muthukrishnan, who’s been in his role at the digital bank since 2019.

The Detroit-based lender gave its 10,000 employees access to Ally.ai two months ago, after testing it with a smaller group for more than a year. About 400,000 prompts have been submitted to the platform, and adoption is at about 50%. 

The bank wants employees to use the platform, which was built in-house, to handle tasks such as drafting emails and proofreading copy, to free up their time for other projects. 

When asked how AI might affect the company’s headcount, Muthukrishnan said it’s set to “have a meaningful impact on the business outcomes.”

Ally has “ambitious” growth plans, so for the company to generate more revenue while maintaining current spending levels, “technology and AI become critical,” Muthukrishnan said in a recent interview with Banking Dive. “That’s both driving efficiency and effectiveness. It’s not just efficiency of cost; it’s efficiency of speed.” 

Editor’s note: This interview has been edited for clarity and brevity.

BANKING DIVE: Where does Ally go from here with AI?

SATHISH MUTHUKRISHNAN: Since the launch, there is tremendous demand and a lot of use cases coming our way. Now, let’s turn the tables and see how we can identify use cases that are harder to solve on the business side, and how do we bring that to the forefront? 

With the pace at which technology is evolving, something that seems impossible, something that seems super hard to solve right now, we will be able to solve in a few months. So we want to tackle those hard problems now, and we want to do it collectively across the organization. 

Our CEO has asked me to come and educate the entire executive committee on how we are advancing in AI, and we’re going to call it an executive committee AI day, and it’s just purely to set aside dedicated time, bring us all together, fully focused on AI. These are all busy people running big organizations, so there’s a little bit of pressure on making sure that I use their time efficiently. But we’re going to talk about what are the things that we can collectively solve for the company. We have thoughtfully rolled out AI, and there is interest across the company, but we need to bring the company along.

How has Ally’s AI governance approach evolved since implementation?

It might sound like a cliche, but we focus on doing simple things savagely well. Things that are simple – having risk controls, having data protection, having access controls – can be cast aside because you see the shinier object. 

For us, to have an AI working group, then having an AI governance steering council, then having an enterprise-level committee, then the board – having this many levels of governance to ensure that AI is scaled safely and responsibly is super critical. We did the hard work ahead of time, we have exercised this governance muscle extremely well, and people have gotten used to it.

How do you see the role of AI agents evolving at Ally in the coming years?

Agentic AI allows you to look at the complicated paths, complicated processes, and allows you to digitize that. It’s still in an experimental stage for us. 

For example, all applications in our tech ecosystem have observability. If there is an issue, we want to be the first to find out, before the customer finds out, or our business partner finds out. So a ton of alerts come our way. If I have to process those alerts, but not increase my headcount as I’m increasing the number of customers, I’m looking at agentic AI to do that. The usage of digital has doubled in the last four years by our customers, but the cost of serving them has gone down. That’s because of the introduction of new technology. 

If you want somebody to reset your password, that could be agentic AI that does that internally. Those are some of the experiments that we are doing; nothing that is in production or at scale yet.


