Tools & Platforms
AI Workplace Insights: Attitudes and Optimism

Some useful insights here on attitudes towards AI in the workplace. Are millions at risk of unemployment through replacement? Perhaps not: AI may be less a replacement than a toolkit that streamlines admin without displacing a role's core function.
UK workers are increasingly aware of how artificial intelligence (AI) may reshape their jobs, and show greater optimism than European peers, according to new findings from ADP Research. The research reveals that while 88% of UK respondents have formed views on AI’s impact, 14% strongly believe it will improve their work – placing UK workers above the European average of 11% and ahead of major economies including Germany and France.
The findings come from a global study surveying 38,000 working adults across six continents, including 1,113 in the UK, to build a comprehensive picture of how they feel about AI and its potential impact on their jobs. The ‘People at Work 2025’ report series, which provides insights on the labour market from the perspective of workers, explored respondents’ views on AI, their familiarity with and openness to it, and their concerns about job displacement related to the technology.
The report highlights a nuanced picture of how workers are engaging with AI – balancing informed awareness with selective optimism. While UK workers demonstrate above-average confidence in AI’s benefits compared with European peers, the research reveals opportunities for businesses to build on this foundation.
“UK workers are demonstrating a measured approach to AI that positions businesses for success,” said Jeff Phipps, General Manager for the UK and Northern Europe. “Many understand how AI could reshape their roles, but they are realistic about the challenges. Combined with low replacement fears, this creates an ideal foundation for AI adoption. New technologies like generative AI are meant to give teams enhanced capabilities to save time, simplify their daily tasks, and free them from time-consuming work, but they are not intended to replace them.”
He continued: “The opportunity for businesses is clear. UK employers who acknowledge this emotional complexity and invest in upskilling their people will be best placed to unlock AI’s full potential and build a resilient, future-ready workforce.”
With just 12% of UK workers strongly agreeing they have “no idea” how AI will change their jobs, the vast majority have already processed what AI might mean for their roles.
Key UK findings
- Low resistance, high awareness: With only 9% fearing job replacement – well below global anxiety levels – UK workers show openness to AI transformation when properly supported.
- Sector leadership emerging: UK professionals in tech, finance, and IT are leading the way in AI optimism, with nearly one in five in technology services expressing a positive outlook, followed by 18% in finance and insurance and 17% in IT, suggesting early adoption success stories in key UK industries.
- Knowledge workers ready to lead: 24% of knowledge workers, such as programmers, academics and engineers, globally see AI benefits, with UK knowledge workers well-positioned to capitalise on AI opportunities through training and support initiatives.
Key global findings from People at Work 2025: Artificial Intelligence
- Mixed feelings dominate: While 17% of workers strongly agree that AI will positively influence their job in the next year, and a further 33% agree, overall feelings are mixed. Only 1 in 10 workers strongly agree that they feel scared AI will replace their job.
- Hope and concerns in tandem: Interestingly, 27% of workers who believe AI will positively impact their jobs also fear that the technology might replace them. For example, markets with the most optimistic outlook on AI, such as Egypt or India, also have the highest share of workers fearing replacement. This indicates that AI evokes both excitement about its potential and concern about its ultimate effects.
- The unknown reinforces anxiety: A significant portion of people (44% combined agree/strongly agree) have no idea how AI will change their jobs. This uncertainty can contribute to anxiety, with some markets showing a large share of workers who fear replacement also having a large share who are unsure about AI’s impact.
Differences by Industry and Work Type
- Early adopters are more optimistic: People working in technology services, finance, insurance, and information sectors are more likely to have a positive outlook on AI’s impact but also express higher concerns about being replaced. These sectors prize efficiency and competitive advantage, both of which AI can enhance.
- Human-centric sectors show caution: Industries heavily reliant on human interactions, such as healthcare and social assistance, express greater concerns about AI’s impact.
- Age: Younger workers in the UK (18-26 and 27-39) are more likely to show both optimism and concern about AI, considering its long-term effects on their careers. Late-career workers (55+) tend to show more indifference, believing AI will have little impact on their remaining working years.
- Region: UK workers are the most receptive in Europe, with 14% believing AI will positively impact their jobs against a continental average of 11%. Globally, the Middle East/Africa region shows the highest share of workers strongly believing AI will positively influence their job (27%), while Japan and Sweden show the lowest (4% and 6% respectively), compared with 13% in North America, 16% in APAC, and 19% in LatAm.
- Stress and job seeking: Workers who fear being replaced by technology are twice as likely to report experiencing high stress at work. Additionally, over 30% of people who strongly believe AI could replace them are actively seeking new employment, compared to 16% of those less concerned.
The full People at Work report is available for free download here
For more insights and analysis on the world of work visit www.ADPresearch.com
Tools & Platforms
How International is building AI into a 200-year-old culture of transformation

In the second circle are our business process areas such as finance, commercial operations and aftersales, procurement, R&D, production and logistics, and more. These are the mature domains where AI will provide the most business value. Data liquidity is the third circle. What available data sets do we have now that don’t need transformation? The overlapping area is where we have immediate value creation opportunities.
What’s been the early output of your approach to baking AI into your transformation?
In March, we were initiating AI within certain functions, and today we have more than 50 cross-domain, enterprise-wide business ideas in backlog, and we’re kicking off three beta use cases now. By clearly weighing business value alongside our ability to act and execute, rather than relying on generic decision criteria, we’re able to select lead management for our commercial business, spend analytics for procurement, and dealer-network customer service for aftersales. Using this lens, we also brought in tech, product, operations, and external partners into a co-innovation lab with pods for each business area. The goal of the lab is to develop an agile use-case delivery model, which we’ll fine-tune and scale beginning in 2026.
Tools & Platforms
Doomprompting: Endless tinkering with AI outputs can cripple IT results

“Employees who don’t really understand the goal they’re after will spin in circles not knowing when they should just call it done or step away,” Farmer says. “The enemy of good is perfect, and LLMs make us feel like if we just tweak that last prompt a little bit, we’ll get there.”
Agents of doom
Observers see two versions of doomprompting. The first is an individual’s interactions with an LLM or another AI tool. This scenario can play out in a nonwork situation, but it can also happen during office hours, with an employee repeatedly tweaking the outputs of, for example, an AI-generated email, line of code, or research query.
The second type of doomprompting is emerging as organizations adopt AI agents, says Jayesh Govindarajan, executive vice president of AI at Salesforce. In this scenario, an IT team continuously tweaks an agent to find minor improvements in its output.
Tools & Platforms
A Human Development Conversation with Paul Makdissi – Arab Reform Initiative

Paul Makdissi, Professor in the Department of Economics at the University of Ottawa, has long focused on poverty, inequality, and human development. While not a technology specialist, he has reflected deeply on how artificial intelligence (AI) and digitalization interact with labor markets, freedoms, and social protection systems. In this interview, he shares his perspective on the opportunities and risks these transformations pose for the Arab region, and on how social protection systems can adapt to ensure inclusive human development.
Why approach AI and digitalization from a human development lens?
While it is important to assess the impact of policies on indicators like poverty and inequality, it is equally important to think beyond policy blueprints and expert prescriptions and instead offer a space for reflection and questions, shaped by our research, reading, and meta-analysis, with human development as the central goal that should guide public policy, especially in times of rapid change. Reflecting on how AI and digitalization affect human development, the labor market, and fundamental freedoms, with special attention to the implications for social protection systems, particularly in the Arab region, is key to a genuine understanding of the roots of current transformations and the dynamics they entail.
What is social protection meant to do in the AI and digitalization era?
Social protection policies exist to protect citizens and residents from shocks to their human development outcomes, i.e., shocks that threaten their ability to flourish and lead the lives they have reason to value. These shocks can come from illness, job loss, political unrest, or major technological transformations. AI and digitalization, while enhancing production capacities, are also a source of profound disruption. If we are serious about human development, understood not as GDP growth but as the expansion of people’s real freedom to live the lives they have reason to value, then we must think carefully about how the technological revolution interacts with that objective.
How will AI reshape labor markets, and what does “creative destruction,” as you call it, mean for people?
AI is not just another innovation. It is part of a broader technological shift, similar to previous industrial revolutions, that is likely to fundamentally reshape labor markets and societies. Schumpeterian models of economic growth have long taught us that innovation is a process of “creative destruction.” New technologies emerge, rendering some production processes and some labor skills obsolete. Resources are reallocated. The overall productive capacity of the economy improves, but not without casualties along the way. These models tend to present this dynamic as clean and inevitable, almost sanitized: a necessary adjustment on the path to higher growth.
Behind this sanitized macroeconomic narrative hides the experience of this “destruction” at the individual level. When entire occupations disappear, when once-valued skills lose their market relevance, people experience the collapse of their careers and lose their livelihoods. These are shocks to human development. Social protection systems must aim to protect individuals from them, not just with temporary income support but with substantive pathways to reintegrate into the transformed labor market. In addition to the social justice aspect of protecting individuals, a well-designed system that offers retraining, upskilling, and reskilling can also promote economic growth by reducing the social cost of structural change and, in turn, lowering resistance to technological change.
How could AI and digitalization deepen inequality and concentrate power?
The substitution of labor by machines and algorithms also raises additional concerns about income inequality. In a market economy, an individual’s claim on their share of national income depends mainly on the production factors they own, including their own labor. Many people only own their labor as a factor of production. When AI and digital technologies reduce labor demand, the benefits of growth become increasingly concentrated in the hands of those who own the machines, codes, and data. In this context, millions of workers may lose their source of income while a small group reaps the benefit of this technological change. This will lead to an increase in income inequality that needs to be addressed if we want to avoid the concentration of political influence and potentially social unrest. For this reason, AI is not only an economic challenge but also a political one.
Which jobs are most vulnerable to automation – what does teaching reveal?
This potential impact on the labor market leads us to reflect on who could be most impacted by this technological revolution. AI tools have fundamentally altered how we approach teaching and assessment, especially in quantitative courses. Traditional take-home assignments have become obsolete at the undergraduate level. When we test assignments using AI, the technology consistently produces near-perfect responses that would earn top marks. This means that every student now has access to what appears to be A+ work, regardless of their actual understanding of the material. The undergraduate take-home assessment landscape has essentially been flattened.
However, graduate-level take-home assignments tell a different story. Assignments generated with AI clearly reveal the students’ limitations. When graduate students rely too heavily on AI, their work reads like that of someone who has not fully grasped the complexity of the subject matter. The gap between surface-level competence and true understanding becomes immediately apparent. The teaching experience largely mirrors the reality of current labor markets: not only low-skilled workers are prone to job losses and displacement due to automation, but also, within certain limits, many high-skilled workers, who need upskilling or reskilling support that they often do not receive.
What distinguishes tasks that AI can automate from those requiring judgment?
Looking at teaching again for a simple illustration, one important difference lies in the nature of the take-home assignments themselves. At the undergraduate level, the assignments we give tend to involve specific, well-defined problems with clear instructions on what to compute and estimate. This structure makes them especially vulnerable to automation. Graduate-level assignments, by contrast, often require students to define the problem themselves, select appropriate methods, and justify their choices. This requires judgment, interpretation, and critical thinking. These are the tasks for which the limitations of AI become evident – tasks that demand more original input and a more complex understanding from the human intellect.
What are the implications for youth employment in the Arab region?
All that has been said suggests that jobs most vulnerable to automation are likely those that rely on routine cognitive tasks, often the domain of workers with intermediate levels of education. From my perspective, in many fields, AI plays the role of an assistant with good programming and quantitative skills. This fact can impact the future of work for millions of young people in the Arab region, many of whom are already struggling to find meaningful employment. While technological innovation has always destroyed some jobs while creating others, the process is rarely painless, and the benefits are not automatically shared. Our educational institutions need to prepare students not only for what the labor market looks like today, but for what it might become tomorrow. Education and training programs should focus on developing skills that complement AI rather than those that are substitutable by it, emphasizing creative thinking and complex problem-solving.
Beyond income, how does AI challenge human development and freedoms?
This question leads me to another and more profound concern that may not be captured by the usual way we define and measure human development. The traditional focus on income, health, and education has been useful and remains important, but it is not sufficient. If we are to take Amartya Sen’s capability approach seriously, we must recognize that development is not about accumulating resources but about expanding people’s real freedoms, i.e., their ability to lead lives they have reason to value.
Resources like income, health, and schooling only matter to the extent that individuals have the autonomy and empowerment to turn them into valued outcomes. Freedom to choose, to aspire, to express, and to participate are not luxuries. They are essential elements of human development. Any restriction on these freedoms, whether through poverty, social norms, political repression, or digital manipulation, undermines the expansion of human capabilities.
What risks do digital platforms pose to autonomy, voice, and public discourse?
Unfortunately, AI and digitalization raise new and serious challenges in this regard. They have enabled the rise of a digital economy dominated by a handful of powerful global actors controlling social media platforms. The owners of these platforms can incorporate their own political preferences into the algorithms that control the flow of information. In the Arab context, the recent war in Gaza has clearly demonstrated how tech giants can effectively suppress certain narratives, either by deplatforming users or by rendering their content practically invisible. In this context, the cultural biases of digital capital owners shape what people see, believe, and discuss, with significant implications for public discourse and democratic participation.
Why are these challenges heightened in the Arab region?
The problem is particularly acute in the Arab region, where many citizens already live in highly controlled media environments. Being “at odds” with the worldview of Silicon Valley adds to this structural vulnerability. When global digital platforms filter and distort voices from Arab countries, whether intentionally or through indifference, they reduce the space for contestation and critique. This erosion of voice and agency represents a significant issue in human development. The ability to speak, to dissent, and to tell one’s story are freedoms that matter.
What has changed about surveillance, and why does it matter?
Concerns about surveillance and control are not a new phenomenon. Back in my student days, some decades ago, a few of my comrades were convinced and worried that the national police had us under surveillance. I used to laugh and say, “Sure, they probably want to spy on us, but they have a budget constraint like any government agency. Do you really think you are that important?” It was a way to tease their paranoia while poking fun at our own sense of self-importance.
Today, that joke falls flat. The economics of surveillance have undergone a fundamental shift. With the amount of information people now share voluntarily on social networks and the power of AI to process and cross-reference data, it is not only possible but also cheap for employers, governments, or private actors to monitor large populations. We are entering an era where surveillance is ambient and automatic. This shift changes the balance of power between individuals and institutions in ways we are only beginning to understand.
How does pervasive surveillance erode creativity, autonomy, and development?
Pervasive surveillance poses fundamental threats to human freedom and, ultimately, to other dimensions of human development. When people know their ideas are being monitored, they begin to self-censor, tempering their expressions, avoiding certain associations, and retreating from activities that might invite scrutiny. This self-censorship gradually erodes individual autonomy and the spirit that drives creativity and growth. For all these reasons, individual freedom must be protected against this potential erosion by robust national and international regulatory frameworks.
How should social protection systems adapt to accommodate all these changes, and what evidence and data do we need?
This question leads us to the central role of social protection policies in countering many of the repercussions described above. Protecting people’s livelihoods and welfare, as part of fulfilling a basic human right irrespective of identity or other factors, minimizes the economic impact of labor market shifts, or at least delays it, allowing people more time to cope and adjust. Enjoying income security and unconditional access to social services also makes people less susceptible to the social and political effects of digitalization, including surveillance and censorship, because they feel economically safe and empowered.
If we aim to develop social protection systems that adapt to the realities of digital technology and AI, these policies should be evidence-based. They must be monitored, evaluated, and adjusted based on actual outcomes, not assumptions. In the Arab region, we often lack the data needed to do this properly. Household surveys are far too rare, and when they do occur, they are not conducted regularly enough to track meaningful change.
Why is regular, accessible data essential for equitable policy?
This represents a missed opportunity. Any serious attempt to build resilient, equitable, and forward-looking social protection systems must include a commitment to collecting better data regularly, not just on income and labor market status but also on education, health, freedom, and empowerment. Data from household surveys are essential, as they allow us to assess both the overall average in each dimension of human development and the level of socioeconomic inequalities within those dimensions. Any serious measure of human development should take into account not only average achievements but also the distribution of those achievements across different social and economic groups, thus capturing the disproportionate impact on the most vulnerable.
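The point above – that a serious measure of human development must account for the distribution of achievements, not just the average – can be sketched with an Atkinson-style inequality adjustment, the approach behind the UNDP’s Inequality-adjusted HDI. This is a minimal illustration, not drawn from the interview, and the achievement values below are hypothetical.

```python
# Illustrative sketch: an inequality-adjusted average in the style of the
# Atkinson index (inequality-aversion parameter = 1). Achievement values
# are hypothetical scores on a 0-1 scale (e.g., a health or schooling index).
from math import prod

def atkinson_index(values):
    """Atkinson inequality index: 1 minus the ratio of the geometric mean
    to the arithmetic mean. Equals 0 for a perfectly equal distribution
    and grows as the distribution becomes more unequal."""
    n = len(values)
    mean = sum(values) / n
    geometric = prod(values) ** (1.0 / n)
    return 1.0 - geometric / mean

def adjusted_mean(values):
    """Average achievement discounted by observed inequality."""
    mean = sum(values) / len(values)
    return mean * (1.0 - atkinson_index(values))

# Two populations with the same average achievement (0.6):
equal = [0.6, 0.6, 0.6]
unequal = [0.9, 0.6, 0.3]
print(adjusted_mean(equal))    # 0.6: no inequality penalty
print(adjusted_mean(unequal))  # below 0.6: unequal distribution is marked down
```

Two groups with identical averages thus receive different scores once inequality within each dimension is penalised, which is exactly why household-survey microdata, rather than aggregates alone, are needed.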
How can qualitative and quantitative research be combined effectively?
Information about freedom and empowerment is crucial for fully capturing the potential impact of AI and digitalization on social protection and human development. This information would likely take the form of ordinal variables, such as responses to Likert-scale questions. However, these questions should not be designed arbitrarily or drawn from the imagination of statisticians or quantitative economists. Instead, they must be grounded in in-depth qualitative research, including key informant interviews and focus group discussions with all social groups. Just as important, once collected, these survey data should be made widely available to quantitative social science researchers, whose analyses can generate valuable insights to inform more effective and equitable policymaking. Only with this information can we begin to understand whether our policies are enhancing or constraining human development and for whom.
What guiding principle should steer social policy in the AI era?
This question takes me back to my starting point: the objective should be to protect and promote human development for all. This requires us to be clear about what is at stake as we enter the AI era: not just jobs and growth, but dignity, autonomy, freedom, and the possibility of a meaningful life.
What kind of policymaking mindset is needed moving forward?
To address the challenges of AI and digitalization effectively, we need social policies that are responsive and inclusive. Policies should not be shaped by the buzzwords of international organizations and colonial or mainstream agencies. We need a genuine commitment to fairness and freedom. This implies that policy evaluation should be grounded in humility and a recognition of the limits of our knowledge. This means that we remain open to constant re-evaluation of the performance of these policies. In the Arab context, this will require not only policy and institutional reform, but also a renewed investment in knowledge: in listening to people, gathering more frequent data and learning from it, and being willing to ask, again and again, whether our development efforts truly expand human capabilities or merely serve the interests of political and economic elites. The question is not whether we can adapt to AI through social protection policies, but whether we can ensure that both AI and social protection serve the cause of inclusive human development.
The views represented in this paper are those of the author(s) and do not necessarily reflect the views of the Arab Reform Initiative, its staff, or its board.