Tools & Platforms
Promoting The Export of the American AI Technology Stack – The White House

By the authority vested in me as President by the Constitution and the laws of the United States of America, including section 301 of title 3, United States Code, it is hereby ordered:
Section 1. Purpose. Artificial intelligence (AI) is a foundational technology that will define the future of economic growth, national security, and global competitiveness for decades to come. The United States must not only lead in developing general-purpose and frontier AI capabilities, but also ensure that American AI technologies, standards, and governance models are adopted worldwide to strengthen relationships with our allies and secure our continued technological dominance. This order establishes a coordinated national effort to support the American AI industry by promoting the export of full-stack American AI technology packages.
Sec. 2. Policy. It is the policy of the United States to preserve and extend American leadership in AI and decrease international dependence on AI technologies developed by our adversaries by supporting the global deployment of United States-origin AI technologies.
Sec. 3. Establishment of the American AI Exports Program. (a) Within 90 days of the date of this order, the Secretary of Commerce shall, in consultation with the Secretary of State and the Director of the Office of Science and Technology Policy (OSTP), establish and implement the American AI Exports Program (Program) to support the development and deployment of United States full-stack AI export packages.
(b) The Secretary of Commerce shall issue a public call for proposals from industry-led consortia for inclusion in the Program. The public call shall require that each proposal must:
(i) include a full-stack AI technology package, which encompasses:
(A) AI-optimized computer hardware (e.g., chips, servers, and accelerators), data center storage, cloud services, and networking, as well as a description of whether and to what extent such items are manufactured in the United States;
(B) data pipelines and labeling systems;
(C) AI models and systems;
(D) measures to ensure the security and cybersecurity of AI models and systems; and
(E) AI applications for specific use cases (e.g., software engineering, education, healthcare, agriculture, or transportation);
(ii) identify specific target countries or regional blocs for export engagement;
(iii) describe a business and operational model to explain, at a high level, which entities will build, own, and operate data centers and associated infrastructure;
(iv) detail requested Federal incentives and support mechanisms; and
(v) comply with all relevant United States export control regimes, outbound investment regulations, and end-user policies, including chapter 58 of title 50, United States Code, and relevant guidance from the Bureau of Industry and Security within the Department of Commerce.
(c) The Department of Commerce shall require proposals to be submitted no later than 90 days after the public call for proposals is issued, and shall consider proposals on a rolling basis for inclusion in the Program.
(d) The Secretary of Commerce shall, in consultation with the Secretary of State, the Secretary of Defense, the Secretary of Energy, and the Director of OSTP, evaluate submitted proposals for inclusion under the Program. Proposals selected by the Secretary of Commerce, in consultation with the Secretary of State, the Secretary of Defense, the Secretary of Energy, and the Director of OSTP, will be designated as priority AI export packages and will be supported through priority access to the tools identified in section 4 of this order, as consistent with applicable law.
Sec. 4. Mobilization of Federal Financing Tools. (a) The Economic Diplomacy Action Group (EDAG), established in the Presidential Memorandum of June 21, 2024, chaired by the Secretary of State, in consultation with the Secretary of Commerce and the United States Trade Representative, and as described in section 708 of the Championing American Business Through Diplomacy Act of 2019 (Title VII of Division J of Public Law 116-94) (CABDA), shall coordinate mobilization of Federal financing tools in support of priority AI export packages.
(b) I delegate to the Administrator of the Small Business Administration and the Director of OSTP the authority under section 708(c)(3) of CABDA to appoint senior officials from their respective executive departments and agencies to serve as members of the EDAG.
(c) The Secretary of State, in consultation with the EDAG, shall be responsible for:
(i) developing and executing a unified Federal Government strategy to promote the export of American AI technologies and standards;
(ii) aligning technical, financial, and diplomatic resources to accelerate deployment of priority AI export packages under the Program;
(iii) coordinating United States participation in multilateral initiatives and country-specific partnerships for AI deployment and export promotion;
(iv) supporting partner countries in fostering pro‑innovation regulatory, data, and infrastructure environments conducive to the deployment of American AI systems;
(v) analyzing market access, including technical barriers to trade and regulatory measures that may impede the competitiveness of United States offerings; and
(vi) coordinating with the Small Business Administration’s Office of Investment and Innovation to facilitate, to the extent permitted under applicable law, investment in United States small businesses engaged in the development of American AI technologies and the manufacture of AI infrastructure, hardware, and systems.
(d) Members of the EDAG shall deploy, to the maximum extent permitted by law, available Federal tools to support the priority export packages selected for participation in the Program, including direct loans and loan guarantees (12 U.S.C. 635); equity investments, co-financing, political risk insurance, and credit guarantees (22 U.S.C. 9621); and technical assistance and feasibility studies (22 U.S.C. 2421(b)).
Sec. 5. General Provisions. (a) Nothing in this order shall be construed to impair or otherwise affect:
(i) the authority granted by law to an executive department or agency, or the head thereof; or
(ii) the functions of the Director of the Office of Management and Budget relating to budgetary, administrative, or legislative proposals.
(b) This order shall be implemented consistent with applicable law and subject to the availability of appropriations.
(c) This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.
(d) The costs for publication of this order shall be borne by the Department of Commerce.
DONALD J. TRUMP
THE WHITE HOUSE,
July 23, 2025.
Implementing Next-Generation AI for Impact

AI in Finance 2025 will take place at the iconic Kimpton Fitzroy London on Wednesday 26 November and will provide a roadmap for 150+ senior technology leaders from financial services to move beyond the well-established use cases and implement next-generation AI at scale.
Organised by the team behind London Tech Week, the event features an impressive lineup of speakers who are directly responsible for driving their organisation’s AI strategy.
Headline speakers include:
- Dara Sosulski, Managing Director & Head of Artificial Intelligence and Model Management, HSBC
- Christoph Rabenseifner, Chief Strategy Officer for Technology, Data and Innovation, Deutsche Bank
- Kirsten Mycroft, Chief Privacy & Responsible AI Officer, BNY
- Morgane Peng, Managing Director, Head of Product Design & AI Lead, Societe Generale
- Nitin Kulkarni, CIO for Data Platforms, Data Engineering, and AI Centre of Excellence, Nationwide Building Society
- Elena Strbac, Managing Director, Global Head of Data Science & Innovation, Standard Chartered
- Neil Boston, Co-Group Head of Emerging Technology, UBS
These speakers (and others) will be discussing the most business-critical AI topics, including implementing and scaling impactful agentic AI, creating hyper-personalised customer experiences, and driving enterprise-wide adoption.
Attendees can expect to meet senior AI, Technology, Data & Analytics leaders from leading financial services institutions across the UK and Europe – including HSBC, J.P. Morgan, Citi, Starling Bank, Barclays, and others.
Key Highlights:
- Hear from tech leaders driving their company’s most advanced AI projects; specifically, the success stories and key lessons learned from their AI journeys to date
- Discover the latest cutting-edge AI solutions that can help address your unique business needs
- Build connections with fellow AI leaders in financial services via our interactive-first setup (multiple focused breakouts, an app to facilitate onsite meetings, and onstage Q&A and live polling)
- Be exposed to fresh ideas from foreign banks and leaders in other highly regulated industries on how to tackle common AI challenges
AI in Finance 2025 promises to be a pivotal gathering for professionals across banking, fintech, and investment sectors, offering actionable strategies to harness AI for competitive advantage.
Registration: Discover the full speaker lineup, agenda & range of registration options via our brochure today!
Quantum, Blockchain, and Key Challenges

The Surge of AI in Financial Services
In the rapidly evolving world of financial technology, artificial intelligence is not just a tool but a transformative force reshaping how banks and insurers operate. According to a detailed report in the Financial Times, AI-driven innovations are accelerating decision-making processes, from risk assessment to customer personalization, with major players like JPMorgan Chase investing billions in machine learning algorithms that predict market shifts with unprecedented accuracy. This shift is driven by the need to handle vast data volumes in real time, where traditional methods fall short.
Beyond banking, AI’s integration with edge computing is enabling instant actions in remote operations, as highlighted in posts on X from tech analysts who note its role in reducing latency for fraud detection. For instance, systems that process transactions at the point of sale are cutting fraud losses by up to 30%, according to insights shared by industry observers on the platform, emphasizing how this tech duo is becoming indispensable for secure, efficient financial ecosystems.
Quantum Computing’s Disruptive Potential
The rise of quantum computing represents another seismic shift, promising to solve complex financial models that classical computers struggle with. A Forbes Council post from April 2025 details how quantum tech could optimize portfolio management by simulating countless scenarios in seconds, a capability that’s drawing heavy investment from firms like Goldman Sachs. This trend aligns with broader industry moves toward advanced analytics, where quantum’s power addresses longstanding challenges in encryption and optimization.
Meanwhile, Capgemini’s TechnoVision 2025 report underscores quantum’s synergy with AI in financial services, forecasting its widespread adoption by 2030. Insiders warn, however, of the cybersecurity risks, as quantum could crack current encryption standards, prompting a race to develop quantum-resistant protocols as discussed in recent web analyses from cybersecurity experts.
Tokenization and Blockchain Innovations
Tokenization of assets is emerging as a key innovation, turning illiquid holdings like real estate into tradable digital tokens on blockchain networks. The Financial Times article explores how this democratizes investment, allowing fractional ownership and faster settlements, with examples from startups partnering with traditional banks to tokenize bonds and equities. This not only boosts liquidity but also reduces intermediary costs, potentially saving the industry trillions annually.
X posts from investment advisors highlight tokenization’s role in decentralized finance, with trends pointing to its integration with renewable energy projects for sustainable funding. Plaid’s insights on fintech trends further elaborate that by 2025, tokenization will underpin new consumer tools, enabling seamless cross-border payments and micro-investments, though regulatory hurdles remain a focal point in ongoing discussions.
Sustainability and Ethical AI Challenges
Sustainability is weaving into tech trends, with AI optimizing energy use in data centers that power financial operations. A McKinsey report from 2025 identifies agentic AI—autonomous systems—as a game-changer for enterprise innovation, helping firms like Visa reduce carbon footprints through smarter resource allocation. Yet, ethical concerns loom large, as biases in AI models could exacerbate inequalities in lending practices.
Web sources, including Outlook Money’s overview of tech trends in finance, stress the importance of robust governance to mitigate these risks. Industry insiders on X echo this, calling for transparent AI frameworks to ensure fair outcomes, especially as telemedicine and mental health apps intersect with financial wellness tools.
Navigating Geopolitical and Talent Gaps
Geopolitical tensions are complicating supply chains for semiconductors critical to these technologies. Smart Sync Investment Advisory Services on X notes the fragility despite massive capital expenditures, with export controls potentially delaying AI hardware advancements. This underscores the need for diversified sourcing strategies in an era where chips power everything from quantum simulations to blockchain ledgers.
Talent shortages in areas like AI design and ethical hacking pose another barrier, as per BigID’s white paper on 2025 tech challenges. Firms are ramping up training programs, but the gap persists, threatening innovation pace. As the Financial Times piece concludes, overcoming these hurdles will define which players thrive in this high-stakes arena, blending cutting-edge tech with strategic foresight to forge resilient financial futures.
How to write an AI ethics policy for the workplace
If there is one common thread throughout recent research about AI at work, it’s that there is no definitive take on how people are using the technology — and how they feel about the imperative to do so.
Large language models can be used to draft policies, generative AI can be used for image creation, and machine learning can be used for predictive analytics, Ines Bahr, a senior Capterra analyst who specializes in HR industry trends, told HR Dive via email.
Still, there’s a lack of clarity around which tools should be used and when, because of the broad range of applications on the market, Bahr said. Organizations have implemented these tools, but “usage policies are often confusing to employees,” Bahr told HR Dive via email — which leads to the unsanctioned but not always malicious use of certain tech tools.
The result can be unethical or even allegedly illegal actions: AI use can create data privacy concerns, run afoul of state and local laws and give rise to claims of identity-based discrimination.
Compliance and culture go hand in hand
While AI ethics policies largely address compliance, culture can be an equally important component. If employers can explain the reasoning behind AI rules, “employees feel empowered by AI rather than threatened,” Bahr said.
“By guaranteeing human oversight and communicating that AI is a tool to assist workers, not replace them, a company creates an environment where employees not only use AI compliantly but also responsibly,” Bahr added.
Kevin Frechette, CEO of AI software company Fairmarkit, emphasized similar themes in his advice for HR professionals building an AI ethics policy.
The best policies answer two questions, he said: “How will AI help our teams do their best work, and how will we make sure it never erodes trust?”
“If you can’t answer how your AI will make someone’s day better, you’re probably not ready to write the policy,” Frechette said over email.
Many policy conversations, he said, are backward, prioritizing the technology instead of the workers themselves: “An AI ethics policy shouldn’t start with the model; it should start with the people it impacts.”
Consider industry-specific issues

[Image: A model of IBM Quantum at the inauguration of Europe’s first IBM Quantum Data Center, Oct. 1, 2024, in Ehningen, Germany. The center provides cloud-based quantum computing for companies, research institutions and government agencies. Photo: Thomas Niedermueller via Getty Images]
Industries involved in creating AI tools have additional layers to consider: Bahr pointed to Capterra research showing that software vulnerabilities were the top cause of data breaches in the U.S. last year.
“AI-generated code or vibe coding can present a security risk, especially if the AI model is trained on public code and inadvertently replicates existing vulnerabilities into new code,” Bahr explained.
An AI disclosure policy should address security risks, create internal review guidelines for AI-generated code, and provide training to promote secure coding practices, Bahr said.
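As a hypothetical illustration of the kind of issue such internal review guidelines might flag (the scenario and function names below are invented for this sketch, not drawn from Capterra's research): AI assistants trained on public code frequently reproduce SQL queries built by string concatenation, a classic injection vulnerability, where the parameterized form a reviewer would require keeps user input out of the SQL text.

```python
import sqlite3

# Vulnerable pattern an AI assistant may replicate from public code:
# building SQL by string formatting lets crafted input alter the query
# (SQL injection).
def find_user_unsafe(conn, username):
    query = f"SELECT id FROM users WHERE name = '{username}'"  # do not do this
    return conn.execute(query).fetchall()

# Safer pattern a secure-coding guideline would require: a parameterized
# query treats the input strictly as data, never as SQL.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    # A crafted input returns every row from the unsafe query
    # but no rows from the parameterized one.
    payload = "x' OR '1'='1"
    print(len(find_user_unsafe(conn, payload)))  # 1 (row leaked)
    print(len(find_user_safe(conn, payload)))    # 0
```

A review checklist that simply bans string-built queries in favor of placeholders is the sort of concrete, trainable rule such a policy can codify.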
For companies involved in content creation, an AI disclosure could be required and should address how workers are responsible for the final product or outcome, Bahr said.
“This policy not only signals to the general public that human input has been involved in published content, but also establishes responsibilities for employees to comply with necessary disclosures,” Bahr said.
“Beyond fact-checking, the policy needs to address the use of intellectual property in public AI tools,” she said. “For example, an entertainment company should be clear about using an actor’s voice to create new lines of dialogue without their permission.”
Likewise, a software sales representative should be able to explain to clients how AI is used in the company’s products. Customer data use, for example, can also be part of a disclosure policy.
The policy’s in place. What now?
Because AI technology is constantly evolving, employers must remain flexible, experts say.
“A static AI policy will be outdated before the ink dries,” according to Frechette of Fairmarkit. “Treat it like a living playbook that evolves with the tech, the regulations, and the needs of your workforce,” he told HR Dive via email.
HR also should continue to test the AI policies and update them regularly, according to Frechette. “It’s not about getting it perfect on Day One,” he said. “It’s about making sure it’s still relevant and effective six months later.”