Tools & Platforms
Experian’s Lintner Discusses AI Transformation at the Credit Bureau
Alex Lintner, CEO of Experian Software and Technology, lays out how the credit reporting and business services company has transformed itself to be tech-driven, with its fair share of AI in the mix.
He discusses how Experian uses generative AI in areas such as customer engagement, powering chatbots and other tools that offer financial guidance, including credit education.
Of course, AI cannot run amok and unsupervised through Experian’s vast library of documents and other data. Lintner describes some of the oversight and guardrails put in place before AI gets to work. He also talks about using small language models where appropriate, and about staying mindful of generative AI’s occasional eagerness to produce answers, even to the point of hallucination.
Tools & Platforms
Symrise unveils new global data and AI hub in Barcelona

Symrise has launched its new global data and AI hub in Barcelona, marking a major milestone in its digitalisation strategy.
The pioneering centre, named the #BCN Hub, will create a dedicated platform to develop advanced data-driven solutions that strengthen innovation across Symrise’s core customer markets.
Around 25 postgraduates have joined an intensive 12-month programme designed to combine structured learning with hands-on experience, laying the foundation for a sustainable in-house data and AI capability.
Symrise selected Barcelona for its outstanding talent pool, strong academic institutions and vibrant technology ecosystem.
The hub provides an inspiring environment in which teams can develop ideas, build prototypes and transform them into scalable solutions that deliver measurable benefits for customers in highly competitive markets.
The #BCN Hub will enable Symrise to create digital solutions that help customers respond faster to consumer trends, optimise product development and ensure supply chain resilience.
By focusing on six key business areas – finance, research and development, procurement, supply chain, consumer insights and sustainability – the hub will support the delivery of high-quality and sustainable products.
The approach empowers manufacturers to accelerate their innovation cycles, improve transparency and bring personalised, trend-driven products to market more efficiently.
“The #BCN Hub will become an in-house centre for innovation, collaboration and transformation at Symrise,” said Eliza Millet, Chief Digital and Information Officer. “It will allow us to accelerate product development, strengthen strategic business planning and boost operational efficiency.”
Chief Transformation Officer Nick Russel added: “Digitalisation goes beyond advancing projects – it opens new ways to approach challenges creatively and deliver value to our customers.”
The programme welcomes graduates from diverse fields such as computer science, physics and bioinformatics, with participants bringing strong analytical skills and the ability to adapt quickly to complex topics.
Symrise offers intensive mentoring from experienced experts in digital transformation and analytics, combining structured training with real-world projects.
This personalised guidance accelerates learning and creates long-term career opportunities within the company, ensuring a steady pipeline of digital talent to support customer-focused innovation.
The hub actively engages with Barcelona’s ecosystem, with Symrise collaborating with the Universitat Autònoma de Barcelona, participating in recruitment events and partnering with technology leaders such as Microsoft.
The company also plans community events and meetups to foster knowledge exchange within the tech scene.
On the technology side, the hub leverages Symrise’s established platforms, which include:
- Atl@s Lakehouse, built on Databricks, for advanced analytics
- UiPath for process automation
- Mendix for rapid application development
This combination ensures fast prototyping and smooth scaling of solutions that enhance efficiency and responsiveness across the value chain.
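For illustration only, here is a minimal sketch of the kind of lakehouse analysis such a stack enables. It assumes a hypothetical Delta table (`atlas.curated.sales_orders`) with invented column names; this is not Symrise’s actual schema, just an indication of how a prototype query against a Databricks-style lakehouse might look.

```python
# Minimal sketch (hypothetical): querying a curated table on a
# Databricks-style lakehouse to surface a consumer-trend signal.
# The table and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trend-prototype").getOrCreate()

# Read a (hypothetical) curated sales table from the lakehouse.
orders = spark.read.table("atlas.curated.sales_orders")

# Aggregate monthly demand per flavour profile to spot emerging trends.
trend = (
    orders
    .withColumn("month", F.date_trunc("month", F.col("order_date")))
    .groupBy("month", "flavour_profile")
    .agg(F.sum("quantity").alias("units"))
    .orderBy("month", F.desc("units"))
)

trend.show(10)
```

In a setup like the one described, an aggregated signal of this sort could then feed a Mendix front end or a UiPath automation downstream, which is the kind of prototyping-to-scaling path the hub describes.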
With the #BCN Hub, Symrise aims to expand its digital capabilities, strengthen its global presence and continuously deliver innovative solutions that help its customers succeed in dynamic markets.
Tools & Platforms
OpenAI to burn through $115B by 2029 – Computerworld

The AI emperor is not wearing any clothes — I mean, seriously, he’s as naked as a jaybird. Or to be more precise, the big-name AI companies are burning through unprecedented amounts of money this year. With no clear plan to make a profit anytime soon. (Sure, Nvidia is coining cash with its AI chips, but the AI software companies are another matter.)
For example, we now know — thanks to analysis from The Information — that OpenAI will burn $115 billion (that’s billion with a capital B) by 2029. That’s up $80 billion from previous estimates. On top of that, OpenAI has ordered $10 billion of Broadcom’s yet-to-ship AI chips for its yet-to-break-ground proprietary data centers. Oh, and there’s the $500 billion that OpenAI and friends SoftBank, Oracle, and MGX are already committed to spending on the Stargate Project AI data centers.
It’s not just OpenAI. Meta, Amazon, Alphabet, and Microsoft will collectively spend up to $320 billion in 2025 on AI-related technologies. Amazon alone aims to invest more than $100 billion in its AI initiatives, while Microsoft will dedicate $80 billion to data center expansion for AI workloads. And Meta’s CEO has set an AI budget of around $60 billion for this year.
Tools & Platforms
Babylon Bee 1, California 0: Court Strikes Down Law Regulating Election-Related AI Content | American Enterprise Institute

By reducing traditional barriers to content creation, the AI revolution holds the potential to unleash an explosion in creative expression. It also increases the societal risks associated with the spread of misinformation. This tension is the subject of a recent landmark judicial decision, Babylon Bee v. Bonta (hat tip to Ajit Pai, whose social media account remains an outstanding follow). The eponymous satirical news site and others challenged California’s AB 2839, which prohibited the dissemination of “materially deceptive” AI-generated audio or video content related to elections. Although the court recognized that the case presented a novel question about “synthetically edited or digitally altered” content, it struck down the law, concluding that the rise of AI does not justify a departure from long-standing First Amendment principles.
AB 2839 was California’s attempt to regulate the use of AI and other digital tools in election-related media. The law defined “materially deceptive” content as audio or visual material that has been intentionally created or altered so that a reasonable viewer would believe it to be an authentic recording. It applied specifically to depictions of candidates, elected officials, election officials, and even voting machines or ballots, where the altered content was “reasonably likely” to harm a candidate’s electoral prospects or undermine public confidence in an election. While the statute carved out exceptions for candidates making deepfakes of themselves and for satire or parody, those exceptions required prominent disclaimers stating that the content had been manipulated.
The court recognized that the electoral context raises the stakes for both parties. Because the law regulated speech on the basis of content, the court applied strict scrutiny: The law is constitutional only if it serves a compelling governmental interest and is the least restrictive means of protecting that interest. On the one hand, the court recognized that the state has a compelling interest in preserving the integrity of its election process. California noted how AI-generated robocalls purporting to be from President Biden encouraged New Hampshire voters not to go to the polls during the 2024 primary. But on the other hand, the Supreme Court has recognized that political speech occupies the “highest rung” of First Amendment protection. That tension is the opinion’s throughline: While elections justify significant regulation, they also demand the most protection for individual speech.
But the court ultimately held that California struck the wrong balance. The state argued that the bill was a logical extension of traditional harmful-speech regulations such as defamation or fraud. But the court ruled that the law reached much further. It did not limit liability to instances of actual harm; it extended to any content “reasonably likely” to cause material harm. Nor did it limit recovery to actual candidate victims; instead, it allowed any recipient of allegedly deceptive content to sue for damages. This private right of action deputized roving “censorship czars” across the state, whose malicious or politically motivated suits risk chilling a broad swath of political expression.
Given this breadth, the court found the law’s safe harbor was insufficient. The law exempted satirical content (such as that produced by the Bee) as long as it carried a disclaimer that the content was digitally manipulated, in accordance with the act’s formatting requirements. But the court found that this compelled disclosure was itself unconstitutional, as it drowned out the plaintiff’s message: “Put simply, a mandatory disclaimer for parody or satire would kill the joke.” This was especially true in contexts such as mobile devices, where the formatting requirements meant the disclaimer would take up the entire screen—a problem that I have discussed elsewhere in the context of Federal Trade Commission disclaimer rules.
Perhaps most importantly, the court recognized the importance of counter-speech and market solutions as alternative remedies for disinformation. It credited crowd-sourced fact-checking such as X’s Community Notes, along with AI tools such as Grok, as scalable solutions already being adopted in the marketplace. And it noted that California could fund AI educational campaigns to raise awareness of the issue or form committees to combat disinformation via counter-speech.
The court’s emphasis on private, speech-based solutions points the way forward for other policymakers wrestling with deepfakes and other AI-generated disinformation concerns. Private, market-driven solutions offer a more constitutionally sound path than empowering the state to police truth and risk chilling protected expression. The AI revolution is likely to disrupt traditional patterns of content creation and dissemination in society. But fundamental First Amendment principles are sufficiently flexible to adapt to these changes, just as they have when presented with earlier disruptive technologies. When presented with problematic speech, the first line of defense is more speech—not censorship. Efforts at the state or federal level to regulate AI-generated content should respect this principle if they are to survive judicial scrutiny.