Ethics & Policy
Sam Altman Warns of AI Risks, Ethics, and Bubble in Carlson Interview

Altman’s Candid Reflections on AI Ethics
In a revealing interview with Tucker Carlson, OpenAI CEO Sam Altman opened up about the sleepless nights plaguing him amid the rapid evolution of artificial intelligence. Altman confessed that the moral quandaries surrounding his company’s chatbot, ChatGPT, keep him awake, particularly decisions on handling sensitive user interactions like suicide prevention. He emphasized the weight of these choices, noting that OpenAI strives to set ethical boundaries while respecting user privacy.
Altman delved into broader societal impacts, warning of potential “AI privilege” where access to advanced tools could exacerbate inequalities. He called for global input to shape AI’s future, highlighting the need for inclusive regulation to mitigate risks like fraud and even engineered pandemics, as reported in a recent WebProNews article on his predictions for workforce transformation by 2025.
Confronting Conspiracy Theories and Personal Attacks
The conversation took a dramatic turn when Carlson pressed Altman on a conspiracy theory tied to the 2024 death of former OpenAI researcher Suchir Balaji, found with a gunshot wound in his San Francisco apartment. Altman firmly denied any involvement, expressing frustration over baseless accusations that have swirled online. This exchange, detailed in a Moneycontrol report, underscores the intense scrutiny Altman faces as AI’s public figurehead.
Posts on X have amplified these tensions, with users alleging that Altman has a history of misleading statements, including claims from former board members that his 2023 ousting was driven by dishonesty over safety testing for models like GPT-4. Such sentiments echo broader criticisms, as seen in Wikipedia’s account of his temporary removal from OpenAI, which cited concerns over AI safety and alleged abusive behavior.
Navigating Past Scandals and Industry Rivalries
Altman’s tenure has been marred by high-profile controversies, including a lawsuit from his sister Ann alleging sexual abuse from 1997 to 2006, as covered by the BBC. He has denied these claims, but they add to the narrative of personal and professional turmoil. In the interview, Altman addressed his dramatic 2023 ousting and reinstatement, attributing it to boardroom clashes over leadership and safety priorities.
He also touched on rivalries, particularly with Elon Musk, whom he accused of initially dismissing OpenAI’s prospects before launching a competing venture and filing lawsuits. This feud, highlighted in X posts and a Guardian profile, paints Altman as a resilient but polarizing leader who has outmaneuvered opponents like Musk and dissenting board members.
Vision for AI’s Future Amid Economic Warnings
Looking ahead, Altman expressed optimism about AI’s potential to create “transcendentally good” systems through new computing paradigms, as noted in a Yahoo Finance piece. Yet, he cautioned about an emerging AI bubble, likening it to the dot-com era in a CNBC report from August 2025, amid surging industry investments.
Altman advocated for open-source models to democratize AI, mentioning plans for powerful releases, per discussions at TED events. However, critics on X question his motives, pointing to OpenAI’s shift from nonprofit to for-profit status and price hikes for ChatGPT, which they argue prioritize profits over accessibility.
Balancing Innovation with Societal Safeguards
In addressing workforce changes, Altman predicted significant transformations by 2025, urging preparation for AI-driven disruptions while emphasizing ethical safeguards. He also reflected on cultural shifts, voicing a preference for phone calls over endless meetings, a remark that sparked debate in the Times of India and points toward more efficient communication in an AI-augmented world.
Ultimately, Altman’s interview reveals a leader grappling with immense power and responsibility. As OpenAI pushes boundaries, from contextual AI awareness to global ethical frameworks, the controversies surrounding him highlight the high stakes of steering humanity’s technological frontier. With regulatory eyes watching and public sentiment divided, as evident in real-time X discussions, Altman’s path forward demands transparency to rebuild trust in an era where AI’s promise and perils are inextricably linked.
Ethics & Policy
Chile faces national debate as proposed bill to regulate AI use advances

“It’s not that a country like Chile aspires to have a seat at the table with the world’s greatest powers, but rather that it already has one,” stated Aisén Etcheverry, Chile’s Minister of Science, Technology, Knowledge and Innovation earlier this year in an interview with France 24.
Her words capture Chile’s growing ambition to advance a pioneering bill to regulate artificial intelligence (AI), sparking a national debate over how to balance innovation with ethics.
The Latin American Artificial Intelligence Index (ILIA) recently confirmed Chile as the regional leader in AI thanks to high levels of investment in technological infrastructure, training programmes and supporting policies.
However, as President Gabriel Boric’s government seeks to expand the use of AI to drive modernization and sustainable growth in the country, discussion has increasingly focused on implementing AI regulations that promote an “ethical, transparent and responsible use of AI for the benefit of all.”
“Some companies see regulations as an opportunity to grow, while others view it as a burden. But in the long run, those who resist innovation will lose ground in the market,” said Sebastian Martinez, General Manager at Nisum Chile, a technology consulting and software development company, in conversation with Latin America Reports.
The government’s proposed AI Regulation bill, first introduced to Congress in May 2024, was approved by the Chamber of Deputies on August 4, 2025, and has proceeded to the Committee of Future, Science, Technology, Knowledge, and Innovation to widen the conversation by drawing on the views of experts from the public, private, academic and civil society sectors.
Whilst the government maintains that the implementation of this bill would promote innovation and responsible development aligned with international standards, critics warn that tight regulation could instead hinder the technological progress the country aims to achieve.
“Artificial intelligence isn’t a threat; it’s a tool. But unless we invest in educating people about it, fear will dominate, and Chile will miss out on the benefits this technology can bring,” Martinez noted.
The proposed AI regulation bill
President Boric has consistently emphasised the importance of investing in artificial intelligence as a key driver of development in Chile and Latin America, placing ambition, innovation and informed decision-making at the heart of his government’s approach.
Whilst acknowledging the risks posed by AI, the president has underscored the human role and responsibility in regulating advanced technologies to ensure ethical practice.
Speaking at the Congreso Futuro 2024 forum on artificial intelligence, Boric stated: “It is necessary to accompany its development with deep ethical reflection.”
These comments were made shortly before his government introduced its proposed bill on May 7, 2024, aimed at ensuring that the development and use of AI in Chile respects citizens’ rights while also promoting innovation and strengthening the state’s capacity to respond to the risks and challenges posed by the technology.
The bill is aligned with UNESCO’s Recommendation on the Ethics of AI, a framework that guided Chile in becoming the first country worldwide to apply and complete UNESCO’s Readiness Assessment Methodology (RAM).
Audrey Azoulay, Director-General of UNESCO, praised the initiative, stating: “Chile has emerged as a global leader in ethical AI governance, and we are proud that UNESCO has played an essential role in helping achieve this landmark.”
If approved, the bill would boost innovation in the business sector, with particular support for small and medium-sized enterprises (SMEs), by fostering the technological conditions needed for growth whilst maintaining regulatory oversight of AI systems.
The proposal also seeks to protect Chileans from algorithmic discrimination, from a lack of transparency in AI interactions, and from AI decision-making that could affect fundamental rights in areas such as healthcare, education, law and finance.
Chile’s approach to regulating AI
Chile’s proposed AI regulation adopts a risk-based framework, similar to the EU AI Act, classifying systems into four categories: unacceptable risk, high risk, limited risk, and no evident risk.
Under the proposal, AI systems considered to pose an unacceptable risk would be strictly banned. This includes technologies that undermine human dignity, such as those generating deepfakes or sexual content that exploits vulnerable groups like children and teenagers.
The bill also prohibits systems designed to manipulate emotions and influence decisions without informed consent, as well as those that collect or process facial biometric data without explicit permission.
High-risk AI systems are those that could significantly impact health, safety, fundamental constitutional rights, the environment, or consumer rights. AI tools used in recruitment processes to screen and filter job applications, for instance, fall under this category because of their potential for bias and discrimination.
Those deemed to pose limited risk include AI systems with minimal potential for manipulation, deception, or error in user interactions, such as public service chatbots that respond to queries within their area of competence. At the lowest tier, systems considered to carry no evident risk are tools like recommendation engines for films or music, technologies that pose no harm to fundamental rights under any circumstances.
Under this model, AI systems will not require pre-market certification or review. Instead, each company is responsible for assessing and classifying its own systems, according to the established risk categories.
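To make the tiered, self-assessed structure concrete, here is a minimal Python sketch of how a company might model the four categories and run a first-pass classification of its own systems. It is purely illustrative: the class names, fields and decision rules below are assumptions made for this article, not anything prescribed by the bill.

```python
from enum import Enum
from dataclasses import dataclass

class RiskTier(Enum):
    """The four tiers named in the bill, in descending order of severity."""
    UNACCEPTABLE = "unacceptable risk"   # banned outright (e.g. exploitative deepfakes)
    HIGH = "high risk"                   # may affect health, safety, or fundamental rights
    LIMITED = "limited risk"             # minimal potential for manipulation or error
    NO_EVIDENT = "no evident risk"       # e.g. film or music recommendation engines

@dataclass
class AISystem:
    name: str
    processes_biometrics_without_consent: bool = False
    affects_fundamental_rights: bool = False
    interacts_with_users: bool = False

def self_classify(system: AISystem) -> RiskTier:
    """Illustrative self-assessment; the bill leaves classification to each company."""
    if system.processes_biometrics_without_consent:
        return RiskTier.UNACCEPTABLE
    if system.affects_fundamental_rights:
        return RiskTier.HIGH
    if system.interacts_with_users:
        return RiskTier.LIMITED
    return RiskTier.NO_EVIDENT

# Example: a CV-screening tool would land in the high-risk tier.
print(self_classify(AISystem("cv_screener", affects_fundamental_rights=True)).value)
```

In practice the decisive questions would come from the statute and its regulations rather than a handful of boolean flags, but the shape of the obligation is the same: classify first, then meet the duties attached to that tier.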
As explained by Minister Etcheverry, cases of non-compliance will lead to administrative sanctions imposed by the future Chilean Data Protection Agency, with decisions open to appeal before the country’s courts.
Innovation or limitation?
Whilst many actors in the public, private and civil society sectors support the proposed AI bill for its emphasis on responsible and ethical use of technology, experts have also raised concerns regarding the risk-based framework’s close alignment with EU standards and the potential bureaucracy that this model could introduce.
Sebastián Dueñas, researcher at the Law, Science and Technology Program at Pontificia Universidad Católica de Chile (UC), criticized the framework for its strict regulations and vague definition of what constitutes a “high-risk” system. He warned that such ambiguity could stifle innovation, discouraging developers who fear heavy sanctions.
The framework’s similarity to the EU AI Act has also raised doubts given the substantial differences between the Chilean context and that of the EU. Matías Aránguiz, professor at the Faculty of Law and deputy director of the Law, Science and Technology Program at UC, highlighted the disparity in budget and personnel as a major challenge in effectively implementing a similar risk-based regulatory approach in Chile.
In August, the Santiago Chamber of Commerce, a non-profit trade association representing over 2,500 companies across Chile’s key economic sectors, expressed concern about the bill’s potential impact.
The association warned that the rigidity of the proposed risk-based framework could negatively affect technological development, investment, and national competitiveness.
It also emphasized the need to foster responsible AI development in Chile whilst avoiding overly restrictive regulations that could limit innovation and the technology’s transformative potential.
Echoing this view, Dueñas commented: “Regulating AI is necessary, but doing so with the same rigidity as the European Union—just as they are trying to soften their own framework—would only add friction to Chilean development.”
For Martinez, on the other hand, what’s most needed is investment, rather than regulation. “Chile urgently needs to invest in AI. Without it, we risk falling further behind the U.S., and the gap between our markets will only continue to widen,” he stressed.
The government’s proposed AI regulation bill reflects more than two years of collaborative work, with input from the national AI Expert Committee, congressional commissions, and members of academia and civil society alike.
However, the debate continues: actors from diverse sectors convened on August 14 to take stock of both the progress made and the complexities that remain in navigating this technological challenge.
This article was originally published by Nadia Hussain on Latin America Reports and was re-published with permission.
Ethics & Policy
Guerra publishes on AI ethics and blockchain technology

Katia Guerra, assistant professor of information technology management, has had a series of her recent academic contributions published on subjects spanning ethical artificial intelligence, AI system adoption and blockchain technology. Guerra’s work highlights the multifaceted nature of modern technological research.
Guerra published two papers in the AMCIS 2025 Proceedings. The first, “Ethical AI Design and Implementation: A Systematic Literature Review,” examines how AI can be implemented ethically in order to comply with new rules and guidelines set by major governing bodies.
The second, a co-authored paper titled “AI Self-diagnosis Systems Adoption: A Socio Technical Perspective,” explores the environmental and technological factors at play when organizations adopt AI self-diagnosis systems. Both papers address significant aspects of AI development and implementation.
Additionally, Guerra had work published in the International Review of Law, Computers & Technology, where she co-authored “Blockchain technology: an analysis of economic, technical, and legal implications.” The paper details how blockchain technology is not yet ready to replace traditional business transactions because it does not fully adhere to existing legal rules, and it outlines how that could begin to change.
Ethics & Policy
Personalization and Ethics by 2025

The Rise of AI in Tailored Customer Experiences
In the fast-evolving world of marketing, artificial intelligence is reshaping how brands connect with consumers, particularly through hyper-personalized strategies that anticipate needs before they’re even voiced. As we approach 2025, companies are leveraging AI to analyze vast datasets, from browsing histories to purchase patterns, creating experiences that feel uniquely individual. This shift isn’t just about efficiency; it’s about building loyalty in an era where generic ads no longer cut through the noise.
Take Netflix, for instance, which uses AI algorithms to recommend content based on viewing habits, a model that’s inspired countless marketers. Similarly, e-commerce giants like Amazon employ predictive analytics to suggest products, boosting conversion rates significantly. According to insights from HubSpot’s blog on AI personalization strategies, these tactics can increase customer engagement by up to 20%, emphasizing the need for real-time data processing to deliver relevant offers instantaneously.
Predictive Analytics and Real-Time Adaptation
Predictive analytics stands at the forefront of this transformation, enabling marketers to forecast consumer behavior with remarkable accuracy. By 2025, AI tools are expected to integrate seamlessly with customer relationship management systems, allowing for dynamic content adjustments on the fly. For example, if a user abandons a cart, AI can trigger personalized emails with tailored discounts, drawing from past interactions to optimize timing and messaging.
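As a rough illustration of how such a trigger might be wired up, the Python sketch below drafts a win-back email once a cart has been idle for an hour, scaling the discount based on past behavior. The thresholds, field names and payload shape are invented for this example; real platforms expose this logic through their own automation and CRM tools.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CartEvent:
    user_id: str
    items: list[str]
    last_activity: datetime
    past_discount_redemptions: int  # would be pulled from the CRM in a real system

def build_winback_email(event: CartEvent, now: datetime) -> dict | None:
    """Return a personalized email payload if the cart looks abandoned, else None."""
    if now - event.last_activity < timedelta(hours=1):
        return None  # still an active session, do not interrupt
    # Shoppers who have redeemed offers before get a slightly deeper discount.
    discount = 15 if event.past_discount_redemptions > 0 else 10
    return {
        "to": event.user_id,
        "subject": f"Still thinking it over? {discount}% off your cart",
        "items": event.items,
        "send_at": (now + timedelta(hours=2)).isoformat(),  # delay to optimize timing
    }

event = CartEvent("user-42", ["running shoes"], datetime(2025, 3, 1, 9, 0), 2)
print(build_winback_email(event, datetime(2025, 3, 1, 12, 0)))
```

Production systems replace the hand-written rules with models trained on interaction history, but the pattern is the same: detect the signal, personalize the offer, and choose the send time.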
Recent reports highlight this trend’s momentum. A piece from ContentGrip notes that the global AI in marketing market is projected to hit $47.32 billion in 2025, driven by a 36.6% compound annual growth rate. This growth underscores how AI isn’t merely automating tasks but enhancing strategic decision-making, with 88% of marketers already incorporating it into daily workflows.
Ethical Considerations in Data-Driven Personalization
Yet, as AI delves deeper into personalization, ethical concerns loom large. Privacy regulations like GDPR and emerging U.S. laws demand transparent data usage, pushing brands to balance customization with consent. Marketers must ensure algorithms avoid biases that could alienate segments of their audience, a point stressed in discussions on X where users debate AI’s role in fair marketing practices.
McKinsey’s analysis in their report on the next frontier of personalized marketing warns that without ethical frameworks, trust could erode. They advocate for generative AI to craft tailored narratives while respecting user boundaries, a strategy that’s gaining traction among industry leaders.
Integration with Emerging Technologies
Looking ahead, AI personalization is merging with technologies like augmented reality and voice search to create immersive experiences. Imagine virtual try-ons personalized via AI, or voice assistants recommending products based on conversational cues. Trends from ON24’s predictions for 2025 suggest that machine learning will refine these interactions, anticipating needs through pattern recognition and delivering hyper-dynamic content.
News from WebProNews echoes this, detailing how AI-driven strategies in sales and marketing are focusing on sustainability and data privacy for 2025. Their article on digital marketing trends highlights omnichannel optimization, where AI forecasts engagement across platforms, blending automation with human creativity for agile campaigns.
Challenges and Future Outlook
Despite the promise, challenges persist, including the high costs of AI implementation and the need for skilled talent. Small businesses, in particular, may struggle to compete, but accessible tools from providers like Google and IBM are democratizing access. Posts on X from influencers like Greg Isenberg discuss “vibe marketing,” where AI agents generate content calendars, revolutionizing planning and testing.
Ultimately, as Dotdigital’s blog on personalization in 2025 posits, the key lies in combining AI insights with authentic connections. Brands that master this will not only personalize but humanize their marketing, fostering long-term relationships in a data-saturated world. With innovations accelerating, 2025 could mark the pinnacle of customer-centric strategies, where AI turns every interaction into a meaningful dialogue.