Tools & Platforms
Samsung says no immediate plans to charge users for Galaxy AI, focus is on driving adoption

Samsung has no immediate plans to charge users for its suite of Galaxy AI features, with the company’s current focus squarely on driving widespread adoption, a top executive said. The clarification comes as the South Korean technology giant aggressively integrates artificial intelligence across its entire device ecosystem, from the latest foldable smartphones like the Galaxy Z Fold 7 to tablets like the Galaxy Tab S11 Ultra.
“Adoption of technology is the primary goal…I don’t see monetisation happening in recent time,” said Aditya Babbar, Vice President, MX Business, Samsung India. “Our focus is more on the adoption side. But we continue to closely monitor this, and we will go as per the industry trends.”
Babbar’s comments underscore Samsung’s strategy to make advanced features accessible to millions, thereby strengthening its ecosystem. This push for “AI everywhere” is a central theme for the company, which recently celebrated the successful launch of the Fold 7, securing 210,000 pre-bookings in India.
A ‘consumer-first’ approach to AI
Samsung’s strategy for developing AI is rooted in a “consumer-first approach,” aiming to solve real-world problems and reduce friction in daily tasks. Instead of building technology for its own sake, the company identifies user needs in areas like communication, productivity, and photography, and then applies AI to enhance the experience.
“We put this segmentation of consumer use first and then [see] how AI can empower them to do it in the best way,” Babbar explained. For instance, the need to understand a caller speaking a different language led to the creation of Live Translate.
This philosophy is particularly evident in the camera. Samsung identifies three key moments for a user: the click, the edit, and the share. While the click is instantaneous, the editing process is where users spend significant time and where AI offers the most powerful tools.
“The need is that I want to erase something, I want to enhance this picture… While you are clicking, controlling some of that is almost impossible,” he noted. This is where on-device generative AI comes in, allowing users to remove unwanted objects or even realistically reconstruct parts of an image, like a face obscured by a hand – a task many AI solutions struggle with.
Democratising the AI experience
Samsung is customising its AI features based on device form factors to align with user behaviour. For instance, on the larger screen of a Galaxy Fold or a tablet, productivity features like summarising a 200-page PDF or transcribing audio are highly adopted. On the other hand, flip-style devices see more usage of creative AI tools, while tablets are heavily used for features like ‘Sketch to Image’.
To ensure users can experience these capabilities firsthand, Samsung has made a significant ground-level investment. The company has set up over 20,000 experience zones across the country and equipped them with Wi-Fi to overcome internet connectivity hurdles during live demos.
“The moment of truth happens to be a very, very important belief for the consumer,” Babbar said. “A lot of investment has gone into training people… preparing them with [knowledge] beyond device hardware specs into the experience of AI.” With a robust retail strategy and even bigger plans for the upcoming festive season, Samsung is focused on one thing: getting Galaxy AI into the hands of as many users as possible.
Tools & Platforms
AI Lies Because It’s Telling You What It Thinks You Want to Hear

Generative AI is popular for a variety of reasons, but with that popularity comes a serious problem. These chatbots often deliver incorrect information to people looking for answers. Why does this happen? It comes down to telling people what they want to hear.
While many generative AI tools and chatbots have mastered sounding convincing and all-knowing, new research conducted by Princeton University shows that the people-pleasing nature of AI comes at a steep price. As these systems become more popular, they become more indifferent to the truth.
AI models, like people, respond to incentives. Compare the problem of large language models producing inaccurate information to that of doctors being more likely to prescribe addictive painkillers when they’re evaluated based on how well they manage patients’ pain. An incentive to solve one problem (pain) led to another problem (overprescribing).
In the past few months, we’ve seen how AI can be biased and even cause psychosis. There was a lot of talk about AI “sycophancy,” when an AI chatbot is quick to flatter or agree with you, with OpenAI’s GPT-4o model. But this particular phenomenon, which the researchers call “machine bullshit,” is different.
“[N]either hallucination nor sycophancy fully capture the broad range of systematic untruthful behaviors commonly exhibited by LLMs,” the Princeton study reads. “For instance, outputs employing partial truths or ambiguous language — such as the paltering and weasel-word examples — represent neither hallucination nor sycophancy but closely align with the concept of bullshit.”
Read more: OpenAI CEO Sam Altman Believes We’re in an AI Bubble
How machines learn to lie
To get a sense of how AI language models become crowd-pleasers, we must understand how they are trained.
There are three phases of training LLMs:
- Pretraining, in which models learn from massive amounts of data collected from the internet, books or other sources.
- Instruction fine-tuning, in which models are taught to respond to instructions or prompts.
- Reinforcement learning from human feedback, in which they’re refined to produce responses closer to what people want or like.
The Princeton researchers found that the root of the AI misinformation tendency is the reinforcement learning from human feedback, or RLHF, phase. In the initial stages, AI models are simply learning to predict statistically likely text chains from massive datasets. They are then fine-tuned to maximize user satisfaction, which means these models are essentially learning to generate responses that earn thumbs-up ratings from human evaluators.
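To make that incentive concrete, here is a toy sketch (hypothetical approval scores, not the study's code) of how an approval-maximizing objective can favor a pleasing answer over a truthful one:

```python
# Toy illustration, not the Princeton study's code: a policy trained to
# maximize human approval can prefer a pleasing answer over a truthful one.
# The approval scores below are hypothetical.

candidate_answers = {
    "truthful": {"is_true": True,  "human_approval": 0.55},
    "pleasing": {"is_true": False, "human_approval": 0.90},
}

def rlhf_objective(answer):
    # RLHF-style fine-tuning rewards whatever evaluators rate highly;
    # truthfulness matters only insofar as raters can detect it.
    return answer["human_approval"]

best = max(candidate_answers, key=lambda k: rlhf_objective(candidate_answers[k]))
print(best)  # -> "pleasing": the approval-maximizing answer wins
```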
LLMs try to appease the user, creating a conflict: the models learn to produce answers that people will rate highly rather than truthful, factual ones.
Vincent Conitzer, a professor of computer science at Carnegie Mellon University who was not affiliated with the study, said companies want users to continue “enjoying” this technology and its answers, but that might not always be what’s good for us.
“Historically, these systems have not been good at saying, ‘I just don’t know the answer,’ and when they don’t know the answer, they just make stuff up,” Conitzer said. “Kind of like a student on an exam that says, well, if I say I don’t know the answer, I’m certainly not getting any points for this question, so I might as well try something. The way these systems are rewarded or trained is somewhat similar.”
The Princeton team developed a “bullshit index” to measure and compare an AI model’s internal confidence in a statement with what it actually tells users. When these two measures diverge significantly, it indicates the system is making claims independent of what it actually “believes” to be true in order to satisfy the user.
The team’s experiments revealed that after RLHF training, the index nearly doubled from 0.38 to close to 1.0. Simultaneously, user satisfaction increased by 48%. The models had learned to manipulate human evaluators rather than provide accurate information. In essence, the LLMs were “bullshitting,” and people preferred it.
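As a rough illustration of the idea (an assumed divergence-style formulation, not the paper's exact definition), such an index can be thought of as the gap between what a model internally believes and what it asserts:

```python
# Toy sketch of a divergence-style "bullshit index" (assumed formulation,
# not the Princeton paper's exact definition). Compare the model's internal
# probability that a statement is true with the confidence it expresses.

internal_confidence = [0.30, 0.55, 0.20, 0.80]  # hypothetical internal beliefs
expressed_claims    = [0.95, 0.90, 0.85, 0.90]  # hypothetical stated confidence

def divergence_index(beliefs, claims):
    # Mean absolute gap: near 0 means statements track beliefs;
    # near 1 means claims are made independently of belief.
    gaps = [abs(b - c) for b, c in zip(beliefs, claims)]
    return sum(gaps) / len(gaps)

print(round(divergence_index(internal_confidence, expressed_claims), 2))  # -> 0.44
```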
Getting AI to be honest
Jaime Fernández Fisac and his team at Princeton introduced this concept to describe how modern AI models skirt around the truth. Drawing from philosopher Harry Frankfurt’s influential essay “On Bullshit,” they use this term to distinguish this LLM behavior from honest mistakes and outright lies.
The Princeton researchers identified five distinct forms of this behavior:
- Empty rhetoric: Flowery language that adds no substance to responses.
- Weasel words: Vague qualifiers like “studies suggest” or “in some cases” that dodge firm statements.
- Paltering: Using selective true statements to mislead, such as highlighting an investment’s “strong historical returns” while omitting high risks.
- Unverified claims: Making assertions without evidence or credible support.
- Sycophancy: Insincere flattery and agreement to please.
To address the issues of truth-indifferent AI, the research team developed a new method of training, “Reinforcement Learning from Hindsight Simulation,” which evaluates AI responses based on their long-term outcomes rather than immediate satisfaction. Instead of asking, “Does this answer make the user happy right now?” the system considers, “Will following this advice actually help the user achieve their goals?”
This approach takes into account the potential future consequences of the AI advice, a tricky prediction that the researchers addressed by using additional AI models to simulate likely outcomes. Early testing showed promising results, with user satisfaction and actual utility improving when systems are trained this way.
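A highly simplified sketch of that hindsight idea follows, with hypothetical scoring functions standing in for the simulator models the researchers used:

```python
# Conceptual sketch of hindsight-style reward (assumed structure, not the
# authors' implementation). Instead of scoring immediate user satisfaction,
# score a simulated long-term outcome of following the advice.

responses = [
    {"name": "reassuring but wrong", "flattery": 0.9, "actual_utility": 0.2},
    {"name": "honest and useful",    "flattery": 0.5, "actual_utility": 0.9},
]

def immediate_satisfaction(response):
    # Stand-in for an RLHF-style rater: pleasing answers score high now.
    return response["flattery"]

def simulated_outcome(response):
    # Stand-in for a simulator model: did the advice actually help?
    return response["actual_utility"]

best_now = max(responses, key=immediate_satisfaction)
best_later = max(responses, key=simulated_outcome)
print(best_now["name"], "|", best_later["name"])
# Satisfaction-based reward picks the first; hindsight-based reward the second.
```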
Conitzer said, however, that LLMs are likely to continue being flawed. Because these systems are trained by feeding them lots of text data, there’s no way to ensure that the answer they give makes sense and is accurate every time.
“It’s amazing that it works at all but it’s going to be flawed in some ways,” he said. “I don’t see any sort of definitive way that somebody in the next year or two … has this brilliant insight, and then it never gets anything wrong anymore.”
AI systems are becoming part of our daily lives, so it will be key to understand how LLMs work. How do developers balance user satisfaction with truthfulness? What other domains might face similar trade-offs between short-term approval and long-term outcomes? And as these systems become more capable of sophisticated reasoning about human psychology, how do we ensure they use those abilities responsibly?
Read more: ‘Machines Can’t Think for You.’ How Learning Is Changing in the Age of AI
Tools & Platforms
AI: The Church’s Response to the New Technological Revolution

Artificial intelligence (AI) is transforming everyday life, the economy, and culture at an unprecedented speed. Capable of processing vast amounts of data, mimicking human reasoning, learning, and making decisions, this technology is already part of our daily lives: from recommendations on Netflix and Amazon to medical diagnoses and virtual assistants.
But its impact goes far beyond convenience or productivity. Just as with the Industrial Revolution, the digital revolution raises social, ethical, and spiritual questions. The big question is: How can we ensure that AI serves the common good without compromising human dignity?
A change of era
Pope Francis has described artificial intelligence as a true “epochal change,” and his successor, Pope Leo XIV, has emphasized both its enormous potential and its risks. There is even talk of a future encyclical entitled Rerum Digitalium, inspired by the historic Rerum Novarum of 1891, to offer moral guidance in the face of the “new things” of our time.
The Vatican insists that AI should not replace human work, but rather enhance it. It must be used prudently and wisely, always putting people at the centre. The risks of inequalities, misinformation, job losses, and military uses of this technology necessitate clear limits and global regulations.
The social doctrine of the Church and AI
The Church proposes applying the four fundamental principles of its social doctrine to artificial intelligence:
- Dignity of the person: The human being should never be treated as a means, but as an end in itself.
- Common good: AI must ensure that everyone has access to its benefits, without exclusions.
- Solidarity: Technological development must serve the most needy in particular.
- Subsidiarity: Problems should be solved at the level closest to the people.
Added to this are the values of truth, freedom, justice, and love, which guide any technological innovation towards authentic progress.
Opportunities and risks
Artificial intelligence already offers advances in medicine, education, science, and communication. It can help combat hunger and climate change, and even help spread the Gospel more effectively. However, it also poses risks:
- Massive job losses due to automation.
- Human relationships replaced by fictitious digital links.
- Threats to privacy and security.
- Use of AI in autonomous weapons or disinformation campaigns.
Therefore, the Church emphasizes that AI is not a person: it has no soul, consciousness, or the capacity to love. It is merely a tool, powerful but always dependent on the purposes assigned to it by humans.
A call to responsibility
The Antiqua et nova (2025) document reminds us that all technological progress must contribute to human dignity and the common good. Responsibility lies not only with governments or businesses, but also with each of us, in how we use these tools in our daily lives.
Artificial intelligence can be an engine of progress, but it can never be a substitute for humankind. No machine can experience love, forgiveness, mercy, or faith. Only in God can perfect intelligence and true happiness be found.
Tools & Platforms
2025 PCB Market to Surpass $100B Driven by AI Servers and EVs

In the fast-evolving world of technology, 2025 is shaping up to be a pivotal year for breakthroughs in printed circuit boards, or PCBs, which form the backbone of everything from AI servers to automotive systems. Industry forecasts point to a global PCB market exploding past $100 billion, driven by surging demand for high-density interconnect (HDI) technology and innovative materials like low dielectric constant and low dissipation factor (Low Dk/Df) substrates that enhance signal integrity in high-speed applications.
This growth isn’t just about volume; it’s fueled by strategic shifts in manufacturing, where companies are investing heavily in automation and sustainable practices to meet regulatory pressures and supply chain disruptions. For insiders, the real story lies in how these advancements are reshaping sectors like electric vehicles, where PCBs must withstand extreme conditions while supporting advanced driver-assistance systems.
As we delve deeper into the PCB boom, experts highlight AI server boards as a key driver, with projections from sources like UGPCB indicating a 15-20% compound annual growth rate through the decade, propelled by data center expansions from tech giants like Nvidia and Amazon.
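For a back-of-the-envelope sense of what those growth rates imply (illustrative arithmetic only, assuming a nominal $100 billion base in 2025):

```python
# Illustrative compound-growth arithmetic, not a market forecast:
# project a $100B 2025 base forward five years at the cited CAGR bounds.

base_2025_usd_bn = 100
for cagr in (0.15, 0.20):
    size_2030 = base_2025_usd_bn * (1 + cagr) ** 5
    print(f"CAGR {cagr:.0%}: ~${size_2030:.0f}B by 2030")
# CAGR 15%: ~$201B by 2030
# CAGR 20%: ~$249B by 2030
```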
Beyond PCBs, broader technology trends for 2025 underscore the rise of artificial intelligence as a transformative force across industries. Gartner’s latest analysis identifies AI governance, agentic AI, and post-quantum cryptography as top strategic priorities, emphasizing the need for businesses to balance innovation with ethical oversight amid increasing regulatory scrutiny.
These trends extend to cybersecurity, where post-quantum solutions are gaining traction to counter threats from quantum computing, potentially rendering current encryption obsolete. For enterprise leaders, this means reallocating budgets toward resilient infrastructures, with investments in AI-driven threat detection systems expected to surge by 25% according to industry reports.
In a comprehensive overview shared via Medium, analyst Mughees Ahmad breaks down how trends like AI TRiSM (trust, risk, and security management) will redefine corporate strategies, urging firms to integrate these into their core operations for competitive edges in volatile markets.
Collaboration between tech firms and media is also amplifying these discussions, as seen in recent partnerships that blend data insights with journalistic depth. At the World Economic Forum in 2025, Tech Mahindra teamed up with Wall Street Journal Intelligence to unveil “The Tech Adoption Index,” a report that quantifies how enterprises are embracing emerging technologies, revealing adoption rates in AI and cloud computing hovering around 60% in leading sectors.
This index highlights disparities, with healthcare and finance outpacing manufacturing in tech integration, offering a roadmap for laggards. Insiders note that such collaborations are crucial for demystifying complex trends, providing actionable intelligence amid economic uncertainties.
Drawing from the Morningstar coverage of the launch, the report underscores that regions like the Middle East are becoming hubs for tech discourse, with Qatar set to host The Wall Street Journal’s Tech Live conference annually starting this year, attracting global innovators to explore these very themes.
Investment opportunities in 2025 are equally compelling, particularly in AI stocks and emerging markets, where resilient tech portfolios are projected to yield strong returns despite macroeconomic headwinds. Wall Street strategists from firms like Goldman Sachs and Morgan Stanley are bullish on AI-driven retail and consumer sectors, citing rebounding demand post-pandemic.
Meanwhile, high-yield bonds in tech infrastructure offer stability, as per JPMorgan analyses, while Bank of America flags emerging markets for their growth potential in digital transformation. For industry veterans, the key is diversification, blending tech equities with bonds to mitigate risks from geopolitical tensions.
According to insights compiled in WebProNews, these opportunities reflect a maturing market where AI not only drives innovation but also stabilizes investment strategies, with forecasts suggesting double-digit gains for well-positioned portfolios through 2025 and beyond.
Shifting focus to specific sectors, the beauty and retail industries are leveraging tech for growth, as evidenced by quarterly deep dives into companies like Estée Lauder and Victoria’s Secret. These firms are navigating consumer shifts through product innovation and digital channels, though margin pressures from tariffs loom large.
In parallel, advanced technology segments in manufacturing, such as those in Nordson Corporation, show robust expansion in medical and electronics, driven by portfolio optimizations. These examples illustrate how tech integration is bolstering resilience across diverse fields.
A detailed examination in TradingView News reveals that for Victoria’s Secret, Q2 2025 revenue beats signal a turnaround, with store traffic and e-commerce innovations countering external challenges, a pattern echoed in broader retail tech adoption trends.
Looking ahead, events like the WSJ Tech Live in Qatar promise to convene leaders for in-depth dialogues on these topics, fostering cross-border collaborations.