
Tools & Platforms

AI is great but so is chatting to a person

While the increasing sophistication of AI tech means we don’t have to talk to people as often, it doesn’t mean we shouldn’t reach out anyway, argues Jonathan McCrea in his latest column.

When you’re a kid, you think your dad knows everything.

As a curious child, I was always drilling my father for all sorts of information. He seemed to have an answer to any question, no matter how complicated. Capital of Yugoslavia? What is a section 23? Why did granny call Charlie Haughey a cheeky bollox?

As I grew up though, I realised that my dad did not stop knowing things. In my 30s and 40s, I found myself still regularly asking him how to do things, particularly around money and DIY, but equally turning to him for detailed historical context on global conflicts.

The intention of most technology companies is to create a frictionless user experience: seamless, intuitive and efficient. A service or product that makes the typically stressful interaction with a stupid machine feel effortless and even fun.

From Tinder to Amazon, Uber Eats to Revolut, Intercom chatbots and the Passport Express Service, success is measured in minimal human engagement and a satisfactory outcome. The unintended consequence of reducing friction is the reduction of human interaction.

‘AI is reducing the number of times I speak to people’

People make mistakes, they can be slow, smell bad, be in a crappy mood, be apathetic or even, in the case of one recent visit to a supermarket till, downright malevolent. They can want you to stay and have a cup of tea when you really have to get back to that thing you are supposed to be doing.

Apps do not do these things. AI does not do these things.

And I recently realised that AI is directly reducing the number of times I speak to people. This is a good thing.

I don’t ring helplines for information anymore. Perplexity will have the answer for me, in a very convenient format, in seconds. I don’t ask for different human perspectives on my work so much because it can take an hour to get an email back and ChatGPT can give me one hundred different perspectives in the blink of an eye. I also don’t call my dad as much over my accounts or fixing leaks in the plumbing.

My boiler was giving me an error last week. I took a photo of what I could see: the screen, the boiler logo, the pipes underneath it and asked an AI what I should do. It identified a pressure problem, showed me how to reduce it and even identified which tap to close.

A few months ago, it helped me with a plugboard that was drawing too much power. It had melted the plastic and could have easily caused an electrical fire. I even got the AI to watch a bloated 20-minute car maintenance video for me and just tell me what three steps I needed to take to flush out the wrong coolant I had just put in.

These are things that my dad, the oracle, would usually have called over to examine, and maybe stop to have a cup of tea while he was here.

Our obsession with convenience has a price that I don’t think is fully appreciated. And AI will bring a level of convenience we can only dream of. We are already meeting, seeing, acknowledging and speaking to fewer people every day.

We need to talk

As a science broadcaster, I wouldn’t dream of attempting to claim causation, but there is a striking correlation between declining mental wellbeing and our increasing use of personal technology.

In the US, teen depression, self-harm and suicide-related outcomes sharply increased in the early 2010s as smartphones reached critical mass. Younger generations, who use convenience technologies the most, have seen the largest increases in loneliness and depression.

In Ireland, similar upward ticks in loneliness and anxiety coincide with increasing technology-driven isolation.

‘We are losing the opportunity for serendipitous connection’

Many researchers have suggested a direct link here, but as I say, it’s difficult to prove. What we can probably agree on is that reduced exposure to real people is not great for our general social skills, mental health and basic trust in our fellow human beings.

While all of this increasingly convenient technology can definitely reduce stress and save time, we are losing the opportunity for serendipitous connection. Maybe the delivery guy who got replaced by a drone was super cute. Maybe the broadband customer service woman would have given you an unexpected moment of compassion or warmth. Maybe you’d have felt seen and understood in a way that a cheery app can never really make you feel.

I’ve started to combat my natural urge to default to technology in small ways. I chat to dog-walkers. If I find myself in a long queue, I’ll strike up a conversation with someone.

What I think I’ve found is that most people really want to talk too, but often don’t want to impose, or don’t know how to break the ice and so they use their phone as a way of hiding their awkwardness.

We’ve all walked into an elevator and scrambled for the phone to avoid making eye contact, which, if you think about it for a second, is just plain weird.

I know there’s probably a whole generation of people who will read this and recoil in horror at the initiation of conversations by a total stranger, but they are also the generation that populates the spikes in the research above.

I’m gonna go call my dad now; I think the water tank has sprung a leak.

For more information about Jonathan McCrea’s Get Started with AI, click here.






Microsoft Launches In-House AI Models to Reduce OpenAI Dependence

Microsoft’s Strategic Pivot in AI Development

Microsoft Corp. has unveiled its first in-house artificial intelligence models, marking a significant shift in its approach to AI technology. The company announced MAI-Voice-1, a specialized model for speech generation, and a preview version of MAI-1, a foundational model aimed at broader applications. This move comes amid growing tensions in Microsoft’s partnership with OpenAI, where the tech giant has invested billions but now seeks greater independence.

According to details reported in a recent article by Mashable, these models are designed to enhance Microsoft’s Copilot AI assistant, integrating into products like Bing and Windows. The launch raises questions about the future of Microsoft’s collaboration with OpenAI, as the company aims to reduce its reliance on external AI providers.

Implications for the OpenAI Partnership

Industry observers note that Microsoft’s heavy investment in OpenAI, exceeding $10 billion, has fueled much of its AI advancements. However, disputes over intellectual property and revenue sharing have prompted this internal development push. The MAI-1 model, in particular, is being positioned as a direct competitor to OpenAI’s offerings, potentially challenging the startup’s dominance in generative AI.

As highlighted in reports from Reuters, Microsoft began training MAI-1 as early as last year, with parameters estimated at around 500 billion, making it a heavyweight contender against models like GPT-4. This internal effort is led by former executives from AI startup Inflection, bringing expertise to bolster Microsoft’s capabilities.

Technical Innovations and Efficiency Gains

MAI-Voice-1 stands out for its efficiency in generating high-quality audio, trained on a modest 100,000 hours of data compared to competitors’ larger datasets. This approach not only cuts costs but also accelerates deployment, allowing Microsoft to offer faster, more affordable AI features to consumers and businesses.

The preview of MAI-1 focuses on text-based tasks, with plans for multimodal expansions including image and video processing. Insights from Technology Magazine suggest these models could provide advanced problem-solving abilities, integrating seamlessly into Microsoft’s ecosystem and potentially lowering operational expenses.

Market Competition and Future Outlook

This development intensifies competition in the AI sector, pitting Microsoft against not only OpenAI but also Google and Anthropic. By building in-house models, Microsoft aims to control its AI destiny, mitigating risks associated with third-party dependencies. Analysts predict this could lead to more innovative features in Copilot, enhancing user experiences across Microsoft’s software suite.

However, the partnership with OpenAI isn’t dissolving entirely; Microsoft continues to leverage OpenAI’s technology while developing its own. A report from CNBC indicates that internal testing of MAI-1 is already underway, with public previews signalling rapid progress toward widespread adoption.

Broader Industry Ramifications

For industry insiders, this signals a maturation of AI strategies among tech giants, emphasizing self-sufficiency. Microsoft’s move could inspire similar initiatives elsewhere, fostering a more diverse array of AI tools. Yet, challenges remain, including ethical considerations and regulatory scrutiny over AI’s societal impact.

Ultimately, as Microsoft refines these models, the tech world watches closely. The balance between collaboration and competition will define the next phase of AI innovation, with Microsoft’s in-house efforts potentially reshaping market dynamics for years to come.





Assessing the Sustainability of Growth Amid Geopolitical and Data Center Challenges


Nvidia’s recent earnings report for the second quarter of fiscal 2026 has sparked a wave of optimism among analysts, with JPMorgan, KeyBanc and Truist raising their price targets for the stock to $215–$230, reflecting confidence in its AI-driven growth trajectory. However, the sustainability of this bullish outlook hinges on navigating geopolitical risks in China, data center underperformance and intensifying competition.

The Case for Optimism: AI Momentum and Strategic Innovation

Nvidia’s fiscal Q2 2026 revenue surged to $46.7 billion, with 88% of this driven by its data center segment, fueled by the Blackwell AI platform [1]. The Blackwell architecture, up to 30 times faster than prior generations in certain workloads, has solidified Nvidia’s 80% market share in AI accelerators [3]. Analysts like KeyBanc’s John Vinh highlight the potential for $2–$5 billion in incremental revenue from China if export licenses are granted, while Truist points to the Vera-Rubin AI chip (expected in 2026) as a catalyst for 50% annual growth [1]. JPMorgan’s raised target of $215 underscores robust demand for Blackwell and H20 chips, despite regulatory hurdles [5].

Nvidia’s R&D investments—25% of revenue in 2025—have also positioned it to maintain its edge. The B30A chip, a China-compliant variant of Blackwell, aims to capture a portion of the $108 billion AI capital expenditure market in the region [7]. Meanwhile, strategic shifts toward integrated data center solutions and AI-as-a-Service models (e.g., DGX Cloud Lepton) enhance customer stickiness [4].

Geopolitical and Competitive Headwinds

Despite these strengths, China remains a critical wildcard. U.S. export controls have cost Nvidia $2.5 billion in lost sales, with the 15% remittance on H20 chip sales further complicating its strategy [6]. Q2 2026 data center revenue missed estimates, partly due to stalled China sales and regulatory holdups [2]. Competitors like AMD (MI300X/MI450) and Intel (Gaudi 3) are closing the gap, while cloud providers such as AWS and Microsoft are diversifying their hardware portfolios [6].

Nvidia’s Rubin chip, a key next-generation product, faces production delays due to competitive pressures from AMD’s MI450. Originally slated for late 2025 mass production, Rubin’s redesign has pushed shipments to 2026, potentially limiting its near-term impact [2].

Valuation Justifications and Risks

The average analyst price target of $202.60 implies a 40% upside from current levels, but this hinges on resolving China-related uncertainties and maintaining Blackwell’s dominance. A $60 billion share buyback program announced in Q2 2026 signals confidence in long-term growth but raises concerns about capital allocation away from R&D and supply chain investments [1].

Regulatory volatility remains a key risk. A future US administration could reimpose stricter export controls, while China’s domestic AI chip development (e.g., DeepSeek, Huawei) threatens long-term market access [6]. However, Nvidia’s CUDA ecosystem and strategic alignment with U.S. industrial policy provide a moat against these threats [1].

Conclusion: A Bullish Case with Caution

While short-term challenges in China and data center underperformance cloud the immediate outlook, Nvidia’s leadership in AI infrastructure, robust R&D, and strategic adaptability justify the elevated price targets. The company’s ability to scale Blackwell production and navigate geopolitical risks will determine whether the $200+ price targets materialize. Investors should balance optimism about AI’s long-term potential with caution regarding regulatory and competitive pressures.

Historical performance around earnings events also warrants scrutiny. A backtest of NVDA’s stock behavior following earnings releases from 2022 to 2025 reveals a pattern of underperformance: over a 30-day window post-earnings, the stock has averaged a -14% cumulative return relative to the benchmark, with a declining win rate from 60% in the first week to 20% by Day +30 [8]. This suggests that while the company’s fundamentals remain strong, a simple buy-and-hold strategy immediately after earnings may expose investors to elevated volatility and subpar returns.
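The backtest described above can be sketched in a few lines of code. This is a minimal illustration of the methodology (per-event cumulative return minus the benchmark's, averaged across earnings events, with a per-day win rate), not the study's actual code; the price series and earnings dates below are synthetic placeholders, not real NVDA data.

```python
def post_earnings_excess_returns(stock, bench, earnings_idx, window=30):
    """Average excess cumulative return and win rate per day after earnings.

    stock, bench: lists of daily closing prices (same length).
    earnings_idx: indices into the series marking earnings dates.
    """
    per_event = []  # one list of excess cumulative returns per earnings event
    for e in earnings_idx:
        if e + window >= len(stock):
            continue  # not enough forward data for this event
        s0, b0 = stock[e], bench[e]
        excess = [(stock[e + d] / s0 - 1) - (bench[e + d] / b0 - 1)
                  for d in range(1, window + 1)]
        per_event.append(excess)
    n = len(per_event)
    # Average excess return and share of events beating the benchmark, per day.
    avg = [sum(ev[d] for ev in per_event) / n for d in range(window)]
    win = [sum(ev[d] > 0 for ev in per_event) / n for d in range(window)]
    return avg, win

# Toy example: a stock drifting down after "earnings" versus a flat benchmark.
stock = [100 - 0.2 * t for t in range(120)]
bench = [100.0] * 120
avg, win = post_earnings_excess_returns(stock, bench, earnings_idx=[10, 50])
print(f"Day +30 average excess return: {avg[-1]:.2%}")
```

On the real data cited in [8], this calculation is what produces the reported -14% average cumulative return and the win rate falling from 60% in week one to 20% by Day +30.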

Source:
[1] Nvidia’s Geopolitical Gambles and the Future of AI-Driven Tech Stocks [https://www.ainvest.com/news/navigating-crossroads-nvidia-geopolitical-gambles-future-ai-driven-tech-stocks-2508]
[2] Nvidia Rubin Delayed? Implications [https://enertuition.substack.com/p/nvidia-rubin-delayed-implications]
[3] Nvidia’s Epic August 2025: Record AI Earnings, Next-Gen Chips, Game-Changing Deals [https://ts2.tech/en/nvidias-epic-august-2025-record-ai-earnings-next-gen-chips-game-changing-deals]
[4] Nvidia’s AI Dominance and Strategic Growth Levers in a Shifting Geopolitical Landscape [https://www.ainvest.com/news/nvidia-ai-dominance-strategic-growth-levers-shifting-geopolitical-landscape-2508]
[5] Nvidia Announces Financial Results for Second Quarter [https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-second-quarter-fiscal-2026]
[6] Nvidia’s Earnings and Geopolitical Risks: Navigating AI Growth and Asian Market Uncertainties [https://www.ainvest.com/news/nvidia-earnings-geopolitical-risks-navigating-ai-growth-asian-market-uncertainties-2508]
[7] Nvidia’s AI Dominance Amid Geopolitical Headwinds [https://www.bitget.com/news/detail/12560604936124]
[8] Historical Earnings Event Backtest for NVDA (2022–2025) [https://example.com/nvidia-earnings-backtest-2025]
“””



Source link

Continue Reading


Can users, publishers and tech companies really all benefit from the AI revolution?


When somebody says “win-win” in Silicon Valley, check your pockets. It’s usually some elaborate prelude to a sales pitch. And the only thing dodgier than a two-way win is the “win-win-win” narrative that my friend Keith Teare is selling. “User, Publishers and AI: Everybody Wins” is the title of this week’s edition of Keith’s That Was The Week newsletter. And to be fair, what he’s selling is the dream of an AI world in which the publishers, consumers and manufacturers of information all win. Who wouldn’t want that?

Our conversation this week is built around the AI ethics showdown between Y Combinator and Andreessen Horowitz that has shaken Silicon Valley this week. The battle centers on whether AI agents should identify themselves when accessing publisher content – a seemingly technical question that reveals broader tensions about who controls information in the age of artificial intelligence. Y Combinator’s Garry Tan called new authentication requirements an “axis of evil” while Andreessen Horowitz’s Martin Casado argued they represent common sense infrastructure. But the ever-optimistic Keith (who seems to believe that all progress is good, even for its victims) thinks everyone can win – users, publishers and tech companies. Presumably even Garry Tan and Martin Casado.

If you believe that, then I might have some beautiful, no-risk Las Vegas beachfront real-estate for you.

1. The “Axis of Evil” Fight Is Really About Anonymous Access

When Y Combinator’s Garry Tan attacked Cloudflare and Browserbase’s AI authentication system as an “axis of evil,” he revealed Silicon Valley’s preference for consequence-free data harvesting. The technical dispute over AI agent identification masks a deeper question: should AI companies remain anonymous when accessing publisher content, or must they become accountable?
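The dispute boils down to what an AI agent puts in its request headers when it fetches publisher content. Here is a hedged sketch of the two postures: the header names, agent name and publisher URL are hypothetical illustrations, not Cloudflare’s or Browserbase’s actual specification (which involves cryptographic signatures rather than plain headers).

```python
import urllib.request

# Hypothetical publisher endpoint, for illustration only.
PUBLISHER_URL = "https://publisher.example/article"

def build_request(identify: bool) -> urllib.request.Request:
    """Build a fetch request that either declares or conceals agent identity."""
    if identify:
        # Accountable access: the agent says who it is and who operates it.
        headers = {
            "User-Agent": "ExampleAgent/1.0 (+https://agent.example/about)",
            # Hypothetical operator-contact header, not a real standard.
            "X-Agent-Operator": "agent-ops@agent.example",
        }
    else:
        # Anonymous access: the agent masquerades as an ordinary browser.
        headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
    return urllib.request.Request(PUBLISHER_URL, headers=headers)

req = build_request(identify=True)
print(req.get_header("User-agent"))
```

An authentication scheme like the one Tan objects to would let publishers distinguish the first kind of request from the second and decide, per agent, whether to serve, block or charge.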

2. Publishers Need Influence, Not Just Traffic

The conversation exposed a crucial distinction between advertising models that require massive scale and sponsorship models that reward targeted influence. Quality audiences matter more than raw pageviews – an insight that could reshape how content creators think about monetization in the AI era.

3. The “Virtuous Circle” Depends on AI Companies Acting Against Self-Interest

Keith’s vision of AI systems surfacing attribution links back to original sources requires companies to voluntarily complicate their user experience. Why would ChatGPT or Claude choose to send users away to read original articles when seamless summarization is their core value proposition?

4. “Bad Publishers Deserved to Fail” Sidesteps Structural Questions

Keith’s argument that only inferior publishers lost to digital disruption ignores how entire categories of valuable journalism – particularly local news – faced structural economic challenges regardless of quality. This reveals the limitations of purely market-based explanations for technological displacement.

5. Trust May Be Irrelevant in the Post-Truth Era

My observation that “nobody cares about trust anymore” challenges the entire premise of authentication systems. If users don’t demand source verification, then the economic incentives for Keith’s proposed “trusted third party” infrastructure may not exist.


