Business

Plan for electricity costs based on region dropped


Plans to set people’s electricity bills based on where they live have been dropped by the government.

Energy Secretary Ed Miliband said in April the government was considering zonal pricing, but on Thursday it said it would reform the current national pricing system instead.

Supporters of zonal pricing say it could have lowered bills in areas that generate more energy, such as Scotland, though some energy firms warned it could have scared off investment.

Energy UK, which represents the industry, welcomed the government’s decision while the Conservatives called Miliband’s promise of lower electricity bills “a fantasy”.

The current electricity pricing system means everyone in the country pays the same flat rate at all times regardless of where they live, but critics argue the price is calculated based on the most expensive electricity generated in the country at any given moment.

Greg Jackson, founder and chief executive of Octopus Energy, told the BBC that zonal pricing works in countries such as Australia, Sweden, and Italy and calculates it could “reduce bills by around £100 a year for most households”.

Supporters also say zonal pricing could have encouraged energy hungry industries to locate closer to the sources of energy, such as Scotland where supply exceeds demand, and away from densely populated cities.

However, energy provider SSE said zonal pricing “would have added risk” to the system, arguing that national pricing creates “a stable and investable environment”.

Firms had warned the government that a major overhaul of electricity pricing could have deterred bidders from the upcoming auction of renewable energy projects later this year.

SSE welcomed the “much-needed policy clarity” from the government’s announcement, but Kate Mulvany, principal consultant at Cornwall Insight, said “clarity is not the same as resolution”.

“This move will not solve the deep-rooted issues in Great Britain’s electricity market, and it must not be used as an excuse to continue business as usual,” she added.

The decision to stick with national pricing comes after a three-year consultation. In April, Miliband told the BBC that pricing reform was “an incredibly complex question”.

“There are two options, zonal pricing and reformed national pricing,” he said at the time.

“Whatever route we go down my bottom line is bills have got to fall, and they should fall throughout the country.”




GPTBots.ai’s Business AI Agent Solutions at The MarTech Summit Hong Kong



As enterprises worldwide race to adopt AI, GPTBots.ai made its mark at The MarTech Summit Hong Kong, Asia’s premier marketing technology conference attended by world-renowned brands such as JPMorgan, Yahoo, Nike, and DBS, alongside leading Hong Kong enterprises including Cathay Pacific, Hong Kong Disneyland, and The Hong Kong Jockey Club.

With 85% of enterprises prioritizing AI adoption in 2024 (Gartner), yet struggling with implementation gaps, GPTBots.ai demonstrated how its no-code AI Agent platform turns complex AI concepts into deployable solutions—without coding or data science teams.

Spotlight: Real-World AI in Action

At the summit, GPTBots.ai engaged with forward-thinking organizations, including:

A Top Hong Kong University: Their admissions team explored AI-powered chatbots to streamline student inquiries and application processes, aiming to:

  • Automate 80% of FAQs (e.g., program requirements, deadlines).
  • Guide applicants through form-filling with smart error detection.
  • Free staff to focus on in-depth student support.

A Leading Hong Kong Airline: Their tech team discussed internal efficiency AI Agents for:

  • AI search to make enterprise knowledge instantly accessible and empower every role.
  • Reducing IT helpdesk tickets by 50% via self-service troubleshooting.

Why GPTBots.ai Stood Out

  • Enterprise-Ready: Built to adapt to your business, no matter the size or complexity.
  • Proven at Scale: Powers AI Agents for financial services, healthcare, and retail giants.
  • End-to-End Capabilities: From strategy to deployment, we manage every step of your AI journey.

“The gap isn’t AI potential—it’s practical adoption,” said Tanya Quan, Marketing Director at GPTBots.ai. “We’re helping enterprises skip the lab and go straight to ROI.”


Misinformation was a problem during the Texas floods. AI chatbots weren’t always helping


When deadly flash floods hit central Texas last week, people on social media site X turned to artificial intelligence chatbot Grok for answers about whom to blame.

Grok confidently pointed to President Trump.

“Yes. Trump’s 2025 cuts slashed NOAA/NWS funding by ~30% and staff by 17%, impairing forecast accuracy. Warnings underestimated rainfall, contributing to inadequate alerts for the flash floods that killed 24 and left ~24 Camp Mystic girls missing, presumed dead in ongoing searches,” Grok said in a post on Saturday before the death toll rose to at least 120.

Facing backlash from X users that it jumped to conclusions and was “woke,” the chatbot then backtracked.

“That screenshot is a fabrication—I never posted it. Actual facts: Trump’s 2025 NOAA cuts reduced funding ~30% and staff 17%, raising concerns (Axios, Fox). Texas floods killed 43+, with warnings issued hours ahead (CNN), but rainfall underestimated. No evidence directly ties cuts to deaths yet; investigation ongoing. Truth matters,” Grok said in another post.

The contradictory remarks show how AI chatbots can sometimes offer straightforward but inaccurate answers, adding confusion to online chatter already filled with falsehoods and conspiracy theories.

Later in the week, Grok had more problems. The chatbot posted antisemitic remarks and praised Adolf Hitler, prompting xAI to remove the offensive posts. Company owner Elon Musk said on X that the chatbot was “too eager to please and be manipulated,” an issue that would be addressed.

Grok isn’t the only chatbot that has made inappropriate and inaccurate statements. Last year, Google’s chatbot Gemini created images showing people of color in German military uniforms from World War II, a historical inaccuracy. The search giant paused Gemini’s ability to generate images of people, noting the feature had produced some “inaccuracies.” OpenAI’s ChatGPT has also generated fake court cases, resulting in lawyers being fined.

The trouble chatbots sometimes have with the truth is a growing concern as more people are using them to find information, ask questions about current events and help debunk misinformation. Roughly 7% of Americans use AI chatbots and interfaces for news each week. That number is higher — around 15% — for people under 25 years old, according to a June report from the Reuters Institute. Grok is available on a mobile app but people can also ask the AI chatbot questions on social media site X, formerly Twitter.

As the popularity of these AI-powered tools increases, misinformation experts say people should be wary about what chatbots say.

“It’s not an arbiter of truth. It’s just a prediction algorithm. For some things like this question about who’s to blame for Texas floods, that’s a complex question and there’s a lot of subjective judgment,” said Darren Linvill, a professor and co-director of the Watt Family Innovation Center Media Forensics Hub at Clemson University.

Republicans and Democrats have debated whether job cuts in the federal government contributed to the tragedy.

Chatbots retrieve information available online and give answers even if they aren’t correct, he said. If the data they’re trained on is incomplete or biased, an AI model can provide responses that make no sense or are false, known as “hallucinations.”

NewsGuard, which conducts a monthly audit of 11 generative AI tools, found that 40% of the chatbots’ responses in June included false information or a non-response, some in connection with breaking news such as the Israel-Iran war and the shooting of two lawmakers in Minnesota.

“AI systems can become unintentional amplifiers of false information when reliable data is drowned out by repetition and virality, especially during fast-moving events when false claims spread widely,” the report said.

During the immigration sweeps conducted by the U.S. Immigration and Customs Enforcement in Los Angeles last month, Grok incorrectly fact-checked posts.

After California Gov. Gavin Newsom, politicians and others shared a photo of National Guard members sleeping on the floor of a federal building in Los Angeles, Grok falsely said the images were from Afghanistan in 2021.

The phrasing or timing of a question might yield different answers from various chatbots.

When Grok’s biggest competitor, ChatGPT, was asked on Wednesday a yes-or-no question about whether Trump’s staffing cuts led to the deaths in the Texas floods, the AI chatbot had a different answer. “no — that claim doesn’t hold up under scrutiny,” ChatGPT responded, citing posts from PolitiFact and the Associated Press.

While all types of AI can hallucinate, some misinformation experts said they are more concerned about Grok, a chatbot created by Musk’s AI company xAI. The chatbot is available on X, where people ask questions about breaking news events.

“Grok is the most disturbing one to me, because so much of its knowledge base was built on tweets,” said Alex Mahadevan, director of MediaWise, Poynter’s digital media literacy project. “And it is controlled and admittedly manipulated by someone who, in the past, has spread misinformation and conspiracy theories.”

In May, Grok started repeating claims of “white genocide” in South Africa, a conspiracy theory that Musk and Trump have amplified. The AI company behind Grok then posted that an “unauthorized modification” was made to the chatbot that directed it to provide a specific response on a political topic.

xAI, which also owns X, didn’t respond to a request for comment. The company released a new version of Grok this week, which Musk said will also be integrated into Tesla vehicles.

Chatbots are usually correct when they fact-check. Grok has debunked false claims about the floods, including a conspiracy theory that cloud seeding, a process that involves introducing particles into clouds to increase precipitation, by El Segundo-based company Rainmaker Technology Corp. caused the disaster.

Experts say AI chatbots also have the potential to help reduce people’s belief in conspiracy theories, but they might also reinforce what people want to hear.

While people want to save time by reading AI-generated summaries, misinformation experts said they should ask chatbots to cite their sources and click on the links provided to verify the accuracy of the responses.

And it’s important for people to not treat chatbots “as some sort of God in the machine, to understand that it’s just a technology like any other,” Linvill said.

“After that, it’s about teaching the next generation a whole new set of media literacy skills.”




Why AI alone can’t guarantee business success, expert cautions


As companies around the world race to adopt artificial intelligence (AI), strategy expert Shotunde Taiwo urges business leaders to look beyond the hype and focus on aligning technology with clear strategic goals.

Taiwo, a finance and strategy professional, cautions that while AI offers transformative potential, it is not a guaranteed path to success. Without a coherent strategy, organisations risk misdirecting resources, entrenching inefficiencies, and failing to deliver meaningful value from their AI investments.

“AI cannot substitute for strategic clarity,” she explains, stressing the importance of purposeful direction before deploying advanced digital tools. Business leaders, she says, must first define their objectives; only then can AI act as an effective enabler rather than an expensive distraction.

Taiwo stated that many organisations are investing heavily in AI labs, data infrastructure, and talent acquisition without clearly defined business outcomes. This approach, she notes, risks undermining the very efficiencies these technologies are meant to create.

For example, a retail business lacking a distinctive value proposition cannot expect a recommendation engine to deliver meaningful differentiation. Similarly, manufacturers without well-structured pricing strategies will find limited benefit in predictive analytics. “AI amplifies what’s already there,” she adds. “It rewards businesses with strong foundations and exposes those without.”

According to Taiwo, the true value of AI emerges when it is guided by intelligent, strategic intent. High-performing organisations use AI to solve well-defined problems aligned with commercial goals, often framed by business analysts or strategic leaders who understand both operational realities and broader business priorities.

She cites Amazon’s recommendation engine and UPS’s route optimisation algorithms as models of effective AI deployment. In both cases, technology served a clear purpose: boosting customer retention and streamlining logistics, respectively. When guided by strategy, AI becomes a force multiplier, enhancing forecasting, enabling automation, and improving personalisation where workflows are already well-defined.


On the other hand, even the most advanced AI systems falter in the absence of sound strategy. Common pitfalls include deploying machine learning models without a business case, focusing on tools rather than problems, collecting data without a clear use, and optimising narrow metrics at the expense of enterprise-wide goals. These missteps often result in underwhelming pilots and disillusioned stakeholders, issues strategic professionals are well-equipped to navigate and avoid.

In this sense, AI adoption can serve as a strategic diagnostic. Taiwo suggests that when business leaders struggle to define impactful AI use cases, it often reflects deeper ambiguity in their organisational direction. Key questions, such as where value is created, who the primary customer is, or which decisions would benefit most from improved speed or accuracy, are not technical, but fundamentally strategic.

AI, she says, acts as a mirror, revealing strengths and weaknesses in how a business is positioned, differentiated, and aligned across functions. Strategic leaders and business analysts are uniquely positioned to interpret these insights, inform course corrections, and guide effective technology investments.

Looking ahead, Taiwo argues that strategy in the AI era must be data-literate, agile, ethically grounded, and above all, human-centred. Leaders must understand what data they have, and how it can be harnessed, without needing to become technologists themselves.

Organisations must be nimble enough to act on AI-driven insights, whether through supply chain reconfiguration or dynamic pricing. Ethics, too, are critical, especially as AI increasingly impacts areas such as hiring, lending, and content moderation. “AI is not a replacement for strategy – it is a reflection of it,” she said.

In organisations with clarity and discipline, AI can unlock significant value. In those without, it risks adding cost and complexity. The responsibility for today’s leaders is to ensure that technology serves the business, not the other way around.


