
AI Research

Stargate Dials Back Near-Term Goal Amid Disagreement

The Stargate artificial intelligence (AI) infrastructure project reportedly got off to a slow start.

Six months after the project was announced, the newly formed company operating the effort has not made a deal to build a data center and has shifted its goal from investing $100 billion immediately to building one data center by the end of 2025, the Wall Street Journal (WSJ) reported Monday (July 21).

The slow start was caused in part by disagreements between Stargate’s two joint leaders — SoftBank and OpenAI — over where to build data centers, according to the report.

Asked by the WSJ about the report, the companies said in a joint statement that they were making progress in several states and moving quickly to deliver AI infrastructure.

In the meantime, OpenAI has struck two data center deals — one with Oracle and one with CoreWeave — that do not include SoftBank and that together will give OpenAI access to as much computing power as Stargate said in January that it would deliver, according to the report.

When Stargate was announced in January, it was described as an up-to-$500 billion project that aimed to build big AI-focused data centers in the U.S. The first 10 were to be constructed in Texas, with the total later to be expanded to 20.

It was reported in May that Oracle, one of the equity partners in Stargate, would buy $40 billion worth of Nvidia’s chips to power the first Stargate project, a new data center in Abilene, Texas, and that the facility was expected to be fully operational by the middle of 2026.

AI data centers are needed because traditional data centers and power grids are struggling to accommodate the computational power, data storage and energy required by AI, PYMNTS reported in January.

In a separate effort, Meta CEO Mark Zuckerberg said July 14 that his company will invest “hundreds of billions of dollars” into AI infrastructure for its superintelligence effort.

On July 9, when Nvidia became the world’s first $4 trillion public company, it was reported that the chip designer and manufacturer was enjoying a continuing boom in demand for AI technologies.




AI Research

Lessons from 60 Years in Journalism — A Postscript on Artificial Intelligence – TAPinto



AI Research

Building a responsible AI future in Saudi Arabia


As the world transitions from a digital age to the era of artificial intelligence (AI), key economies in the Middle East are accelerating their AI adoption plans as part of their broader economic diversification efforts. Among the GCC countries, Saudi Arabia has made bold moves to position itself as a global AI leader, writes Oliver Sykes, partner at PwC.

At the World Economic Forum (WEF) this year, Saudi Arabia reinforced its commitment to shaping the global AI discourse, highlighting its efforts to strengthen the digital economy and foster innovation.

According to a recently launched report highlighting Saudi Arabia’s advancements in deep technology, 50 percent of the kingdom’s deep tech startups are already focused on developing AI and the Internet of Things (IoT), and the nation aims for AI to contribute 12 percent of its gross domestic product by 2030. Last year, the kingdom ranked 14th globally, and first in the Arab world, in the Global AI Index for 2024.

Saudi Arabia’s AI strategy includes major investments, such as a $40 billion fund to boost AI as it continues to position the kingdom as a global AI hub, with opportunities for chip makers and large-scale data centers. The country is also forging global partnerships to enhance Arabic AI models.

A catalyst for economic growth

AI is expected to be a catalyst for economic growth across multiple sectors in Saudi Arabia. Some of the most promising applications include integrating the technology into healthcare for early disease diagnosis, predictive care and pandemic prevention. AI is also being used for ride-sharing and launching autonomous vehicles, as well as for personalized financial planning, fraud detection and anti-money laundering in finance.

In energy, AI optimizes usage through smart grids, real-time monitoring and renewable integration, and it is also being harnessed to drive sustainability efforts, including carbon footprint tracking, climate change mitigation, and optimizing resource allocation in agriculture and water management.

While AI’s potential to drive value across industries is undeniable, it is equally crucial for organizations to establish robust governance frameworks covering data privacy, data management and AI itself. Without appropriate governance, the very technologies that promise to enhance efficiency and decision-making can lead to critical pitfalls that undermine trust, compliance and ethical standards.

Since AI systems rely on vast amounts of personal and sensitive data, they pose significant data privacy, governance, and ethical risks if not properly managed. Unauthorized access, data breaches and lack of consent can lead to regulatory non-compliance, while poor data governance practices – including inaccurate data, unclear ownership, and bias – can compromise AI outcomes. Organizations must, therefore, prioritize data quality, integrity, and compliance to prevent biased or flawed AI outcomes.

As regulatory landscapes evolve, strong governance frameworks help businesses stay compliant while maintaining trust and transparency. Additionally, AI governance requires ethical guidelines and accountability measures to mitigate risks related to bias, decision-making, and societal impact.

This is especially critical as 90 percent of business leaders in the GCC expect AI to enhance business processes and workflows, while 81 percent anticipate its use in new product and service development over the next three years, as indicated in PwC’s latest CEO Survey. Therefore, it is all the more crucial for businesses to be prepared for these disruptive technologies with strong governance frameworks in place.


The need to ensure ethical and transparent AI

AI governance involves establishing ethical guidelines, ensuring transparency and managing risks associated with AI deployment. Eight key measures support effective AI governance:

  1. Ethical guidelines
    Developing ethical guidelines for AI development and deployment is essential to ensure fairness, transparency and accountability. An organization’s responsible AI procedures should be built on these guidelines.
  2. Risk management
    Organizations can detect and reduce possible ethical, reputational and technological risks related to AI by putting strong risk assessment procedures in place. This proactive strategy is essential for protecting the integrity and interests of the company.
  3. Regulatory compliance
    Keeping up with industry regulations is crucial to ensure AI systems comply with applicable laws and standards. Regularly reviewing compliance requirements helps organizations avoid legal issues and maintain their reputation.
  4. Cross-functional collaboration
    Including stakeholders from across departments, such as legal, IT and HR, in governance conversations ensures all viewpoints are considered. This collaborative approach fosters a thorough understanding of the implications of AI technologies.
  5. Model validation and monitoring
    Organizations can implement validation and monitoring processes for AI models by establishing criteria for assessing model performance, including accuracy, fairness, and compliance with ethical standards.
  6. Continuous monitoring
    Setting up methods to monitor AI performance is essential to ensure these systems function as planned and remain consistent with organizational values. Continuous assessment helps organizations spot discrepancies early.
  7. Feedback loops
    Establishing avenues for user and stakeholder feedback is critical to the continuous improvement of AI systems. Organizations can modify their AI solutions to better meet consumer expectations and needs by actively soliciting feedback.
  8. Certification
    Organizations can look to internationally recognized standards, such as ISO 42001 for AI management systems, to ensure AI systems are deployed in a responsible and ethical manner.

Looking ahead: A call for AI governance

Despite the widespread use of AI technologies, the challenge now is not just about adopting AI but about leading AI responsibly. As AI adoption accelerates, businesses must be future-ready, embedding trust and ethical considerations into their AI strategies. Established in 2019, the Saudi Data & AI Authority (SDAIA) has played a pivotal role in shaping AI regulations and ethical frameworks in Saudi Arabia.

Recently, the country ranked third globally, behind the US and the UK, in the Organization for Economic Co-operation and Development’s (OECD) AI Policy Observatory, reflecting its strong commitment to AI regulation and ethical governance.

AI’s success hinges on trust. Without governance, transparency and accountability, AI’s potential could be overshadowed by risks that undermine its credibility. With its ambitious investments, regulatory foresight and a commitment to ethical AI, Saudi Arabia has the potential to set global benchmarks for AI adoption.





AI Research

Tories pledge to get ‘all our oil and gas out of the North Sea’


Conservative leader Kemi Badenoch has said her party will remove all net zero requirements on oil and gas companies drilling in the North Sea if elected.

Badenoch is to formally announce the plan to focus solely on “maximising extraction” and to get “all our oil and gas out of the North Sea” in a speech in Aberdeen on Tuesday.

Reform UK has said it wants more fossil fuels extracted from the North Sea.

The Labour government has committed to banning new exploration licences. A spokesperson said a “fair and orderly transition” away from oil and gas would “drive growth”.

Exploring new fields would “not take a penny off bills” or improve energy security and would “only accelerate the worsening climate crisis”, the government spokesperson warned.

Badenoch signalled a significant change in Conservative climate policy when she announced earlier this year that reaching net zero by 2050 would be “impossible”.

Successive UK governments have pledged to reach the target by 2050 and it was written into law by Theresa May in 2019. It means the UK must cut carbon emissions until it removes as much as it produces, in line with the 2015 Paris Climate Agreement.

Now Badenoch has said that the requirements to work towards net zero are a burden on North Sea oil and gas producers that is damaging the economy, and that she would remove them.

The Tory leader said a Conservative government would scrap the need to reduce emissions or to work on technologies such as carbon storage.

Badenoch said it was “absurd” the UK was leaving “vital resources untapped” while “neighbours like Norway extracted them from the same sea bed”.

In 2023, then Prime Minister Rishi Sunak granted 100 new licences to drill in the North Sea which he said at the time was “entirely consistent” with net zero commitments.

Reform UK has said it will abolish the push for net zero if elected.

The current government said it had made the “biggest ever investment in offshore wind and three first of a kind carbon capture and storage clusters”.

Carbon capture and storage facilities aim to prevent carbon dioxide (CO2) produced from industrial processes and power stations from being released into the atmosphere.

Most of the CO2 produced is captured, transported and then stored deep underground.

It is seen by the likes of the International Energy Agency and the Climate Change Committee as a key element in meeting targets to cut the greenhouse gases driving dangerous climate change.


