
Utah’s strategic investments in AI infrastructure



This story appears in the July 2025 issue of Utah Business.

Artificial intelligence has rapidly gone from niche technology to mainstream necessity. The explosive popularity of tools like ChatGPT and massive investments from companies such as Amazon, Google and Microsoft have made AI ubiquitous. But beneath the surface of AI’s user-friendly interfaces and impressive capabilities lies an urgent need for scalable infrastructure to support its growth.

AI’s infrastructure demands are massive due to the immense computing power required to process complex algorithms and handle vast data sets. Troy Rydman, who currently serves as Chief Information Officer and Chief Information Security Officer at Utah-based Packsize, is deeply familiar with the infrastructure needs of the industry. Before his role at Packsize, Rydman led strategic security initiatives at Amazon Web Services (AWS), helping some of AWS’s largest clients safely integrate advanced technologies, including cloud computing and AI.

“AI’s growth doesn’t just require better software,” Rydman says. “It needs energy-efficient processors, stable power grids and secure, scalable data centers to handle the load.”

Utah is already making substantial investments to meet this demand. Novva Data Centers, a Utah-based technology company, recently secured a $2 billion investment to complete its massive “supercluster” campus in West Jordan. Similarly, Meta continues to expand its data center operations in Eagle Mountain. To ensure reliable, high-speed internet, Comcast is also investing $138 million into fiber expansions across the state.

Rydman points out that Utah’s existing infrastructure may not yet be up to this massive task.

“A lot of these changes are going to require us to push a lot more power to critical infrastructure and data centers,” he says. “I don’t think our power grids are up to those specifications.”

All of this new infrastructure requires enormous amounts of energy. For example, one data center campus in Eagle Mountain alone would need 1.4 gigawatts, more than Wyoming’s entire current usage. Recognizing this urgency, Utah recently introduced Operation Gigawatt, an ambitious initiative aiming to double statewide energy production over the next decade with an emphasis on clean energy sources such as geothermal and next-generation nuclear.

Across the country, major tech corporations like Amazon and Microsoft are directly investing in generating power themselves. Amazon is allocating over $500 million to develop small modular nuclear reactors in Washington, Virginia and Pennsylvania, aiming to supply clean energy to its data centers. Similarly, Microsoft has entered a 20-year agreement to purchase power from a facility at Three Mile Island to ensure a steady energy supply for its AI-driven operations.

Still, Utah’s infrastructure expansion brings substantial economic promise, and Rydman sees unique opportunities for local entrepreneurs, especially in niche AI markets.

“There might be an opportunity to create pre-computed models for smaller use-cases, saving on time and energy costs,” he says. “We seem to have the right skill sets coming out of school, and Utah’s entrepreneurial vibe is perfect for this.”

Ultimately, Rydman believes generative AI will become as routine as streaming services. As Utah positions itself for AI’s accelerating future, the state’s proactive approach could become a blueprint for balanced growth, fostering innovation while safeguarding essential resources and infrastructure.




New AI Privacy Guidance Makes Compliance Easier for Businesses



The Office of the Australian Information Commissioner (OAIC) has just published two new guides that explain how Australia’s existing privacy laws apply to AI, and what businesses need to do to stay on the right side of them.

Let’s break it down in plain English so it’s easy to understand—even if you’re still getting your head around AI!

What Was the Old Rule?

Before now, businesses have faced uncertainty about how privacy laws apply to AI tools—especially commercially available generative AI products that use personal information to train their models.

This created confusion about what steps to take to comply and how to select AI products that respect privacy.

There wasn’t clear guidance from regulators on how to balance innovation with privacy risks. Many organisations were left guessing if their AI usage was lawful or exposing them to privacy breaches.

What’s Changed?

The OAIC has stepped in with two new guides:

  • Guide for Businesses Using AI Products: This helps businesses understand their privacy obligations when using AI tools, and offers practical tips on choosing AI products that meet privacy standards.
  • Guide for AI Developers: This focuses on developers using personal information to train generative AI models, clarifying how privacy laws apply in that context.

These guides clearly articulate the OAIC’s expectations and outline what good privacy governance looks like when it comes to AI.

What Does This Mean for Your Business?

The key takeaway is that AI products shouldn’t be used just because they’re available.

Businesses must:

  • Take a cautious approach, carefully assessing privacy risks
  • Ensure robust privacy safeguards are in place
  • Be transparent with customers about how their personal information is used in AI
  • Verify that any AI-generated outputs comply with privacy laws

If you’re planning to use AI or already do, these guides give you a clear path to follow—and the OAIC is serious about enforcing compliance.

What Should You Do Now?

Here’s a quick checklist to help you stay compliant:

  • Review your current or planned use of AI tools. Are you aware of what personal information they collect or process?
  • Read the OAIC’s new guides to understand your obligations and best practices.
  • Work with your legal or privacy team to put privacy governance measures in place—like risk assessments and data minimisation.
  • Train your staff on privacy risks related to AI and how to handle data responsibly.
  • Stay informed about upcoming privacy reforms, including potential new obligations on fair and reasonable use of personal information.

Key Takeaways

  • Existing privacy laws apply fully to AI—there’s no special exemption just because it’s a new technology.
  • The OAIC’s new guides clarify how those laws work with AI tools and development.
  • Businesses must assess privacy risks and build safeguards before using AI.
  • Transparency and accountability are essential to build trust and avoid penalties.

Frequently Asked Questions (FAQ)

1. Do privacy laws apply to all AI tools?
Yes. Australian privacy laws apply to any AI tool that collects, uses, or shares personal information. There are no special exceptions just because it’s AI.

2. What are the main privacy risks with AI?
Risks include accidental data leaks, using personal info without permission, AI generating incorrect or misleading results, and not being clear with customers about how their data is used.

3. How can my business comply with the new guidance?
Start by reading the OAIC’s guides, do a privacy risk check on your AI tools, protect personal data with strong security, train your staff on privacy best practices, and be transparent with your customers about AI use.

4. What happens if a business breaks the privacy rules?
The OAIC can investigate and take enforcement action, including fines. Breaking privacy rules can also harm your reputation and customer trust.




The Caribbean island making millions from the AI boom



Jacob Evans, BBC World Service

Anguilla is a British Overseas Territory renowned for its pristine beaches (Getty Images)

Back in the 1980s, when the internet was still in its infancy, countries were handed their own unique website addresses for this nascent online world, such as .us for the US or .uk for the UK.

Eventually, almost every country and territory had a domain based on either its English or own language name. This included the small Caribbean island of Anguilla, which landed the address .ai.

Unbeknownst to Anguilla at the time, this would turn out to be a jackpot.

With the continuing boom in artificial intelligence (AI), more and more companies and individuals are paying Anguilla, a British Overseas Territory, to register new websites with the .ai tag.

Such as US tech boss Dharmesh Shah, who earlier this year spent a reported $700,000 (£519,000) on the address you.ai.

Speaking to the BBC, Mr Shah says he purchased it because he had “an idea for an AI product that would allow people to create digital versions of themselves that could do specific tasks on their behalf”.

The number of .ai websites has increased more than 10-fold in the past five years, and has doubled in the past 12 months alone, according to a website that tracks domain name registrations.

The challenge for Anguilla, which has a population of just 16,000 people, is how to harness this lucrative bit of luck and turn it into a long-term and sustainable source of income.

Similar to other small Caribbean islands, Anguilla’s economy is built on a bedrock of tourism. Recently, it’s been attracting visitors in the luxury travel market, particularly from the US.

Anguilla’s statistics department says there was a record number of visitors to the island last year, with 111,639 people entering its shores.

Yet Anguilla’s tourism sector is vulnerable to damage from hurricanes every autumn. Situated in the northeast of the Caribbean island arc, Anguilla lies squarely within the North Atlantic hurricane belt.

So gaining an increasing income from selling website addresses is playing an important role in diversifying the island’s economy, and making it more resilient to the financial damage that storms may bring. This is something that the International Monetary Fund (IMF) noted in a recent report on Anguilla.

Dharmesh Shah is said to have spent $700,000 on the domain you.ai (HubSpot)

In its draft 2025 budget document, the Anguillian government says that in 2024 it earned 105.5m East Caribbean dollars ($39m; £29m) from selling domain names. That was almost a quarter (23%) of its total revenues last year. Tourism accounts for some 37%, according to the IMF.

The Anguillian government expects its .ai revenues to increase further to 132m East Caribbean dollars this year, and to 138m in 2026. It comes as more than 850,000 .ai domains are now in existence, up from fewer than 50,000 in 2020.
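Those budget figures are straightforward to sanity-check. The East Caribbean dollar has long been pegged at EC$2.70 to US$1, so a rough back-of-the-envelope calculation, a sketch using only the numbers quoted above rather than official budget data, looks like this:

```python
# A rough check of the budget figures quoted above (a sketch, not official data).
# Assumption: the East Caribbean dollar's long-standing peg of EC$2.70 per US$1.

EC_PER_USD = 2.70

domain_revenue_ec = 105.5e6   # 2024 .ai revenue per the draft budget (EC$)
share_of_total = 0.23         # "almost a quarter (23%)" of government revenue

domain_revenue_usd = domain_revenue_ec / EC_PER_USD            # about US$39m, as reported
implied_total_revenue_ec = domain_revenue_ec / share_of_total  # about EC$459m in total revenue

print(f".ai revenue: about US${domain_revenue_usd / 1e6:.0f}m")
print(f"Implied total government revenue: about EC${implied_total_revenue_ec / 1e6:.0f}m")
```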

As a British Overseas Territory, Anguilla is under the sovereignty of the UK, but with a high level of internal self-governance.

The UK has significant influence on the island’s defence and security, and has provided financial assistance during times of crisis. After Hurricane Irma severely damaged the island in 2017, the UK gave £60m to Anguilla over five years to help meet the repair bill.

The UK’s Foreign, Commonwealth and Development Office tells the BBC it welcomes Anguilla’s efforts “to find innovative ways to deliver economic growth” as it helps “contribute to Anguilla’s financial self-sufficiency”.

A map showing Anguilla's location in the Caribbean

To manage its burgeoning domain name income, in October 2024 Anguilla signed a five-year deal with a US tech firm called Identity Digital, which specialises in internet domain name registries.

At the start of this year, Identity Digital announced that it had moved where all the .ai domains are hosted, from servers in Anguilla, to its own global server network. This is to prevent any disruption from future hurricanes, or any other risks to the island’s infrastructure, such as power cuts.

The exact cost of .ai addresses isn’t publicly disclosed, but registration prices are said to start from roughly $150 to $200, with renewal fees of around the same amount every two years.

At the same time, more in-demand domain names are auctioned off, with some fetching hundreds of thousands of US dollars. The owners of these then have to pay the same small renewal fees as everyone else.

In all cases, the government of Anguilla gets the sales revenue, with Identity Digital getting a cut said to be around 10%. However, they appear to be sensitive about the topic, as both declined to be interviewed for this article.
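Taken at face value, those rough numbers imply a simple per-domain split. Here is an illustrative Python sketch; the $180 fee is a hypothetical figure within the reported $150 to $200 range, and the 10 percent registry share is only “said to be” the arrangement, so none of this is official:

```python
# Illustrative per-domain economics using the rough figures quoted above.
# The $180 fee is hypothetical (within the reported $150-$200 range) and the
# 10% registry share is only "said to be" the arrangement, not a published rate.

fee_per_two_years = 180.0   # hypothetical registration/renewal fee (US$)
registry_cut = 0.10         # approximate share going to Identity Digital
years = 10                  # consider a decade of two-yearly payments

payments = years // 2
gross = fee_per_two_years * payments
to_registry = gross * registry_cut
to_anguilla = gross - to_registry

print(f"Over {years} years: ${gross:.0f} gross, "
      f"about ${to_registry:.0f} to the registry operator and ${to_anguilla:.0f} to Anguilla")
```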

Currently the most expensive .ai domain name purchase is Mr Shah’s you.ai.

A self-confessed AI enthusiast and co-founder of US software company HubSpot, Mr Shah has several other .ai domain addresses to his name, but the flagship you.ai is not yet operational as he’s been busy with other projects.

Mr Shah says he buys domain names for himself, but will occasionally look to sell “if I don’t have immediate plans for it, and there’s another entrepreneur that wants to do something with the name”.

Mr Shah believes that another person or company will soon set a new record for the highest price of an .ai domain purchase, such is the continuing excitement around AI.

But he adds: “Having said that, I still think over the long-term, .com domains will maintain their value better and for longer.”

In recent weeks, .ai auctions have seen major six-figure sales. In July, cloud.ai sold for a reported $600,000 and law.ai sold for $350,000 earlier this month.

The Caribbean was hit by three hurricanes at the same time in September 2017 – Katia, Irma and Jose (Getty Images)

However, Anguilla’s position is not without precedent. The similarly tiny Pacific island nation of Tuvalu signed an exclusive deal in 1998 to license its .tv domain name.

Reports say this granted exclusive rights to the US domain name registry firm VeriSign, in exchange for $2m a year, which later rose to $5m.

A decade later, with the internet expanding exponentially, Tuvalu’s finance minister, Lotoala Metia, said VeriSign paid “peanuts” for the right to run the domain name. The country signed a new deal with a different domain provider, GoDaddy, in 2021.

Anguilla is operating in a different fashion, having handed over management of the domain name in a revenue-sharing model, not a fixed payment.

Cashing in on this new line of income sustainably has been a major goal for the island. It’s hoped the growing revenue will allow a new airport to be built to facilitate tourism growth, as well as fund improvements to public infrastructure and access to health care.

As the number of registered .ai domains hurtles toward the million mark, Anguillians will hope this money is managed safely and invested in their future.




Why our business is going AI-in-the-loop instead of human-in-the-loop



True story: I had to threaten Replit AI’s brain that I would report its clever but dumb suggestions to the AI police for lying.

I also told ChatGPT’s image creation department how deeply disappointed I was that it could not, after 24 hours of iterations, render the same high-quality image twice without changing an item in the image or misspelling a word. All learnings, and part of the journey.

We need to remain flexible and open to new tools and approaches, and simultaneously be laser focused. It’s a contradiction, but once you start down this road, you will understand. Experimentation is a must. But it’s also important to ignore the noise and constant hype and CAPS.

How our business’ tech stack evolves

A few years ago, we started with ChatGPT and a few spreadsheets. Today, our technology arsenal spans fifteen AI platforms, from Claude and Perplexity to specialised tools like RollHQ for project management and Synthesia for AI video materials. Yet the most important lesson we’ve learned isn’t about the technology itself. It’s about the critical space between human judgment and machine capability.

The data tells a compelling story about where business stands today: McKinsey reports that 72 percent of organisations have adopted AI for at least one business function, yet only one percent believe they’ve reached maturity in their implementation. Meanwhile, 90 percent of professionals using AI report working faster, with 80 percent saying it improves their work quality.

This gap between widespread adoption and true excellence defines the challenge facing every service organisation today, including our own.

Our journey began like many others, experimenting with generative AI for document drafting and research. We quickly discovered that raw output quality was low and that simply adding tools wasn’t enough. What mattered was creating a framework that put human expertise at the centre while leveraging AI’s processing power. This led us to develop what we call our “human creating the loop” approach, an evolution beyond the traditional human-in-the-loop model. It has become more about AI-in-the-loop for us than the other way round.

The distinction matters.

Human-in-the-loop suggests people checking machine outputs. Human creating the loop means professionals actively designing how AI integrates into workflows, setting boundaries, and maintaining creative control. Every client deliverable, every strategic recommendation, every customer interaction flows through experienced consultants who understand context, nuance, and the subtleties that define quality service delivery.

Our evolving tech stack

Our technology portfolio has grown strategically, with each tool selected for specific capabilities.

Each undergoes regular evaluation against key metrics, with fact-checking accuracy being paramount. We’ve found that combining multiple tools for fact checking and verification, especially Perplexity’s cited sources with Claude’s analytical capabilities, dramatically improves reliability.
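As a rough illustration of what that kind of cross-checking loop can look like, here is a minimal Python sketch. The helpers fetch_cited_sources and assess_claim are hypothetical stand-ins for calls to a citation-backed search tool (such as Perplexity) and an analytical model (such as Claude); this is not Alexander PR’s actual workflow, which has not been published.

```python
# A hypothetical sketch of a two-tool fact-checking loop: one tool retrieves
# sources for each claim, a second assesses the claim against those sources,
# and anything below a confidence threshold is routed to a human consultant.
from dataclasses import dataclass

@dataclass
class CheckedClaim:
    claim: str
    sources: list[str]
    verdict: str        # e.g. "supported", "contradicted", "unclear", "needs human review"
    confidence: float   # 0.0-1.0, as reported by the assessing model

def fetch_cited_sources(claim: str) -> list[str]:
    """Hypothetical wrapper around a citation-backed search tool (e.g. Perplexity)."""
    raise NotImplementedError

def assess_claim(claim: str, sources: list[str]) -> tuple[str, float]:
    """Hypothetical wrapper around an analytical model (e.g. Claude) that weighs
    the claim against the retrieved sources."""
    raise NotImplementedError

def fact_check(draft_claims: list[str], threshold: float = 0.8) -> list[CheckedClaim]:
    results = []
    for claim in draft_claims:
        sources = fetch_cited_sources(claim)
        verdict, confidence = assess_claim(claim, sources)
        if confidence < threshold or verdict != "supported":
            verdict = "needs human review"   # route low-confidence claims to a consultant
        results.append(CheckedClaim(claim, sources, verdict, confidence))
    return results
```

The point of the sketch is the routing rather than any particular tool: a claim the models cannot confidently support goes back to a consultant, which is the AI-in-the-loop pattern described above.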

The professional services landscape particularly demonstrates why human judgment remains irreplaceable. AI can analyse patterns, generate reports, and flag potential issues instantly. But understanding whether a client concern requires immediate attention or strategic patience, whether to propose bold changes or incremental improvements; these decisions require wisdom that comes from experience, not algorithms.

That’s also leaving aside the constant habit of AI generalising, making things up and often blatantly lying.

For organisations beginning their AI journey, start with clear boundaries rather than broad adoption.

Investment in training will be crucial.

Research shows that 70 percent of AI implementation obstacles are people and process-related, not technical. Create internal champions who understand both the technology and your industry’s unique requirements.

Document what works and what doesn’t. Share learnings across teams. Address resistance directly by demonstrating how AI enhances rather than replaces human expertise.

The data supports this approach. Organisations with high AI-maturity report three times higher return on investment than those just beginning. But maturity doesn’t mean maximum automation. It means thoughtful integration that amplifies human capabilities.

Looking ahead, organisations that thrive will be those that view AI as an opportunity to elevate human creativity rather than replace it.

Alexander PR’s AI policy framework

Our approach to AI centres on human-led service delivery, as outlined in our core policy pillars:

  1. Oversight: Human-Led PR

We use AI selectively to improve efficiency, accuracy, and impact. Every output is reviewed, adjusted, and approved by experienced APR consultants – our approach to AI centres on AI-in-the-loop assurance and adherence to APR’s professional standards.

  2. Confidentiality

We treat client confidentiality and data security as paramount. No sensitive client information is ever entered into public or third-party AI platforms without explicit permission.

  3. Transparency

We are upfront with clients and stakeholders about when, how, and why we use AI to support our human-led services. Where appropriate, this includes clearly disclosing the role AI plays in research, content development, and our range of communications outputs.

  4. Objectivity

We regularly audit AI use to guard against bias and uphold fair, inclusive, and accurate communication. Outputs are verified against trusted sources to ensure factual integrity.

  5. Compliance

We adhere to all applicable privacy laws, industry ethical standards, and our own company values. Our approach to AI governance is continuously updated as technology and regulation evolve.

  6. Education

Our team stays up to date on emerging AI tools and risks. An internal working group regularly reviews best practices and ensures responsible and optimal use of evolving technologies.

This framework is a living document that adapts as technology and regulations evolve. The six pillars provide structure while allowing flexibility for innovation. We’ve learned transparency builds trust. Clients appreciate knowing when AI assists in their projects, understanding it means more human time for strategic thinking.

Most importantly, we’ve recognised our policy must balance innovation with responsibility. As new tools emerge and capabilities expand, we evaluate them against our core principle: does this enhance our ability to deliver exceptional service while maintaining the trust our clients place in us?

The answer guides every decision, ensuring our AI adoption serves our mission rather than defining it.

For more on our approach and regular updates on all things AI reputation, head to Alexander PR’s website or subscribe to the AI Rep Brief newsletter.


