Business
BenchSci cuts 23% of jobs, becoming latest company to replace humans with AI
BenchSci Analytics Inc. became one of Canada’s most promising and heavily funded startups by using artificial intelligence to help pharmaceutical giants cut time and costs from the drug discovery process.
Now the Toronto company is turning to AI to slash its own costs. Since May, BenchSci has cut 23 per cent of staff – about 83 jobs – as it goes all-in on adopting generative AI to do work formerly done by humans, the company said in an e-mail.
CEO Liran Belenzon signalled BenchSci’s commitment to generative AI in a July blog post. “As I often remind my team, those who fail to embrace AI risk being left behind – not by the technology itself but by peers who have mastered it,” he wrote. The company this year “shifted to become an AI-first company, which has become our guiding principle. Before adding new people or processes, we ask: ‘Could AI do this?’”
In the past two weeks alone, BenchSci cut its software engineering ranks to about 100 people, a 20-per-cent reduction, Mr. Belenzon said in an interview. It has rolled out company-wide tools including Gemini for Google Workspace and NotebookLM, and is using AI to automate repetitive hiring workflows and otherwise streamline operations and boost efficiency.
“Ultimately, the goal is pretty straightforward: become more efficient and give our team the tools they need to be successful,” the memo states.
BenchSci previously cut its work force by 17 per cent in early 2024 in response to economic conditions and concerns over how the availability of generative AI tools like ChatGPT would affect its business.
As The Globe and Mail reported recently, several companies are pushing faster to adopt generative AI tools internally to increase productivity and save costs.
Tech CEOs in particular are worried about the competitive threat posed by new startups that can grow faster with fewer employees and less funding than in the past, thanks to generative AI tools. These applications can write computer code, make software prototypes, draft documents and reports, and review legal contracts, among other chores.
Advancements in AI agents, which can complete a series of tasks in one shot, are also opening up new opportunities to automate workflows.
Canadian tech companies such as League Inc. and Geotab Inc. are now requiring that employees use AI tools and are incorporating their usage into employee performance reviews. Other companies are trying to avoid bringing in new employees by getting more work done with AI instead. Vancouver business intelligence software provider Klue Labs Inc. let 40 per cent of its employees go in June in order to stay competitive in the AI era.
Mr. Belenzon told The Globe and Mail his generative AI push was inspired by Shopify CEO Tobi Lütke, who told his employees in a memo in April that using AI effectively “is now a fundamental expectation of everyone at Shopify.” Mr. Lütke further instructed that teams should only ask for more staff or resources after demonstrating they couldn’t get what they wanted done by using AI.
Mr. Belenzon said the staff cut is not related to any business challenges, noting that BenchSci recently hired serial U.S. technology entrepreneur John Jackson as chief technology officer and Peter Grandsard, former Amgen associate vice president of research, as senior vice-president of strategy. The company also added Pfizer’s former chief scientific officer Mikael Dolsten to its board two weeks ago. BenchSci is “growing and the business is strong and doing well” and set to announce significant developments in the coming months, the CEO said.
BenchSci, founded in 2015 by Mr. Belenzon and three others who met through the Creative Destruction Lab at the University of Toronto, has raised more than $215-million to date, backed by American investors including former U.S. vice-president Al Gore’s Generation Investment Management, private and public markets investment giant TCV, Google-backed Gradient Ventures and F-Prime Capital Partners, which is affiliated with fund giant Fidelity’s founding Johnson family. Canadian investors include Radical Ventures, Inovia Capital, Golden Ventures and Real Ventures.
BenchSci acts as an AI co-pilot for medical researchers, using AI to rapidly scan millions of scientific publications and determine which antibodies and reagents would be best to use in early experiments. More than half of the world’s largest pharmaceutical companies are BenchSci clients.
Business
New AI Privacy Guidance Makes Compliance Easier for Businesses

If you’re a business using or considering AI, there’s an important legal update you’ll want to know about—and it’s designed to make privacy compliance clearer and easier.
The Office of the Australian Information Commissioner (OAIC) has just published two new guides that explain how Australia’s existing privacy laws apply to AI, and what businesses need to do to stay on the right side of them.
Let’s break it down in plain English so it’s easy to understand—even if you’re still getting your head around AI!
What Was the Old Rule?
Until now, businesses have faced uncertainty about how privacy laws apply to AI tools—especially commercially available generative AI products that use personal information to train their models.
This created confusion about what steps to take to comply and how to select AI products that respect privacy.
There wasn’t clear guidance from regulators on how to balance innovation with privacy risks. Many organisations were left guessing if their AI usage was lawful or exposing them to privacy breaches.
What’s Changed?
The OAIC has stepped in with two new guides:
- Guide for Businesses Using AI Products: This helps businesses understand their privacy obligations when using AI tools, and offers practical tips on choosing AI products that meet privacy standards.
- Guide for AI Developers: This focuses on developers using personal information to train generative AI models, clarifying how privacy laws apply in that context.
These guides clearly articulate the OAIC’s expectations and outline what good privacy governance looks like when it comes to AI.
What Does This Mean for Your Business?
The key takeaway is that AI products shouldn’t be used just because they’re available.
Businesses must:
- Take a cautious approach, carefully assessing privacy risks
- Ensure robust privacy safeguards are in place
- Be transparent with customers about how their personal information is used in AI
- Verify that any AI-generated outputs comply with privacy laws
If you’re planning to use AI or already do, these guides give you a clear path to follow—and the OAIC is serious about enforcing compliance.
What Should You Do Now?
Here’s a quick checklist to help you stay compliant:
- Review your current or planned use of AI tools. Are you aware of what personal information they collect or process?
- Read the OAIC’s new guides to understand your obligations and best practices.
- Work with your legal or privacy team to put privacy governance measures in place—like risk assessments and data minimisation (see the short sketch after this checklist).
- Train your staff on privacy risks related to AI and how to handle data responsibly.
- Stay informed about upcoming privacy reforms, including potential new obligations on fair and reasonable use of personal information.
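To make the data-minimisation point concrete, here is a minimal sketch of the kind of screening step a business might add before text is sent to an external AI tool. It is an illustration only, not something prescribed by the OAIC, and the pattern matching shown (emails and rough Australian phone formats) is far narrower than a real PII-detection process would need to be.

```python
import re

# Two common kinds of personal information; a real screening step would
# cover far more categories (names, addresses, health identifiers, etc.).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"(?:\+?61|0)\d(?:[\s-]?\d){7,10}")  # rough Australian formats

def screen_prompt(prompt: str) -> str:
    """Redact obvious personal information before a prompt is sent to an
    external AI service (a hypothetical data-minimisation step)."""
    redacted = EMAIL_RE.sub("[EMAIL REDACTED]", prompt)
    redacted = PHONE_RE.sub("[PHONE REDACTED]", redacted)
    return redacted

if __name__ == "__main__":
    draft = "Follow up with jane.doe@example.com on 0412 345 678 about her claim."
    print(screen_prompt(draft))
    # Follow up with [EMAIL REDACTED] on [PHONE REDACTED] about her claim.
```

A simple step like this doesn’t make an AI tool compliant on its own, but it shows the direction the guidance points in: decide what personal information really needs to leave your systems, and strip out the rest before it does.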
Key Takeaways
- Existing privacy laws apply fully to AI—there’s no special exemption just because it’s a new technology.
- The OAIC’s new guides clarify how those laws work with AI tools and development.
- Businesses must assess privacy risks and build safeguards before using AI.
- Transparency and accountability are essential to build trust and avoid penalties.
Frequently Asked Questions (FAQ)
1. Do privacy laws apply to all AI tools?
Yes. Australian privacy laws apply to any AI tool that collects, uses, or shares personal information. There are no special exceptions just because it’s AI.
2. What are the main privacy risks with AI?
Risks include accidental data leaks, using personal info without permission, AI generating incorrect or misleading results, and not being clear with customers about how their data is used.
3. How can my business comply with the new guidance?
Start by reading the OAIC’s guides, do a privacy risk check on your AI tools, protect personal data with strong security, train your staff on privacy best practices, and be transparent with your customers about AI use.
4. What happens if a business breaks the privacy rules?
The OAIC can investigate and take enforcement action, including fines. Breaking privacy rules can also harm your reputation and customer trust.
Business
The Caribbean island making millions from the AI boom

Jacob Evans, BBC World Service

Back in the 1980s, when the internet was still in its infancy, countries were handed their own unique web domain endings to navigate this nascent online world, such as .us for the US or .uk for the UK.
Eventually, almost every country and territory had a domain based on its name in either English or its own language. This included the small Caribbean island of Anguilla, which landed the address .ai.
Unbeknownst to Anguilla at the time, this would turn out to be a jackpot.
With the continuing boom in artificial intelligence (AI), more and more companies and individuals are paying Anguilla, a British Overseas Territory, to register new websites with the .ai tag.
One such buyer is US tech boss Dharmesh Shah, who earlier this year spent a reported $700,000 (£519,000) on the address you.ai.
Speaking to the BBC, Mr Shah says he purchased it because he had “an idea for an AI product that would allow people to create digital versions of themselves that could do specific tasks on their behalf”.
The number of .ai websites has increased more than 10-fold in the past five years, and has doubled in the past 12 months alone, according to a website that tracks domain name registrations.
The challenge for Anguilla, which has a population of just 16,000 people, is how to harness this lucrative bit of luck and turn it into a long-term and sustainable source of income.
Like those of other small Caribbean islands, Anguilla’s economy is built on a bedrock of tourism. Recently, the island has been attracting visitors in the luxury travel market, particularly from the US.
Anguilla’s statistics department says there was a record number of visitors to the island last year, with 111,639 people entering its shores.
Yet Anguilla’s tourism sector is vulnerable to damage from hurricanes every autumn. Situated in the northeast of the Caribbean island arc, Anguilla lies squarely within the North Atlantic hurricane belt.
So gaining an increasing income from selling website addresses is playing an important role in diversifying the island’s economy, and making it more resilient to the financial damage that storms may bring. This is something that the International Monetary Fund (IMF) noted in a recent report on Anguilla.

In its draft 2025 budget document, the Anguillian government says that in 2024 it earned 105.5m Eastern Caribbean dollars ($39m; £29m) from selling domain names. That was almost a quarter (23%) of its total revenues last year. Tourism accounts for some 37%, according to the IMF.
The Anguillian government expects its .ai revenues to increase further to 132m Eastern Caribbean dollars this year, and to 138m in 2026. It comes as more than 850,000 .ai domains are now in existence, up from fewer than 50,000 in 2020.
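For readers who want to sanity-check those figures, the arithmetic lines up, assuming the Eastern Caribbean dollar’s long-standing peg of EC$2.70 to US$1. The implied total at the end is a back-of-the-envelope estimate derived from the reported 23% share, not an official figure.

```python
# Back-of-the-envelope check of the budget figures quoted above,
# assuming the Eastern Caribbean dollar's fixed peg of EC$2.70 = US$1.
domain_revenue_xcd = 105.5e6      # 2024 domain-name revenue, EC dollars
peg_xcd_per_usd = 2.70

domain_revenue_usd = domain_revenue_xcd / peg_xcd_per_usd
print(f"Domain revenue: about US${domain_revenue_usd / 1e6:.0f}m")        # ~US$39m

share_of_total = 0.23             # "almost a quarter" of total revenue
implied_total_xcd = domain_revenue_xcd / share_of_total
print(f"Implied total revenue: about EC${implied_total_xcd / 1e6:.0f}m")  # ~EC$459m
```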
As a British Overseas Territory, Anguilla is under the sovereignty of the UK, but with a high level of internal self-governance.
The UK has significant influence on the island’s defence and security, and has provided financial assistance during times of crisis. After Hurricane Irma severely damaged the island in 2017, the UK gave £60m to Anguilla over five years to help meet the repair bill.
The UK’s Foreign, Commonwealth and Development Office tells the BBC it welcomes Anguilla’s efforts “to find innovative ways to deliver economic growth” as it helps “contribute to Anguilla’s financial self-sufficiency”.

To manage its burgeoning domain name income, in October 2024 Anguilla signed a five-year deal with a US tech firm called Identity Digital, which specialises in internet domain name registries.
At the start of this year, Identity Digital announced that it had moved where all the .ai domains are hosted, from servers in Anguilla, to its own global server network. This is to prevent any disruption from future hurricanes, or any other risks to the island’s infrastructure, such as power cuts.
The exact cost of .ai addresses isn’t publicly disclosed, but registration prices are said to start from roughly $150 to $200, with renewal fees of around the same amount every two years.
At the same time, more in-demand domain names are auctioned off, with some fetching hundreds of thousands of US dollars. The owners of these then have to pay the same small renewal fees as everyone else.
In all cases, the government of Anguilla gets the sales revenue, with Identity Digital taking a cut said to be around 10%. Both parties appear to be sensitive about the topic, however, as each declined to be interviewed for this article.
Currently the most expensive .ai domain name purchase is Mr Shah’s you.ai.
A self-confessed AI enthusiast and co-founder of US software company HubSpot, Mr Shah has several other .ai domain addresses to his name, but the flagship you.ai is not yet operational as he’s been busy with other projects.
Mr Shah says he buys domain names for himself, but will occasionally look to sell “if I don’t have immediate plans for it, and there’s another entrepreneur that wants to do something with the name”.
Mr Shah believes that another person or company will soon set a new record for the highest price of an .ai domain purchase, such is the continuing excitement around AI.
But he adds: “Having said that, I still think over the long-term, .com domains will maintain their value better and for longer.”
Recent .ai auctions have seen major six-figure sales: cloud.ai sold for a reported $600,000 in July, and law.ai went for $350,000 earlier this month.

However, Anguilla’s position is not without precedent. The similarly tiny Pacific island nation of Tuvalu signed an exclusive deal in 1998 to license its .tv domain name.
Reports say this granted exclusive rights to US domain name registry firm VeriSign in exchange for $2m a year, a fee which later rose to $5m.
A decade later, with the internet expanding exponentially, Tuvalu’s finance minister, Lotoala Metia, said VeriSign paid “peanuts” for the right to run the domain name. The country signed a new deal with a different domain provider, GoDaddy, in 2021.
Anguilla is operating in a different fashion, having handed over management of the domain name under a revenue-sharing model rather than for a fixed payment.
Cashing in on this new line of income sustainably has been a major goal for the island. It’s hoped the growing revenue will allow a new airport to be built to facilitate tourism growth, as well as fund improvements to public infrastructure and access to health care.
As the number of registered .ai domains hurtles toward the million mark, Anguillians will hope this money is managed safely and invested in their future.
Business
Why our business is going AI-in-the-loop instead of human-in-the-loop

True story: I had to threaten Replit AI’s brain that I would report its clever but dumb suggestions to the AI police for lying.
I also told ChatGPT’s image creation department how deeply disappointed I was that it could not, after 24 hours of iterations, render the same high-quality image twice without changing an item in the image or misspelling a word. All learnings and part of the journey.
We need to remain flexible and open to new tools and approaches, and simultaneously be laser-focused. It’s a contradiction, but once you start down this road, you will understand. Experimentation is a must. But it’s also important to ignore the noise and constant hype and CAPS.
How our business’ tech stack evolves
A few years ago, we started with ChatGPT and a few spreadsheets. Today, our technology arsenal spans fifteen AI platforms, from Claude and Perplexity to specialised tools like RollHQ for project management and Synthesia for AI video materials. Yet the most important lesson we’ve learned isn’t about the technology itself. It’s about the critical space between human judgment and machine capability.
The data tells a compelling story about where business stands today: McKinsey reports that 72 percent of organisations have adopted AI for at least one business function, yet only one percent believe they’ve reached maturity in their implementation. Meanwhile, 90 percent of professionals using AI report working faster, with 80 percent saying it improves their work quality.
This gap between widespread adoption and true excellence defines the challenge facing every service organisation today, including our own.
Our journey began like many others, experimenting with generative AI for document drafting and research. We quickly discovered that output quality was low and that simply adding tools wasn’t enough. What mattered was creating a framework that put human expertise at the centre while leveraging AI’s processing power. This led us to develop what we call our “human creating the loop” approach, an evolution beyond the traditional human-in-the-loop model. It has become more about AI-in-the-loop for us than the other way round.
The distinction matters.
Human-in-the-loop suggests people checking machine outputs. Human creating the loop means professionals actively designing how AI integrates into workflows, setting boundaries, and maintaining creative control. Every client deliverable, every strategic recommendation, every customer interaction flows through experienced consultants who understand context, nuance, and the subtleties that define quality service delivery.
Our evolving tech stack
Our technology portfolio has grown strategically, with each tool selected for specific capabilities.
Each undergoes regular evaluation against key metrics, with fact-checking accuracy being paramount. We’ve found that combining multiple tools for fact checking and verification, especially Perplexity’s cited sources with Claude’s analytical capabilities, dramatically improves reliability.
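As a rough illustration of what that combination can look like in practice, here is a minimal sketch of a two-tool fact-checking pass. The ask_perplexity and ask_claude functions are hypothetical placeholders, not the vendors’ actual APIs, and in our workflow a consultant still reviews the verdict before anything reaches a client.

```python
# Sketch of a two-tool fact-checking pass: a search-backed tool gathers
# sourced evidence, then a second model judges whether it supports the claim.
# ask_perplexity() and ask_claude() are hypothetical wrappers; replace them
# with real API clients before use.

def ask_perplexity(question: str) -> dict:
    """Hypothetical wrapper returning {'answer': str, 'sources': list[str]}."""
    raise NotImplementedError("wire up a search-backed client here")

def ask_claude(prompt: str) -> str:
    """Hypothetical wrapper returning the model's text response."""
    raise NotImplementedError("wire up an LLM client here")

def fact_check(claim: str) -> str:
    """Gather cited evidence for a claim, then ask a second model whether
    the evidence supports, contradicts, or fails to settle it."""
    evidence = ask_perplexity(f"Find sourced evidence for or against: {claim}")
    sources = "\n".join(evidence["sources"])
    return ask_claude(
        f"Claim:\n{claim}\n\n"
        f"Evidence:\n{evidence['answer']}\n\nSources:\n{sources}\n\n"
        "Does the evidence support, contradict, or fail to settle the claim? "
        "Answer briefly and name the sources you relied on."
    )
```

The value is less in the code than in the shape of the loop: one tool supplies citations, another interrogates them, and a person makes the final call.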
The professional services landscape particularly demonstrates why human judgment remains irreplaceable. AI can analyse patterns, generate reports, and flag potential issues instantly. But decisions such as whether a client concern requires immediate attention or strategic patience, or whether to propose bold changes or incremental improvements, require wisdom that comes from experience, not algorithms.
That’s also leaving aside the constant habit of AI generalising, making things up and often blatantly lying.
For organisations beginning their AI journey, start with clear boundaries rather than broad adoption.
Investment in training will be crucial.
Research shows that 70 percent of AI implementation obstacles are people and process-related, not technical. Create internal champions who understand both the technology and your industry’s unique requirements.
Document what works and what doesn’t. Share learnings across teams. Address resistance directly by demonstrating how AI enhances rather than replaces human expertise.
The data supports this approach. Organisations with high AI-maturity report three times higher return on investment than those just beginning. But maturity doesn’t mean maximum automation. It means thoughtful integration that amplifies human capabilities.
Looking ahead, organisations that thrive will be those that view AI as an opportunity to elevate human creativity rather than replace it.
Alexander PR’s AI policy framework
Our approach to AI centres on human-led service delivery, as outlined in our core policy pillars:
- Oversight: Human-Led PR
We use AI selectively to improve efficiency, accuracy, and impact. Every output is reviewed, adjusted, and approved by experienced APR consultants – our approach to AI centres on AI-in-the-loop assurance and adherence to APR’s professional standards.
- Confidentiality
We treat client confidentiality and data security as paramount. No sensitive client information is ever entered into public or third-party AI platforms without explicit permission.
- Transparency
We are upfront with clients and stakeholders about when, how, and why we use AI to support our human-led services. Where appropriate, this includes clearly disclosing the role AI plays in research, content development, and our range of communications outputs.
- Objectivity
We regularly audit AI use to guard against bias and uphold fair, inclusive, and accurate communication. Outputs are verified against trusted sources to ensure factual integrity.
- Compliance
We adhere to all applicable privacy laws, industry ethical standards, and our own company values. Our approach to AI governance is continuously updated as technology and regulation evolve.
- Education
Our team stays up to date on emerging AI tools and risks. An internal working group regularly reviews best practices and ensures responsible and optimal use of evolving technologies.
This framework is a living document that adapts as technology and regulations evolve. The six pillars provide structure while allowing flexibility for innovation. We’ve learned transparency builds trust. Clients appreciate knowing when AI assists in their projects, understanding it means more human time for strategic thinking.
Most importantly, we’ve recognised our policy must balance innovation with responsibility. As new tools emerge and capabilities expand, we evaluate them against our core principle: does this enhance our ability to deliver exceptional service while maintaining the trust our clients place in us?
The answer guides every decision, ensuring our AI adoption serves our mission rather than defining it.
For more on our approach and regular updates on all things AI reputation, head to Alexander PR’s website or subscribe to the AI Rep Brief newsletter.