The debate behind SB 53, the California bill trying to prevent AI from building nukes

When it comes to AI, as California goes, so goes the nation. The most populous state in the US is also the central hub of AI innovation for the entire globe, home to 32 of the world’s top 50 AI companies. That size and influence have given the Golden State the weight to become a regulatory trailblazer, setting the tone for the rest of the country on environmental, labor, and consumer protection regulations, and more recently on AI as well. Now, following the dramatic defeat in July of a proposed federal moratorium on state AI regulation, California policymakers see a limited window of opportunity to set the stage for the rest of the country’s AI laws.
This week, the California State Assembly is set to vote on SB 53, a bill that would require transparency reports from the developers of highly powerful, “frontier” AI models. The models targeted represent the cutting-edge of AI — extremely adept generative systems that require massive amounts of data and computing power, like OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok, and Anthropic’s Claude. The bill, which has already passed the state Senate, must pass the California State Assembly before it goes to the governor to either be vetoed or signed into law.
AI can offer tremendous benefits, but as the bill is meant to address, it’s not without risks. And while there is no shortage of existing risks from issues like job displacement and bias, SB 53 focuses on possible “catastrophic risks” from AI. Such risks include AI-enabled biological weapons attacks and rogue systems carrying out cyberattacks or other criminal activity that could conceivably bring down critical infrastructure. Such catastrophic risks represent widespread disasters that could plausibly threaten human civilization at local, national, and global levels. They represent risks of the kind of AI-driven disasters that have not yet occurred, rather than already-realized, more personal harms like AI deepfakes.
Exactly what constitutes a catastrophic risk is up for debate, but SB 53 defines it as a “foreseeable and material risk” of an event in which a frontier model plays a meaningful role and that causes more than 50 casualties or over $1 billion in damages. How fault would be determined in practice is left to the courts to interpret. It’s hard to define catastrophic risk in law when the definition is far from settled, but doing so can help us protect against both near- and long-term consequences.
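To make those statutory numbers concrete, here is a minimal Python sketch of the threshold test as the article describes it. The data structure and the meets_catastrophic_threshold helper are invented for illustration; they are not drawn from the bill’s actual text.

```python
from dataclasses import dataclass

# Illustrative only: field and function names are invented for this sketch,
# not taken from SB 53's statutory language.
@dataclass
class Incident:
    casualties: int                 # deaths or serious injuries attributed to the event
    damages_usd: float              # property damage in US dollars
    frontier_model_material: bool   # did a frontier model play a "meaningful role"?

def meets_catastrophic_threshold(incident: Incident) -> bool:
    """Paraphrase of the bill's definition as described above: more than 50
    casualties or more than $1 billion in damages, with material involvement
    by a frontier model."""
    large_scale = incident.casualties > 50 or incident.damages_usd > 1_000_000_000
    return large_scale and incident.frontier_model_material

# Hypothetical example: an AI-enabled cyberattack on critical infrastructure
print(meets_catastrophic_threshold(
    Incident(casualties=0, damages_usd=2.3e9, frontier_model_material=True)
))  # True
```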
By itself, a single state bill focused on increased transparency will probably not be enough to prevent devastating cyberattacks and AI-enabled chemical, biological, radiological, and nuclear weapons. But the bill represents an effort to regulate this fast-moving technology before it outpaces our efforts at oversight.
SB 53 is the third state-level bill to focus specifically on regulating AI’s catastrophic risks, after California’s SB 1047, which passed the legislature only to be vetoed by the governor, and New York’s Responsible AI Safety and Education (RAISE) Act, which recently passed the New York legislature and is now awaiting Gov. Kathy Hochul’s approval.
SB 53, which was introduced by state Sen. Scott Wiener in February, requires frontier AI companies to develop safety frameworks that specifically detail how they approach catastrophic risk reduction. Before deploying their models, companies would have to publish safety and security reports. The bill also gives them 15 days to report “critical safety incidents” to the California Office of Emergency Services, and establishes whistleblower protections for employees who come forward about unsafe model deployment that contributes to catastrophic risk. SB 53 aims to hold companies publicly accountable for their AI safety commitments, with financial penalties of up to $1 million per violation.
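As a rough illustration of how those obligations might translate into an internal compliance check, here is a short sketch built around the 15-day reporting window and $1 million penalty figure cited above; the function name and everything else in it are hypothetical, not anything specified by the bill.

```python
from datetime import date, timedelta

# Figures taken from the bill summary above; the rest of this sketch is hypothetical.
CRITICAL_INCIDENT_REPORTING_WINDOW = timedelta(days=15)
MAX_PENALTY_PER_VIOLATION_USD = 1_000_000

def report_due_by(incident_date: date) -> date:
    """Latest date to notify the California Office of Emergency Services
    of a critical safety incident, per the 15-day window described above."""
    return incident_date + CRITICAL_INCIDENT_REPORTING_WINDOW

print(report_due_by(date(2025, 10, 1)))  # 2025-10-16
```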
In many ways, SB 53 is the spiritual successor to SB 1047, also introduced by Wiener.
Both cover large models trained with at least 10^26 floating-point operations (FLOP), a measure of very substantial computing power that a variety of AI legislation uses as a risk threshold, and both bills strengthen whistleblower protections. Where SB 53 departs from SB 1047 is its focus on transparency and prevention.
While SB 1047 aimed to hold companies liable for catastrophic harms caused by their AI systems, SB 53 formalizes sharing safety frameworks, which many frontier AI companies, including Anthropic, already do voluntarily. It focuses squarely on the heavy-hitters, with its rules applying only to companies that generate $500 million or more in gross revenue.
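For a sense of who those two thresholds would actually capture, here is a rough, illustrative Python sketch. The 6 × parameters × training tokens rule of thumb for estimating training compute is a common community heuristic, not something specified in SB 53, and all the numbers in the example are hypothetical.

```python
# Thresholds described above; everything else is an assumption for illustration.
FLOP_THRESHOLD = 1e26                 # training compute that marks a "frontier" model
REVENUE_THRESHOLD_USD = 500_000_000   # gross revenue that marks a "large developer"

def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    # Common heuristic: roughly 6 floating-point operations per parameter per token.
    return 6 * n_parameters * n_training_tokens

def covered_by_sb53(training_flop: float, annual_gross_revenue_usd: float) -> bool:
    """Rough sketch: a large developer whose model crosses the compute threshold."""
    return (training_flop >= FLOP_THRESHOLD
            and annual_gross_revenue_usd >= REVENUE_THRESHOLD_USD)

# Hypothetical example: a 1-trillion-parameter model trained on 20 trillion tokens
flop = estimated_training_flop(1e12, 2e13)                  # about 1.2e26 FLOP
print(covered_by_sb53(flop, annual_gross_revenue_usd=2e9))  # True
```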
“The science of how to make AI safe is rapidly evolving, and it’s currently difficult for policymakers to write prescriptive technical rules for how companies should manage safety,” said Thomas Woodside, the co-founder of Secure AI Project, an advocacy group that aims to reduce extreme risks from AI and is a sponsor of the bill, over email. “This light touch policy prevents backsliding on commitments and encourages a race to the top rather than a race to the bottom.”
Part of the logic of SB 53 is the ability to adapt the framework as AI progresses. The bill authorizes the California Attorney General to change the definition of a large developer after January 1, 2027, in response to AI advances.
Proponents of the bill are optimistic about its chances of being signed by the governor should it pass the legislature, which it is expected to. On the same day that Gov. Gavin Newsom vetoed SB 1047, he commissioned a working group focusing solely on frontier models. The resulting report by the group provided the foundation for SB 53. “I would guess, with roughly 75 percent confidence, that SB 53 will be signed into law by the end of September,” said Dean Ball — former White House AI policy adviser, vocal SB 1047 critic, and SB 53 supporter — to Transformer.
But several industry organizations have rallied in opposition, arguing that additional compliance requirements would be expensive and unnecessary, since AI companies are already incentivized to avoid catastrophic harms. OpenAI has lobbied against the bill, and the technology trade group Chamber of Progress argues that it would force companies to file pointless paperwork and needlessly stifle innovation.
“Those compliance costs are merely the beginning,” Neil Chilson, head of AI policy at the Abundance Institute, told me over email. “The bill, if passed, would feed California regulators truckloads of company information that they will use to design a compliance industrial complex.”
By contrast, Anthropic enthusiastically endorsed the bill in its current state on Monday. “The question isn’t whether we need AI governance – it’s whether we develop it thoughtfully today or reactively tomorrow,” the company explained in a blog post. “SB 53 offers a solid path toward the former.” (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI, while Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic. Neither organization has editorial input into our content.)
The debate over SB 53 ties into broader disagreements about whether states or the federal government should drive AI safety regulation. But since the vast majority of these companies are based in California, and nearly all do business there, the state’s legislation matters for the entire country.
“A federally led transparency approach is far, far, far preferable to the multi-state alternative,” where a patchwork of state regulations can conflict with each other, said Cato Institute technology policy fellow Matthew Mittelsteadt in an email. But “I love that the bill has a provision that would allow companies to defer to a future alternative federal standard.”
“The natural question is whether a federal approach can even happen,” Mittelsteadt continued. “In my opinion, the jury is out on that, but the possibility is far more likely than some suggest. It’s been less than 3 years since ChatGPT was released. That is hardly a lifetime in public policy.”
But in a time of federal gridlock, frontier AI advancements won’t wait for Washington.
The catastrophic risk divide
The bill’s focus on, and framing of, catastrophic risks is not without controversy.
The idea of catastrophic risk comes from the fields of philosophy and quantitative risk assessment. Catastrophic risks sit a step below existential risks, which threaten humanity’s very survival or permanently reduce our potential as a species. The hope is that if these doomsday scenarios are identified and prepared for, they can be prevented or at least mitigated.
But if existential risks are clear — the end of the world, or at least as we know it — what falls under the catastrophic risk umbrella, and the best way to prioritize those risks, depends on who you ask. There are longtermists, people focused primarily on humanity’s far future, who place a premium on things like multiplanetary expansion for human survival. They’re often chiefly concerned by risks from rogue AI or extremely lethal pandemics. Neartermists are more preoccupied with existing risks, like climate change, mosquito vector-borne disease, or algorithmic bias. These camps can blend into one another — neartermists would also like to avoid getting hit by asteroids that could wipe out a city, and longtermists don’t dismiss risks like climate change — and the best way to think of them is like two ends of a spectrum rather than a strict binary.
You can think of the AI ethics and AI safety frameworks as the near- and longtermism of AI risk, respectively. AI ethics is about the moral implications of how the technology is deployed in the present, including things like algorithmic bias and human rights. AI safety focuses on catastrophic risks and potential existential threats. But, as Vox’s Julia Longoria reported in the Good Robot series for Unexplainable, interpersonal conflicts have led these two factions to work against each other, and much of the disagreement comes down to emphasis. (AI ethics people argue that catastrophic risk concerns over-hype AI’s capabilities and ignore its impact on vulnerable people right now, while AI safety people worry that if we focus too much on the present, we won’t have ways to mitigate larger-scale problems down the line.)
But behind the question of near versus long-term risks lies another one: what, exactly, constitutes a catastrophic risk?
SB 53 initially set the standard for catastrophic risk at 100 rather than 50 casualties — similar to New York’s RAISE Act — before halving the threshold in an amendment to the bill. While the average person might consider, say, many people driven to suicide after interacting with AI chatbots to be catastrophic, such a risk is outside of the bill’s scope. (The California State Assembly just passed a separate bill to regulate AI companion chatbots by preventing them from participating in discussions about suicidal ideation or sexually explicit material.)
SB 53 focuses squarely on harms from “expert-level” frontier AI model assistance in developing or deploying chemical, biological, radiological, and nuclear weapons; committing crimes like cyberattacks or fraud; and “loss of control” scenarios where AIs go rogue, behaving deceptively to avoid being shut down and replicating themselves without human oversight. For example, an AI model could be used to guide the creation of a new deadly virus that infects millions and kneecaps the global economy.
“The 50 to 100 deaths or a billion dollars in property damage is just a proxy to capture really widespread and substantial impact,” said Scott Singer, lead author of the California Report for Frontier AI Policy, which helped inform the basis of the bill. “We do look at like AI-enabled or AI potentially [caused] or correlated suicide. I think that’s like a very serious set of issues that demands policymaker attention, but I don’t think it’s the core of what this bill is trying to address.”
Transparency is helpful in preventing such catastrophes because it can help raise the alarm before things get out of hand, allowing AI developers to correct course. And in the event that such efforts fail to prevent a mass casualty incident, enhanced safety transparency can help law enforcement and the courts figure out what went wrong. The challenge there is that it can be difficult to determine how much a model is accountable for a specific outcome, Irene Solaiman, the chief policy officer at Hugging Face, a collaboration platform for AI developers, told me over email.
“These risks are coming and we should be ready for them and have transparency into what the companies are doing,” said Adam Billen, the vice president of public policy at Encode, an organization that advocates for responsible AI leadership and safety. (Encode is another sponsor of SB 53.) “But we don’t know exactly what we’re going to need to do once the risks themselves appear. But right now, when those things aren’t happening at a large scale, it makes sense to be sort of focused on transparency.”
However, a transparency-focused bill like SB 53 is insufficient for addressing already-existing harms. When we already know something is a problem, the focus should be on mitigating it.
“Maybe four years ago, if we had passed some sort of transparency legislation like SB 53 but focused on those harms, we might have had some warning signs and been able to intervene before the widespread harms to kids started happening,” Billen said. “We’re trying to kind of correct that mistake on these problems and get some sort of forward-facing information about what’s happening before things get crazy, basically.”
SB 53 risks being both overly narrow and unclearly scoped. We have not yet faced these catastrophic harms from frontier AI models, and the most devastating risks might take us entirely by surprise. We don’t know what we don’t know.
It’s also entirely possible that models trained below 10^26 FLOP, which aren’t covered by SB 53, could cause catastrophic harm under the bill’s definition. The EU AI Act sets its threshold for “systemic risk” at a lower 10^25 FLOP, and there’s disagreement about the utility of computing power as a regulatory standard at all, especially as models become more efficient.
As it stands right now, SB 53 occupies a different niche from bills focused on regulating AI use in mental healthcare or data privacy, reflecting its authors’ desire not to step on the toes of other legislation or bite off more than it can reasonably chew. But Chilson, the Abundance Institute’s head of AI policy, is part of a camp that sees SB 53’s focus on catastrophic harm as a “distraction” from the real near-term benefits and concerns, like AI’s potential to accelerate the pace of scientific research or create nonconsensual deepfake imagery, respectively.
That said, deepfakes could certainly cause catastrophic harm. For instance, imagine a hyper-realistic deepfake impersonating a bank employee to commit fraud at a multibillion-dollar scale, said Nathan Calvin, the vice president of state affairs and general counsel at Encode. “I do think some of the lines between these things in practice can be a bit blurry, and I think in some ways…that is not necessarily a bad thing,” he told me.
It could be that the ideological debate around what qualifies as catastrophic risks, and whether that’s worthy of our legislative attention, is just noise. The bill is intended to regulate AI before the proverbial horse is out of the barn. The average person isn’t going to worry about the likelihood of AI sparking nuclear warfare or biological weapons attacks, but they do think about how algorithmic bias might affect their lives in the present. But in trying to prevent the worst-case scenarios, perhaps we can also avoid the “smaller,” nearer harms. If they’re effective, forward-facing safety provisions designed to prevent mass casualty events will also make AI safer for individuals.
If SB 53 passes the legislature and gets signed by Gov. Newsom into law, it could inspire other state attempts at AI regulation through a similar framework, and eventually encourage federal AI safety legislation to move forward.
How we think about risk matters because it determines where we focus our efforts on prevention. I’m a firm believer in the value of defining your terms, in law and debate. If we’re not on the same page about what we mean when we talk about risk, we can’t have a real conversation.
The EU AI Act is Here (Is Your Data Ready to Lead?)

The accelerated adoption of AI and generative AI tools has reshaped the business landscape. With powerful capabilities now within reach, organizations are rapidly exploring how to apply AI across operations and strategy.
In fact, 93% of UK CEOs have adopted generative AI tools in the last year, and according to the latest State of AI report by McKinsey, 78% of businesses use AI in more than one business function.
With such an expansion, governing bodies are acting promptly to ensure AI is deployed responsibly, safely and ethically. For example, the EU AI Act restricts unethical practices, such as facial image scraping, and mandates AI literacy. This ensures organizations understand how their tools generate insights before acting on them. These policies aim to reduce the risk of AI misuse due to insufficient training or oversight.
In July, the EU released its final General-Purpose AI (GPAI) Code of Practice, outlining voluntary guidelines on transparency, safety and copyright for foundation models. While the code is voluntary, companies that opt out may face closer scrutiny or more stringent enforcement. Alongside this, new phases of the act continue to take effect, with the latest compliance deadline falling in August.
This raises two critical questions for organizations. How can they utilize AI’s transformative power while staying ahead of new regulations? And how will these regulations shape the path forward for enterprise AI?
How New Regulations Are Reshaping AI Adoption
The EU AI Act is driving organizations to address longstanding data management challenges to reduce AI bias and ensure compliance. AI systems in the “unacceptable risk” category — those that pose a clear threat to individual rights, safety or freedoms — are already prohibited under the act.
Meanwhile, broader compliance obligations for general-purpose AI systems are taking effect this year. Stricter obligations for systemic-risk models, including those developed by leading providers, follow in August 2026. With this rollout schedule, organizations must move quickly to build AI readiness, starting with AI-ready data. That means investing in trusted data foundations that ensure traceability, accuracy and compliance at scale.
In industries such as financial services, where AI is used in high-stakes decisions like fraud detection and credit scoring, this is especially urgent. Organizations must show that their models are trained on representative and high-quality data, and that the results are actively monitored to support fair and reliable decisions. The act is accelerating the move toward AI systems that are trustworthy and explainable.
Data Integrity as a Strategic Advantage
Meeting the requirements of the EU AI Act demands more than surface-level compliance. Organizations must break down data silos, especially where critical data is locked in legacy or mainframe systems. Integrating all relevant data across cloud, on-premises and hybrid environments, as well as across business functions, is essential to improving the reliability of AI outcomes and reducing bias.
Beyond integration, organizations must prioritize data quality, governance and observability to ensure that the data used in AI models is accurate, traceable and continuously monitored. Recent research shows that 62% of companies cite data governance as the biggest challenge to AI success, while 71% plan to increase investment in governance programmes.
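As a loose illustration of what such checks can look like in practice, here is a minimal Python sketch of basic data-quality and traceability tests; the column names, the source_system lineage field, and the basic_quality_report helper are all invented for this example rather than drawn from any particular governance framework.

```python
import pandas as pd

# Minimal sketch of the kinds of quality, governance, and observability checks
# described above. All names and thresholds here are illustrative assumptions.
def basic_quality_report(df: pd.DataFrame, required_columns: list[str]) -> dict:
    return {
        "missing_required_columns": [c for c in required_columns if c not in df.columns],
        "null_rate_per_column": df.isna().mean().round(2).to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "has_lineage_metadata": "source_system" in df.columns,  # crude traceability proxy
    }

# Hypothetical credit-scoring dataset
df = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "credit_score": [710, 640, 640, 580],
    "source_system": ["core_banking"] * 4,
})
print(basic_quality_report(df, required_columns=["customer_id", "credit_score", "consent_flag"]))
```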
The lack of interpretability and transparency in AI models remains a significant concern, raising questions around bias, ethics, accountability and equity. As organizations operationalise AI responsibly, robust data and AI governance will play a pivotal role in bridging the gap between regulatory requirements and responsible innovation.
Additionally, incorporating trustworthy third-party datasets, such as demographics, geospatial insights and environmental risk factors, can increase the accuracy of AI outcomes and strengthen fairness by adding context. This is increasingly important given the EU’s direction toward stronger copyright protection and mandatory watermarking for AI-generated content.
A More Deliberate Approach to AI
The early excitement around AI experimentation is now giving way to more thoughtful, enterprise-wide planning. Currently, only 12% of organizations report having AI-ready data. Without accurate, consistent and contextualised data in place, AI initiatives are unlikely to deliver measurable business outcomes. Poor data quality and governance limit performance and introduce risk, bias and opacity into business decisions that affect customers, operations, and reputation.
As AI systems grow more complex and agentic, capable of reasoning, taking action, and even adapting in real-time, the demand for trusted context and governance becomes even more critical. These systems cannot function responsibly without a strong data integrity foundation that supports transparency, traceability and trust.
Ultimately, the EU AI Act, alongside upcoming legislation in the UK and other regions, signals a shift from reactive compliance to proactive AI readiness. As AI adoption grows, powering AI initiatives with integrated, high-quality, and contextualised data will be key to long-term success with scalable and responsible AI innovation.
The Tech Elites Trying to “Build Canada” Can Only Muster AI-Written Prose

The technology executive suffers from a unique affliction. Call it disruptivitis: he (it’s almost always a he) will stumble upon a well-trod idea, give it a new name, and then claim credit for its discovery. Often, this idea will involve privatizing a previously public good, placing an app between a customer and an existing product or service, or solving an intractable social problem in such a way that happens to line said executive’s pockets.
Most importantly, this idea is always a priori innovative, by virtue of its origin in the mind of a self-declared innovator—think Athena springing fully formed from Zeus’s forehead. Fortunately for those afflicted, disruptivitis is also the world’s only malady that enriches its sufferers, and the boy-kings of Silicon Valley are its patient zeroes. Elon Musk was the first person to think of subways; the brain trust at Uber recently dreamed up the bus; meanwhile, Airbnb’s leaders decided to go ahead and start listing hotel rooms. Someday soon, a nineteen-year-old Stanford dropout will invent the wheel and become a billionaire.
This plague has now crossed the forty-ninth parallel via something called Build Canada. Its founders insist Build Canada isn’t a lobby group and doesn’t represent “special interest groups,” although it includes a former senior Liberal staffer as co-founder and CEO, several former or current executives and employees at Shopify (one of the country’s most valuable companies), and various other tech- and business-adjacent figures. (Apparently, corporate interests aren’t “special.”) They describe Build Canada as a project that will, it seems, close up shop whenever the government finally sees the light and implements their ideas, which are spelled out via a series of “memos.”
The project has attracted attention in political and tech circles; Liberal prime minister Mark Carney even established a Build Canada cabinet committee, despite the fact that, according to reporting by The Logic, a number of the project’s founders have turned hard right and backed the Conservatives in the last election.
But the memos have received less notice—and that’s a problem. They’re the core of the project, spelling out, in detail, the goals and world views of its backers; they’re also instructive as literary artifacts, with their own tics and tells. Perhaps it’s time we read these memos with the care upon which they so stridently insist.
As of this writing, there are thirty-six Build Canada memos. They’re policy proposals, basically, but they’re also intended to be works of political rhetoric, crafted (although, as we’ll see, “generated” might be the more apt verb) by people who believe that prose can move power. More than anything, though, the memos evoke the post-literate era’s most influential rhetorical form: the tech start-up pitch deck.
For one thing, the memos are utterly disinterested in language itself and seem to be pitched at someone with the attention span of a ketamine-addled venture capitalist. Many would require the translation services of a Y Combinator alumnus, with a lot of thoughts on “seconding employees” and “micromobility solutions,” as well as suggestions for “transition validated technologies” and a “follow-on non-dilutive capital program.” One representative passage: “Today in 2025, LCGE and CEI’s true combined cap is only $1.25M. And while QSBS shields 100% of gains up until the policy cap for individuals and corporations, Canada’s CEI would only shields [sic] 66.7% of gains for individuals.” Not exactly Two Treatises of Government or What Is to Be Done? A prior version of the Build Canada website said unnamed “experts” review each memo before publication, but expert editors don’t seem to be among them. Even government white papers have more flair.
This raises an important question, one crucial to any work of rhetoric: Who are these memos—with their gumbo of lofty self-regard, change-the-world ambition, and Instagram-reel reading level—actually for? If they’re intended for a general audience, aiming to inspire the Canadian public to rally around such stirring, big-tent goals as stablecoin adoption and capital gains reform, why do they dwell on “structured procurement pathway” and “major process driven services”? If, on the other hand, they’re intended as private lobbying tools, for a small audience of elected officials and aides, why make a whole-ass website?
The simplest explanation: the people behind Build Canada are too online. Its founders say they got together because “We got sick of sharing bold ideas on social media, in private chats and political events, and seeing nothing happen.” Now, most normal people, upon typing a sentence like that, would be self-aware enough to step away from the keyboard, take up an interesting hobby like cross stitching or Warhammer, and never speak of this brief lapse in judgment again. (Tellingly, that line has since been scrubbed from the Build Canada website.) But, remember, the technology executive is not like you or me. His ideas are always bold—which means their lack of implementation is not just a personal affront but open defiance of the natural order. It should be enough for him to tweet these ideas and leave the details to the peons.
Like so many terminally online posters before them, though, Build Canada’s founders have mistaken an audience of social media sycophants for a popular base of support. The great robber barons of old at least had the decency and good sense to stay behind the curtain. But, for today’s wealthy, influence isn’t enough. They want credit too. Musk posted a lot on Twitter; then he bought Twitter; then he bought a president. Build Canada founders appear to be on the same path—although, like proper Canadians, they’re still playing catch-up with the Americans.
If the memos are supposed to be works of persuasion, one has to ask: Why are they so poorly written? The obvious answer is that they’re produced with the help of generative artificial intelligence. Build Canada admits this. “It’s an experiment in how we could be doing things,” co-founder Daniel Debow has said, an excuse that red-handed undergraduates might want to keep on mental file. Indeed, the memos bear all of a chatbot’s hallmarks: bulleted lists, bolded headers, circular logic, business-school jargon, pleonasms, repetition. The generalizations are sweeping, the ideas visionary—albeit within a circumscribed vocabulary. Build Canada’s proposals are frequently “bold” (twenty-one uses, by my count). The country is in “crisis” (thirty-five), but it would be “world-class” (twenty) if not for all those “outdated” (eighteen) regulations and policies, although the most pressing issues at hand are “investment” (195), “innovation” (109), and “productivity” (forty-two), rather than, say, climate change (three) or poverty (three).
Build Canada’s reliance on AI isn’t surprising, since it seems to be the project’s glue, both the solution to government waste and a God-given right. (The irony of a large language model extolling its own virtues goes unremarked upon.) It’s also the future of art and entertainment, per one disquieting memo that advocates the redirection of cultural funding toward AI-related “content.” “Shift emphasis from rewarding sheer volume or traditional labour inputs towards incentivizing projects demonstrating innovative human-AI collaboration, development of Canadian AI creative tools, and global competitiveness,” the memo intones, in chillingly businesslike terms. “Redirect a portion of existing funds from less impactful programs towards these AI-readiness priorities.”
Build Canada’s founders point out, again and again, that they’re doing this on a volunteer basis, simply because they care about the country so much. If that’s true, why can’t they be bothered to write anything themselves, rather than turning to a chatbot? For all their complaints about “inertia” and “small thinking” holding the country back, it’s hard to imagine anything more inert or small minded than leaning on AI to churn out a couple of unremarkable paragraphs. Contempt for language is a form of contempt for the reader, and the overriding tone of the Build Canada memos is one of annoyance at having to spell out all these self-evident ideas for us little people.
If the style of the Build Canada memos leaves something to be desired, what about the substance—the policy ideas themselves? Some are good, or unobjectionable, or common sense. Canada should produce more food locally. Canadian telecoms have a monopolistic stranglehold on the market. Canadians should control their financial data. Canada needs high-speed rail and more housing. If you’re a normal person, you might believe that the reason these problems haven’t been fixed is that certain powerful players have certain economic incentives to oppose certain reforms—which results in those reforms being stymied. You might then draw the conclusion that the chief issue is greed and malice.
According to Build Canada, you’d be wrong. Who cares if, say, the housing crisis isn’t solely caused by a shortage of units but—to name a few other hypothetical culprits—the rise of corporate landlordism, a staggering drop in affordable and social housing stock, and an equally staggering decline in consumer purchasing power? Never mind. The only problem is all that pesky red tape. Might the Canadian consumer’s lack of financial data portability have something to do with the outsize political power of the country’s biggest banks? Let’s not get into that. In Build Canada’s world, there are almost no entrenched interests (except, that is, for public sector employees). The problem is always big government and low ambition.
If you lack the serene benevolence of the technology executive, some of Build Canada’s other proposals might give you pause. Again, though, that’s a you problem. Are you worried about the high rates of accidents from self-driving cars, or fires from e-bike battery meltdowns, or the accessibility hazards posed by electric scooters? You’re a NIMBY. Do you suspect that cryptocurrencies are really just unregulated financial securities? You’re living in the past. Are you weirded out by the idea of only funding artists who “celebrate Canadian achievement and ambition”? You’re short sighted. Are you troubled by the climate-change impact of fast-tracking every major fossil fuel project in the country? You’re unrealistic. Are you creeped out by a points-based rewards system for new immigrants? You’re soft. Do you have reservations about the wholesale embrace of generative artificial intelligence, given its long-term implications for employment, energy use, and the survival of the human spirit? You’re out of touch.
If, however, you have certain “outdated” ideas about any of the issues tackled by the Build Canada genius bar—if, for example, you believe that the clear-and-present climate catastrophe might require stopping new pipeline development rather than accelerating it, or that a technology like AI should be safely regulated rather than handed over for Pandora to crank open—you might be led to the conclusion that Build Canada has a very specific reason for blaming all the country’s ills on laziness and bureaucracy. In fact, you might begin to suspect that its founders are pointing the finger at everyone except themselves. You might notice that Build Canada has next to nothing to say about, for example, income inequality. You might wonder if—hypothetically—this has something to do with the class interests and net worth of its founders.
You might even allow your mind to wander down unexpected pathways—the sorts of meanderings and sense-memory flashbacks of which AI chatbots are, mercifully, not yet capable—until, for some reason, you realize that “Build Canada” has the same cadence as “Blame Canada,” the classic song from 1999’s South Park: Bigger, Longer & Uncut. And, in another surprising mental leap, you might then recall the song’s final line, which, for reasons you can’t quite put a finger on, sounds awfully apt right now: “We must blame them and cause a fuss / before somebody thinks of blaming us.”