AI Insights

YouTube backlash begins: “Why is AI combing through every single video I watch?”

Among the concerned users fighting to block AI age checks is the petition's starter, an anonymous YouTuber who runs "Gerfdas Gaming," a monetized account exploring video game lore (whom, for simplicity's sake, we'll refer to as Gerfdas).

Gerfdas told Ars that YouTube’s appeal process “raises major privacy concerns,” leaving YouTubers wondering, “where is this sensitive data stored, and how secure is it?”

“If YouTube suffers a breach, people’s names, IDs, and faces could end up in the wrong hands,” Gerfdas suggested.

Gerfdas also takes issue with the AI age verification system itself. Any monetized account already shares personal information with YouTube, Gerfdas noted, but it's disturbing to think that the AI is scanning every user's viewing habits in the background just to catch some kids improperly using the platform. Several commenters on the petition suggested that the AI age checks were created mainly to appease parents who struggle to police their own kids' viewing habits, repeatedly asking, "Isn't this why they made YouTube Kids?"

“Even without requesting ID, why is an AI combing through every single video I watch?” Gerfdas posited. “As an adult, I should be able to watch what I want within the law—and if the viewer is a child, that responsibility belongs to their parents, not a corporation.”

YouTube did not respond to multiple requests to comment and so far has not acknowledged Gerfdas’ petition. But Gerfdas is hoping that enough backlash may force YouTube to rethink its AI age checks, telling Ars that “even if they don’t respond right away, we’ll keep making noise until they do.”

Adult YouTubers defend childish viewing habits

As Ars monitored the petition, hundreds of self-described YouTubers were joining hourly. Gerfdas told Ars the petition's popularity suggested that "this isn't just a YouTube issue." As age checks become more commonplace across the Internet under global regulatory pressure, people motivated to defend digital freedom are balking and increasingly banding together, Gerfdas said.





AI’s Uncertain Cost Effects in Health Care | American Enterprise Institute

The health care industry has a long history of below-average productivity gains, but there is cautious optimism that artificial intelligence (AI) will break the pattern. As in the past, the industry’s misaligned incentives might stymie progress. 

A 2024 economic study found that existing AI platforms could deliver up to $360 billion in annual cost reductions without harming the quality of care delivered to patients. If realized, the financial relief for employers, consumers, and taxpayers would not be trivial. 

The potential uses of AI in health care are numerous. AI could streamline the reading of diagnostic images, speed up accurate identification of complex conditions (and thus reduce the need for additional testing), eliminate repetitive back-office tasks, prevent payments for unneeded services, target fraud, and identify drug compounds with potential therapeutic value at lower cost. The savings from these applications are not theoretical; market participants are already using existing AI tools to pursue each of these objectives.

But there are two sides to the health care negotiating table, and the other side—hospitals, physician practices, and publicly subsidized insurance plans looking to maximize their revenue—can leverage AI too. The net effect remains uncertain and will depend on which side proves more effective at harnessing the technology's power.

AI scribes are an example of a use that could go either way. The tool will save time for doctors and their support staff by quickly and easily translating audio notes from patient encounters into data entries for electronic health records. At the same time, a recent news story noted that AI scribes also facilitate “chart reviews” aimed at ensuring no services that can be billed to insurance plans are missed. In effect, the industry is discovering that AI scribes are more effective than humans at maximizing practice revenue. 

Medicare Advantage (MA) plans are sure to use AI in a similar way to boost the risk adjustment scores that affect their monthly capitated payments from the Medicare program.

While potentially powerful, AI does not solve the basic problem in health care, which is that there are weak incentives for cost control. 

In employer-sponsored insurance (ESI), higher costs are partially subsidized by a federal tax break that grows in value with the expense of the plan. In traditional Medicare, hospitals and doctors get paid more when they provide more services. If AI were used to eliminate unnecessary care, provider incomes would fall dramatically, which is why facilities and clinicians are more likely to use the technology to justify providing more care at higher prices than to become more efficient.

Insurers would seem to have a stronger incentive for cost control, but their main clients—employers and workers—are mostly interested in broad provider networks, not cost control. Insurers can earn profits just as easily when costs are high as when they are low. 

If AI is to lead to lower costs, the government and employers will need to deploy it aggressively to identify unnecessary spending, and then incentivize patients to migrate toward lower-cost insurance and care options. 

For instance, employers could use AI to pore through pricing data made available by transparency rules to identify potential cost-cutting opportunities for their workers. That, however, is only step one. Step two should be a change in plan design that rewards workers who use the information AI uncovers to choose hospitals and doctors that can deliver the best value at the lowest cost. The savings from lower-priced care should be shared with workers through lower cost-sharing and premiums.

The government should implement similar changes in Medicare, either through existing regulatory authority or through changes in law approved by Congress. 

With patients incentivized to seek out lower-cost care, hospitals and doctors would be more willing to use AI to identify cost-cutting strategies. For instance, AI could be used to design care plans for complex patients that minimize overall costs, or to offer more aggressive preventive care to patients with health risks identified by AI tools. 

Health care is awash with underused data. Patient records include potentially valuable information that could be harnessed to prevent emerging problems at far less cost than would be the case for treating the conditions after they have begun to inflict harm. In other words, AI might be used to vastly improve patient outcomes while also reducing costs. 

But this upending of the industry will not occur if all of the major players would rather stick with business as usual to protect their bottom lines. 

Congress should keep all of this in mind when considering how best to ensure AI delivers on its potential in health care. The key is to change incentives in the market so that those providers who use AI to cut their costs are rewarded with expanded market shares rather than lost revenue.




US Senator Ted Cruz Proposes SANDBOX Act to Waive Federal Regulations for AI Developers

US Senator Ted Cruz (R-TX), chairman of the Senate Commerce Committee, at a hearing titled “AI’ve Got a Plan: America’s AI Action Plan” on Wednesday, September 10, 2025.

On Wednesday, United States Senator Ted Cruz (R-TX) unveiled the “Strengthening Artificial intelligence Normalization and Diffusion By Oversight and eXperimentation Act,” or the SANDBOX Act. The 41-page bill would direct the director of the White House Office of Science and Technology Policy (OSTP) to establish a federal “regulatory sandbox” for AI developers to apply for waivers or modifications on compliance with federal regulations in order to test, experiment with, or temporarily offer AI products and services.

In a statement, Cruz said the legislation is consistent with the goals of the Trump administration’s AI Action Plan, which was released in July, and is the first step toward a “new AI framework” that can “turbocharge economic activity, cut through bureaucratic red tape, and empower American AI developers while protecting human flourishing.”

The bill would create a mechanism for companies to apply to the OSTP director for a waiver or modification to rules or regulations under any federal agency “that has jurisdiction over the enforcement or implementation of a covered provision for which an applicant is seeking a waiver or modification” under the sandbox program. Waivers or modifications would be granted for a two-year period, with four potential renewals totaling up to a decade.

Applicants under the program must demonstrate "how potential benefits of the product or service or development method outweigh the risks, taking into account any mitigation measures," including descriptions of "foreseeable risks" such as "health and safety," "economic damage," and "unfair or deceptive trade practices." Applicants that receive a waiver are not immune to civil or criminal liability that may result from the deployment of their AI product or service. The bill requires mandatory incident reporting under a public disclosure mechanism.

Federal agencies are given 90 days to review applications. If an agency does not submit a decision or seek an extension by the deadline, the OSTP director is permitted to presume that the agency does not object. If an application is denied, it can be appealed.

The bill also includes a provision for Congressional review of rules and regulations that “should be amended or repealed as a result of persons being able to operate safely without those covered provisions” under the sandbox program. The OSTP director is tapped to identify any such provisions in a “special message” to Congress submitted each year.

The bill also contemplates coordination with “State programs that are similar or comparable to the Program,” including to “accept joint applications for projects benefitting from both Federal and State regulatory relief” and to harmonize other aspects of the program.

The Senate Commerce Committee's announcement said the bill is backed by groups including the Abundance Institute, the US Chamber of Commerce, and the Information Technology Industry Council (ITI). Public Citizen, a watchdog group, said in a statement that the bill puts public safety on the "chopping block" in favor of "corporate immunity."

The announcement of the bill was timed alongside a Senate Commerce hearing titled “AI’ve Got a Plan: America’s AI Action Plan,” which featured testimony from OSTP director Michael Kratsios. During the hearing, Cruz laid out a legislative agenda on AI, including reducing the regulatory burden on AI developers. But, he said, AI developers should still face consequences if they create harm.

“A regulatory sandbox is not a free pass,” said Cruz. “People creating or using AI still have to follow the same laws as everyone else. Our laws are adapting to this new technology.”

In response to a question from Cruz, Kratsios said he would support the approach described by the SANDBOX Act.

The new legislation follows a failed effort by Cruz and other Republicans to impose a sweeping moratorium on the enforcement of state laws regulating artificial intelligence. Earlier this year, the House passed the moratorium as part of the so-called “One Big, Beautiful” bill, or HR 1. After efforts by Cruz to move the measure through the Senate by tying it to the allocation of $42 billion in funding for the Broadband Equity and Access Deployment (BEAD) program, the chamber voted 99-1 to strip it out of the budget bill prior to passage. Still, some experts remain concerned that the administration may try to use other federal levers to restrict state AI laws.




AI workers are boosting rents across the US

The newest wave of tech workers isn’t just filling office towers — it’s bidding up apartments in cities already notorious for high housing costs.

Across the US and Canada, the number of workers with artificial intelligence skills has surged by more than 50% in the past year, topping 517,000, according to CBRE. 

Much of that growth is clustered in the San Francisco Bay Area, New York City, Seattle, Toronto and the District of Columbia — areas where rents were straining households even before the AI boom.

The result: a fresh wave of demand that has helped push Manhattan rents up more than 14% between 2021 and 2024, Washington more than 12% in that same span, Seattle more than 7% and San Francisco nearly 6%.

These talent hubs are seeing a double squeeze: office demand has rebounded as AI companies insist on in-person work, while an influx of high-paid workers is bidding up apartment rents.

New York gained about 20,000 AI-skilled workers over the past year alone, while other hubs including Atlanta, Chicago, Dallas-Fort Worth, Toronto and Washington each logged increases of 75% or more. 

High salaries in AI allow workers to shoulder those rents — CBRE found Manhattan’s AI professionals spend about 29% of their income on housing, while in San Francisco and DC the share drops closer to 19%.

That affordability for one group is adding to the squeeze on everyone else.


Colin Yasukochi, executive director of CBRE’s Tech Insights Center, said San Francisco illustrates the trend. 

“With this AI revolution, it’s been a fundamental game changer for the city of San Francisco, because that’s really ground zero for the AI revolution and where most of these major high-profile firms like OpenAI are located,” he told CNBC.

Unlike other parts of the tech sector that turned to remote work, AI firms are filling office towers. In San Francisco, 1 out of every 4 square feet leased over the past two and a half years went to an AI tenant. 

For many renters, the surge is painful, but AI salaries cushion the blow for those in the industry.

“AI is predominantly in-office work, and they’re sort of back to the earlier days of tech innovation, where they’re in the office five, six days a week and for long hours,” Yasukochi said.


