AI Insights
ChatGPT is testing disruptive Study Together feature

OpenAI’s “Study together” mode has been spotted in the wild, and it could help students prepare for exams directly from ChatGPT.
We don’t have the details yet, but references to a ChatGPT study mode were first spotted in May, and testers began noticing it widely earlier today.
The Study together mode, which is not yet functional, might allow students either to invite friends to study with them on ChatGPT or to have the AI act as a study companion.
Until it goes live, we won’t know exactly how it works, but it could disrupt the education niche.
Study together isn’t the only new feature coming to ChatGPT.
ChatGPT is also testing new connectors for GPT Search and Deep Research.
One of the upcoming connectors is Slack, which will allow ChatGPT Deep Research to crawl your Slack messages and use them as context in the research.
OpenAI has been quiet lately about its plans for ChatGPT, but we know that GPT-5 is coming, and it could be a big moment for the AI startup as it wrestles with its Microsoft deal and talent poaching by Meta.
AI’s Uncertain Cost Effects in Health Care | American Enterprise Institute

The health care industry has a long history of below-average productivity gains, but there is cautious optimism that artificial intelligence (AI) will break the pattern. As in the past, the industry’s misaligned incentives might stymie progress.
A 2024 economic study found that existing AI platforms could deliver up to $360 billion in annual cost reductions without harming the quality of care delivered to patients. If realized, the financial relief for employers, consumers, and taxpayers would not be trivial.
The potential uses of AI in health care are numerous. AI could streamline the reading of diagnostic images, speed up accurate identification of complex conditions (and thus reduce the need for further testing), eliminate repetitive back-office tasks, prevent payments for unneeded services, target fraud, and more cheaply identify drug compounds with potential therapeutic value. The savings from these applications are not theoretical; market participants are already using existing AI tools to pursue each of these objectives.
But there are two sides to the health care negotiating table, and the other side—hospitals, physician practices, and publicly subsidized insurance plans looking to maximize their revenue—can leverage AI too. The net effect remains uncertain and will depend on which side of the table is most effective at harnessing the technology’s power.
AI scribes are an example of a use that could go either way. The tool will save time for doctors and their support staff by quickly and easily translating audio notes from patient encounters into data entries for electronic health records. At the same time, a recent news story noted that AI scribes also facilitate “chart reviews” aimed at ensuring no services that can be billed to insurance plans are missed. In effect, the industry is discovering that AI scribes are more effective than humans at maximizing practice revenue.
Medicare Advantage (MA) plans are sure to use AI in a similar way to boost the risk-adjustment scores that affect their monthly capitated payments from the Medicare program.
While potentially powerful, AI does not solve the basic problem in health care, which is that there are weak incentives for cost control.
In employer-sponsored insurance (ESI), higher costs are partially subsidized by a federal tax break that grows in value with the expense of the plan. In traditional Medicare, hospitals and doctors get paid more when they provide more services. If AI were used to eliminate unnecessary care, provider incomes would fall dramatically, which is why facilities and clinicians are more likely to use the technology to justify providing more care at higher prices than to become more efficient.
Insurers would seem to have a stronger incentive for cost control, but their main clients—employers and workers—are mostly interested in broad provider networks, not cost control. Insurers can earn profits just as easily when costs are high as when they are low.
If AI is to lead to lower costs, the government and employers will need to deploy it aggressively to identify unnecessary spending, and then incentivize patients to migrate toward lower-cost insurance and care options.
For instance, employers could use AI to pore through pricing data made available by transparency rules to identify potential cost-cutting opportunities for their workers. That, however, is only step one. Step two should be a change in plan design that rewards workers who use the information AI uncovers to choose hospitals and doctors that deliver the best value at the lowest cost. The savings from lower-priced care should be shared with workers through lower cost-sharing and premiums.
The government should implement similar changes in Medicare, either through existing regulatory authority or through changes in law approved by Congress.
With patients incentivized to seek out lower-cost care, hospitals and doctors would be more willing to use AI to identify cost-cutting strategies. For instance, AI could be used to design care plans for complex patients that minimize overall costs, or to offer more aggressive preventive care to patients with health risks identified by AI tools.
Health care is awash with underused data. Patient records include potentially valuable information that could be harnessed to prevent emerging problems at far less cost than would be the case for treating the conditions after they have begun to inflict harm. In other words, AI might be used to vastly improve patient outcomes while also reducing costs.
But this upending of the industry will not occur if all of the major players would rather stick with business as usual to protect their bottom lines.
Congress should keep all of this in mind when considering how best to ensure AI delivers on its potential in health care. The key is to change incentives in the market so that those providers who use AI to cut their costs are rewarded with expanded market shares rather than lost revenue.
US Senator Ted Cruz Proposes SANDBOX Act to Waive Federal Regulations for AI Developers

US Senator Ted Cruz (R-TX), chairman of the Senate Commerce Committee, at a hearing titled “AI’ve Got a Plan: America’s AI Action Plan” on Wednesday, September 10, 2025.
On Wednesday, United States Senator Ted Cruz (R-TX) unveiled the “Strengthening Artificial Intelligence Normalization and Diffusion By Oversight and eXperimentation Act,” or the SANDBOX Act. The 41-page bill would direct the director of the White House Office of Science and Technology Policy (OSTP) to establish a federal “regulatory sandbox” in which AI developers could apply for waivers or modifications of compliance with federal regulations in order to test, experiment with, or temporarily offer AI products and services.
In a statement, Cruz said the legislation is consistent with the goals of the Trump administration’s AI Action Plan, which was released in July, and is the first step toward a “new AI framework” that can “turbocharge economic activity, cut through bureaucratic red tape, and empower American AI developers while protecting human flourishing.”
The bill would create a mechanism for companies to apply to the OSTP director for a waiver or modification to rules or regulations under any federal agency “that has jurisdiction over the enforcement or implementation of a covered provision for which an applicant is seeking a waiver or modification” under the sandbox program. Waivers or modifications would be granted for a two-year period, with four potential renewals totaling up to a decade.
Applicants under the program must demonstrate “how potential benefits of the product or service or development method outweigh the risks, taking into account any mitigation measures,” including by describing “foreseeable risks” such as “health and safety,” “economic damage,” and “unfair or deceptive trade practices.” Applicants that receive a waiver are not immune from civil or criminal liability that may result from the deployment of their AI product or service. The bill also mandates incident reporting under a public disclosure mechanism.
Federal agencies are given 90 days to review applications. If an agency does not submit a decision or seek an extension by the deadline, the OSTP director is permitted to presume that the agency does not object. If an application is denied, it can be appealed.
The bill also includes a provision for Congressional review of rules and regulations that “should be amended or repealed as a result of persons being able to operate safely without those covered provisions” under the sandbox program. The OSTP director is tapped to identify any such provisions in a “special message” to Congress submitted each year.
The bill also contemplates coordination with “State programs that are similar or comparable to the Program,” including to “accept joint applications for projects benefitting from both Federal and State regulatory relief” and to harmonize other aspects of the program.
The Senate Commerce Committee’s announcement said the bill is backed by groups including the Abundance Institute, the US Chamber of Commerce, and the Information Technology Industry Council (ITI). Public Citizen, a watchdog group, said in a statement that the bill puts public safety on the “chopping block” in favor of “corporate immunity.”
The announcement of the bill was timed alongside a Senate Commerce hearing titled “AI’ve Got a Plan: America’s AI Action Plan,” which featured testimony from OSTP director Michael Kratsios. During the hearing, Cruz laid out a legislative agenda on AI, including reducing the regulatory burden on AI developers. But, he said, AI developers should still face consequences if they create harm.
“A regulatory sandbox is not a free pass,” said Cruz. “People creating or using AI still have to follow the same laws as everyone else. Our laws are adapting to this new technology.”
In response to a question from Cruz, Kratsios said he would support the approach described by the SANDBOX Act.
The new legislation follows a failed effort by Cruz and other Republicans to impose a sweeping moratorium on the enforcement of state laws regulating artificial intelligence. Earlier this year, the House passed the moratorium as part of the so-called “One Big, Beautiful” bill, or HR 1. After Cruz tried to move the measure through the Senate by tying it to the allocation of $42 billion in funding for the Broadband Equity, Access, and Deployment (BEAD) program, the chamber voted 99-1 to strip it out of the budget bill prior to passage. Still, some experts remain concerned that the administration may try to use other federal levers to restrict state AI laws.
AI workers are boosting rents across the US

The newest wave of tech workers isn’t just filling office towers — it’s bidding up apartments in cities already notorious for high housing costs.
Across the US and Canada, the number of workers with artificial intelligence skills has surged by more than 50% in the past year, topping 517,000, according to CBRE.
Much of that growth is clustered in the San Francisco Bay Area, New York City, Seattle, Toronto and the District of Columbia — areas where rents were straining households even before the AI boom.
The result: a fresh wave of demand that has helped push Manhattan rents up more than 14% between 2021 and 2024, Washington more than 12% in that same span, Seattle more than 7% and San Francisco nearly 6%.
New York gained about 20,000 AI-skilled workers over the past year alone, while other hubs including Atlanta, Chicago, Dallas-Fort Worth, Toronto and Washington each logged increases of 75% or more.
High AI salaries allow these workers to shoulder those rents — CBRE found that Manhattan’s AI professionals spend about 29% of their income on housing, while in San Francisco and DC the share drops closer to 19%.
That affordability for one group is adding to the squeeze on everyone else.
Colin Yasukochi, executive director of CBRE’s Tech Insights Center, said San Francisco illustrates the trend.
“With this AI revolution, it’s been a fundamental game changer for the city of San Francisco, because that’s really ground zero for the AI revolution and where most of these major high-profile firms like OpenAI are located,” he told CNBC.
Unlike other parts of the tech sector that turned to remote work, AI firms are filling office towers. In San Francisco, one out of every four square feet leased over the past two and a half years went to an AI tenant.
“AI is predominantly in-office work, and they’re sort of back to the earlier days of tech innovation, where they’re in the office five, six days a week and for long hours,” Yasukochi said.