

How Math Teachers Are Making Decisions About Using AI



Our Findings

Finding 1: Teachers valued many different criteria but placed highest importance on accuracy, inclusiveness, and utility. 

We analyzed 61 rubrics that teachers created to evaluate AI. Teachers generated a diverse set of criteria, which we grouped into ten categories: accuracy, adaptability, contextual awareness, engagingness, fidelity, inclusiveness, output variety, pedagogical soundness, user agency, and utility. We asked teachers to rank their criteria in order of importance and found a relatively flat distribution, with no single criterion that a majority of teachers ranked highest. Still, our results suggest that teachers placed the highest importance on accuracy, inclusiveness, and utility.

Thirteen percent of teachers listed accuracy (which we defined as mathematically accurate, grounded in facts, and trustworthy) as their top evaluation criterion. Several teachers cited “trustworthiness” and “mathematical correctness” as their most important evaluation criteria, and another teacher described accuracy as a “gateway” for continuing evaluation; in other words, if the tool was not accurate, it was not worth evaluating further.

Another 13% ranked inclusiveness (which we defined as accessible to the diverse cognitive and cultural needs of users) as their top evaluation criterion. Teachers required AI tools to be inclusive of both student and teacher users. With respect to student users, teachers suggested that AI tools must be “accessible,” free of “bias and stereotypes,” and “culturally relevant.” They also wanted AI tools to be adaptable for “all teachers.” One teacher wrote, “Different teachers/scenarios need different levels/styles of support. There is no ‘one size fits all’ when it comes to teacher support!”

Additionally, 11% of teachers reported utility as their top evaluation criterion (defined as the benefits of using the tool significantly outweighing the costs). Teachers who cited this criterion valued “efficiency” and “feasibility.” One added that AI needed to be “directly useful to me and my students.”

In addition to accuracy, inclusiveness, and utility, teachers also valued tools that were relevant to their grade level or other context (10%), pedagogically sound (10%), and engaging (7%). Additionally, 8% reported that AI tools should be faithful to their own methods and voice. Several teachers listed “authentic,” “realistic,” and “sounds like me” as top evaluation criteria. One remarked that they wanted ChatGPT to generate questions for coaching colleagues, “in my voice,” adding, “I would only use ChatGPT-generated coaching questions if they felt like they were something I would actually say to that adult.” 

| Code | Description | Examples |
| --- | --- | --- |
| Accuracy | Tool outputs are mathematically accurate, grounded in fact, and trustworthy. | Grounded in actual research and sources (not hallucinations); mathematical correctness |
| Adaptability | Tool learns from data and can improve over time or with iterative prompting. | Continue to prompt until it fits the needs of the given scenario; continue to tailor it! |
| Contextual Awareness | Tool is responsive and applicable to specific classroom contexts, including grade level, standards, or teacher-specified goals. | Ability to be specific to a context / grade level / community |
| Engagingness | Tool evokes users’ interest, curiosity, or excitement. | A math problem should be interesting or motivate students to engage with the math |
| Fidelity | Tool outputs are faithful to users’ intent or voice. | In my voice – I would only use ChatGPT-generated coaching questions if they felt like they were something I would actually say to that adult |
| Inclusiveness | Tool is accessible to diverse cognitive and cultural needs of users. | I have to be able to adapt with regard to differentiation and cultural relevance. |
| Output Variety | Tool can provide a variety of output options for users to evaluate or enhance divergent thinking. | Multiple solutions; not all feedback from chat is useful, so providing multiple options is beneficial |
| Pedagogically Sound | Tool adheres to established pedagogical best practices. | Knowledge about educational lingo and pedagogies |
| User Agency | Tool promotes users’ control over their own teaching and learning experience. | It is used as a tool that enables student curiosity and advocacy for learning rather than a source to find answers. |
| Utility | Benefits of using the tool significantly outweigh the costs (e.g., risks, resource and time investment). | Efficiency – will it actually help or is it something I already know |

Table 1. Codes for the top criteria, along with definitions and examples. 

Teachers expressed criteria in their own words, which we categorized and quantified via inductive coding.
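As a rough illustration of the tally behind the percentages reported above, the sketch below counts how often each category appears as a teacher’s top-ranked criterion. It is a minimal, hypothetical example: the rubric data and helper function are ours, not the study’s actual coding pipeline.

```python
from collections import Counter

# Hypothetical coded rubrics: each teacher's criteria, ordered from most to
# least important, after inductive coding into the categories in Table 1.
coded_rubrics = [
    ["Accuracy", "Utility", "Inclusiveness"],
    ["Inclusiveness", "Pedagogically Sound", "Engagingness"],
    ["Utility", "Accuracy", "Contextual Awareness"],
    # ... one list per rubric (61 in the study)
]

def top_criterion_shares(rubrics):
    """Share of rubrics that rank each category first."""
    top_counts = Counter(rubric[0] for rubric in rubrics if rubric)
    total = sum(top_counts.values())
    return {code: count / total for code, count in top_counts.items()}

for code, share in sorted(top_criterion_shares(coded_rubrics).items(),
                          key=lambda item: item[1], reverse=True):
    print(f"{code}: {share:.0%} of teachers ranked it highest")
```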


Finding 2: Teachers’ evaluation criteria revealed important tensions in AI edtech tool design.

In some cases, teachers listed two or more evaluation criteria that were in tension with one another. For example, many teachers emphasized the importance of AI tools that were relevant to their teaching context, grade level, and student population, while also being easy to learn and use. Yet, providing AI tools with adequate context would likely require teachers to invest significant time and effort, compromising efficiency and utility. Additionally, tools with high degrees of context awareness might also pose risks to student privacy, another evaluation criterion some teachers named as important. Teachers could input student demographics, Individualized Education Plans (IEPs), and health records into an AI tool to provide more personalized support for a student. However, the same data could be leaked or misused in a number of ways, including further training of AI models without consent. 

Another tension apparent in our data was between accuracy and creativity. As mentioned above, teachers placed the highest importance on mathematical correctness and trustworthiness, with one stating that they would not even consider other criteria if a tool was not reliably accurate or if it produced hallucinations. However, several teachers also listed creativity as a top criterion – a trait enabled by LLMs’ stochasticity, which in turn also contributes to hallucinations. The tension is that while accuracy is paramount for fact-based queries, teachers may want to use AI tools as creative thought partners for generating novel, outside-the-box tasks – potentially containing mathematical inaccuracies – that motivate student reasoning and discussion.

Finding 3: A collaborative approach helped teachers quickly arrive at nuanced criteria. 

One important finding is that, when given time and structure to explore, critique, and design with AI tools in community with peers, teachers developed nuanced ways of evaluating AI – even without having received formal training in AI. Grounding the summit in both teachers’ own values and concrete problems of practice helped teachers develop specific evaluation criteria tied to realistic classroom scenarios. We deliberately organized teachers into groups with peers whose experiences with and attitudes toward AI differed from their own, exposing them to perspectives they might not otherwise have considered. Juxtaposing different perspectives informed thoughtful, balanced evaluation criteria, such as, “Teaching students to use AI tools as a resource for curiosity and creativity, not for dependence.” One teacher reflected, “There is so much more to learn outside of where I’m from and it is encouraging to learn from other people from all over.”

Over the course of the summit, several of our facilitators observed that teachers – even those who arrived with strong positive or strong negative feelings about AI – adopted a stance toward AI that we characterized as “critical but curious.” They moved easily between optimism and pessimism about AI, often in the same sentence. One teacher wrote in her summit reflection, “I’m mostly skeptical about using AI as a teacher for lesson planning, but I’m really excited … it could be used to analyze classroom talk, give students feedback … and help teachers foster a greater sense of community.” Another summed it up well: “We need more people dreaming and creating positive tools to outweigh those that will create tools that will cause challenges to education and our society as a whole.”





This Artificial Intelligence (AI) ETF Has Outperformed the Market By 2.4X Since Inception and Only Holds Profitable Companies



For well under $100, you can buy one share of this under-the-radar AI exchange-traded fund (ETF) that looks poised to continue to outperform the market.

For this article, I asked myself: Where would I start investing if I had less than $100 to invest?


An AI ETF that’s concentrated and full of leading and profitable companies

The answer popped into my head: I’d want a concentrated exchange-traded fund (ETF) focused on leading, profitable companies heavily involved in artificial intelligence (AI), but different enough from one another.

Why an ETF? Because I wouldn’t want to put all my (investing) eggs in one basket.

Why AI? Because it’s poised to be the biggest secular trend in many decades or even generations.

Why concentrated? Because I believe that if investors are going to buy a very diversified ETF, they might as well buy the entire market, so to speak, and buy an S&P 500 index ETF. Indeed, buying an S&P 500 index fund is a good idea for many investors, and one recommended by investing legend Warren Buffett. That said, over the long run, I think an AI ETF holding only leading and profitable companies will beat the S&P 500 index.

Roundhill Magnificent Seven ETF (MAGS): Overview

And bingo! There is such an ETF — the Roundhill Magnificent Seven ETF (MAGS). It has seven holdings — the so-called “Magnificent Seven” stocks: Alphabet (GOOG) (GOOGL), Amazon (AMZN), Apple (AAPL), Meta Platforms (META), Microsoft (MSFT), Nvidia (NVDA), and Tesla (TSLA). This ETF closed at $62.93 per share on Friday, Sept. 12.

These megacap stocks (stocks with market caps over $200 billion) were given the Magnificent Seven name a couple of years ago by a Wall Street analyst due to their strong growth and large influence on the overall market. The name comes from the title of a 1960 Western film.

Two other main traits I like about this ETF:

  • Its expense ratio is reasonable at 0.29%.
  • It provides equal-weight exposure to the seven stocks. At each quarterly rebalancing, the stocks will be reset to an equal weighting of about 14.28% (100% divided by 7).

Since its inception in April 2023 (almost 2.5 years), the Roundhill Magnificent Seven ETF has returned 160% — 2.4 times the S&P 500’s 65.9% return.
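To make the equal-weight mechanics and the performance comparison concrete, here is a minimal sketch. It is not Roundhill’s actual methodology: the holding values are hypothetical, while the 160% and 65.9% returns and the roughly 2.4-year holding period are taken from the figures above.

```python
# Sketch: equal-weight rebalancing and return arithmetic for a 7-stock ETF.
# Holding values are hypothetical; return figures come from the article.

def rebalance_to_equal_weight(holdings: dict[str, float]) -> dict[str, float]:
    """Dollar amount to buy (+) or sell (-) per holding to reach equal weights."""
    total = sum(holdings.values())
    target = total / len(holdings)          # ~14.3% of the portfolio each
    return {ticker: target - value for ticker, value in holdings.items()}

holdings = {  # hypothetical market values after a quarter of drift
    "GOOGL": 17.72, "NVDA": 15.00, "AAPL": 14.13, "TSLA": 13.81,
    "AMZN": 13.30, "META": 13.16, "MSFT": 12.76,
}
print(rebalance_to_equal_weight(holdings))

# Performance figures quoted in the article (April 2023 to Sept. 2025)
mags_return, sp500_return, years = 1.60, 0.659, 2.4
print(f"Outperformance: {mags_return / sp500_return:.1f}x")               # ~2.4x
print(f"Approx. annualized return: {(1 + mags_return) ** (1 / years) - 1:.1%}")  # ~49%
```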

Roundhill Magnificent Seven ETF (MAGS): All stock holdings

Stocks are listed in order of current weight in portfolio. Keep in mind the ETF is rebalanced quarterly to make stocks equally weighted.

| Holding No. | Company | Market Cap | Wall Street’s Projected Annualized EPS Growth Over Next 5 Years | Weight (% of Portfolio) | 1-Year / 10-Year Returns |
| --- | --- | --- | --- | --- | --- |
| 1 | Alphabet | $2.9 trillion | 14.7% | 17.72% | 55.9% / 677% |
| 2 | Nvidia | $4.3 trillion | 34.9% | 15.00% | 49.3% / 32,210% |
| 3 | Apple | $3.5 trillion | 8.8% | 14.13% | 5.6% / 812% |
| 4 | Tesla | $1.3 trillion | 13.4% | 13.81% | 72.3% / 2,270% |
| 5 | Amazon | $2.4 trillion | 18.6% | 13.30% | 22% / 762% |
| 6 | Meta Platforms | $1.9 trillion | 12.9% | 13.16% | 44.3% / 725% |
| 7 | Microsoft | $3.8 trillion | 16.6% | 12.76% | 20.3% / 1,250% |
| N/A | Overall ETF | Total net assets of $2.86 billion | N/A | 100% | 40.5% / N/A |
| N/A | S&P 500 | N/A | N/A | N/A | 19.2% / 300% |

Data sources: Roundhill Magnificent Seven ETF, finviz.com, and YCharts. EPS = earnings per share. Data as of Sept. 12, 2025.

All these companies are profitable leaders in their core markets, and heavily involved in AI. Nvidia produces AI tech that enables others to use AI, while the other companies mainly use AI to improve their existing products and develop new ones.

Alphabet’s Google is the world leader in internet search. Its cloud computing business is No. 3 in the world, behind Amazon Web Services (AWS) and Microsoft Azure. The company also has other businesses, notably its driverless vehicle subsidiary, Waymo. (In a separate article, I explain why I believe Nvidia is the best driverless vehicle stock.)

Nvidia is often described as the world’s leading maker of AI chips — and that it is. But it’s much more. It’s the world leader in supplying technology infrastructure for enabling AI. It’s also the global leader in graphics processing units (GPUs) for computer gaming.

Apple’s iPhone holds the No. 2 spot in the global smartphone market, behind Samsung. However, it dominates the U.S. market. The company’s services business is attractive, as it consists of recurring revenue and has been steadily growing.

Amazon operates the world’s No. 1 e-commerce business and the world’s No. 1 cloud computing business. It also has many other businesses, notably its Fresh and Amazon Prime Now (Whole Foods) grocery delivery operations.

Meta Platforms operates the world’s leading social media site, Facebook, as well as Instagram, Threads, and messaging app WhatsApp.

Microsoft’s Word has long been the world’s leading word processing software. Word is part of Microsoft Office, a suite of popular software for personal computers (PCs). Its Azure is the world’s second-largest cloud computing business.

Tesla remains the No. 1 electric vehicle (EV) maker, by far, in the U.S. despite struggling recently. In the first half of 2025, China’s BYD surpassed Tesla as the world’s leader in all-electric vehicles by number of units sold. CEO Elon Musk touts that the company’s robotaxi and Optimus humanoid robot businesses will eventually be larger than its EV sales business.

In short, the Roundhill Magnificent Seven ETF is poised to continue to benefit from the growth of artificial intelligence. Granted, it doesn’t have a long track record. But had it existed many years ago, its long-term performance would likely have been very strong, because the long-term performances of all of its holdings have ranged from great to spectacular.

Beth McKenna has positions in Nvidia. The Motley Fool has positions in and recommends Alphabet, Amazon, Apple, Meta Platforms, Microsoft, Nvidia, and Tesla. The Motley Fool recommends BYD Company and recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.





OpenAI’s new GPT-5 Codex model takes on Claude Code



OpenAI is rolling out the GPT-5-Codex model to all Codex instances, including the terminal CLI, the IDE extension, and Codex Web (chatgpt.com/codex).

Codex is an AI agent that allows you to automate coding-related tasks. You can delegate your complex tasks to Codex and watch it execute code for you.

Codex (Source: BleepingComputer.com)

Even if you don’t know programming languages, you can use Codex to “vibe code” your apps and web apps.

But so far, it has fallen a bit short of Claude Code, which is the market leader in the AI coding space.

Today, OpenAI confirmed it’s rolling out the Codex-specific GPT-5 model.

In a blog post, OpenAI stated that GPT-5-Codex excels at real-world coding tasks, achieving a 74.5% success rate on the SWE-bench Verified benchmark.


In code-refactoring evaluations, performance improved from 33.9% with GPT-5 to 51.3% with GPT-5-Codex.

GPT-5-Codex is still rolling out. I don’t see it in my terminal yet, even though I pay for ChatGPT Plus ($20 per month).

OpenAI says it will be fully rolled out to everyone in the coming days.






Tech industry successfully blocks ambitious California AI bill | MLex



By Amy Miller (September 15, 2025, 23:52 GMT | Insight) — The deep-pocketed tech industry has proven once again that it can block efforts to regulate artificial intelligence, even in California. Even though California legislators approved more than a dozen bills aimed at regulating AI, from chatbot safety to transparency to data centers, several proposals attempting to put guardrails around AI died after facing concerted opposition, including the closely watched Automated Decisions Safety Act, which would have set new rules for AI systems that make consequential decisions about individuals.



