
Nazareth is instructing teachers how to take on artificial intelligence

Colleges are preparing the next generation of teachers to manage artificial intelligence in the classroom, and at Nazareth University, that includes ways to use AI for teaching English to language learners.

It’s a shift in thinking for Rui Cheng, program director for the graduate Teaching English to Speakers of Other Languages (TESOL) program.

“I think the biggest controversy for us now is whether using AI will be helpful or detrimental to students’ learning,” Cheng said. “I don’t think there is like the one fixed answer. So, this is why we are still in the exploration stage. … We cannot just close our eyes and just pretend it’s not there.”

Risks like plagiarism and cheating exist regardless of whether teachers engage with the technology in class themselves, she said, adding that relying heavily on AI can have detrimental effects on long-term outcomes like literacy and communication skills.

“We know it’s inevitable,” she said. “AI is in everybody’s life now, but we want to try to help students to get into the mindset of collaborating with AI, not using AI to do the work for them.”

Now, Cheng and others are looking at how AI can complement students’ learning experience, like roleplaying for building conversation skills or assisting teachers with routine tasks.

“There’s a bit of a conflict,” Nazareth graduate student Alec Calabrese said. “I feel that many teachers are still pretty against AI, but I know a lot of academics are starting to come up with models of using AI as a tool in the classroom.”

Calabrese, who previously worked at a rural school in Connecticut and began teaching English language learners at Rochester Early College International School this school year, said it’s almost an arms race of sorts.

“I know a lot of teachers are interested in making a shift toward actually assigning assignments where students would have to use AI to complete them,” he said. “But also, a lot of districts still completely block all AI tools to prevent plagiarism.”

Nazareth University is looking to add a certification program for applied educational leadership for technology and AI integration, Cheng said. The School of Education’s dean said it is in the early stages of development.






2 Artificial Intelligence (AI) Leaders

Key Points

  • Companies can’t get enough AI chips, and that spells more growth for Taiwan Semiconductor Manufacturing.

  • Apple has competitive advantages that could make it a sleeper AI stock to buy right now.


The artificial intelligence (AI) market is expected to add trillions to the global economy, and investors looking for rewarding buy-and-hold investments in the field don’t need to take high risks. Investing in companies that are supplying the computing hardware to power AI technology, as well as those that could benefit from growing adoption of AI-powered consumer products, could earn satisfactory returns. Here are two stocks to consider buying for the long term.


1. Taiwan Semiconductor Manufacturing

AI doesn’t work without the right chips to train computers to think for themselves. While Nvidia and Broadcom report strong growth, Taiwan Semiconductor Manufacturing (NYSE: TSM) is the one making the chips for these semiconductor companies. TSMC controls over 65% of the chip foundry market, according to Counterpoint, making it the default chip factory for smartphones, computers, and AI.

TSMC manufactures chips that are used in several other markets, including automotive and smart devices. This means that when one market is weak, such as automotive, strength from another (high-performance computing and AI, for example) can pick up the slack.

TSMC’s manufacturing capacity is immense. It can make 17 million 12-inch equivalent silicon wafers every year.

Its massive scale and expertise at making the most advanced chips in the world put it in a lucrative position. Over the last year, it earned $45 billion in net income on $106 billion of revenue. It has delivered double-digit annualized revenue growth over the last few decades, and management expects this growth to continue.

In the second quarter, revenue grew 44% year over year. This growth has pushed the stock up 51% over the past year. Management expects AI chip revenue to grow at an annualized rate in the mid-40s range over the next five years, which is a catalyst for long-term investors.

With Wall Street analysts expecting the company’s earnings per share to grow at an annualized rate of 21% in the coming years, the stock should continue to hit new highs, as it still trades at a reasonable forward price-to-earnings ratio (P/E) of 24.
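For readers who want to see the arithmetic behind that expectation, here is a back-of-the-envelope sketch in Python. It simply plugs in the forward P/E of 24 and the roughly 21% annual EPS growth cited above and assumes the multiple stays constant; the numbers are illustrative only, not the analysts’ model.

```python
# Illustrative arithmetic only, not a forecast: assumes the forward P/E of 24
# and ~21% annual EPS growth cited above, with the multiple held constant.
forward_pe = 24
eps_growth = 0.21
years = 3

# At a constant P/E, the share price tracks earnings per share.
implied_price_gain = (1 + eps_growth) ** years - 1
print(f"Implied price gain over {years} years: {implied_price_gain:.0%}")  # ~77%

# PEG ratio (forward P/E divided by expected growth in percent): a quick
# sanity check on whether the multiple looks stretched relative to growth.
peg = forward_pe / (eps_growth * 100)
print(f"PEG ratio: {peg:.2f}")  # ~1.14
```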

2. Apple

Apple (NASDAQ: AAPL) hasn’t made a huge splash in AI yet. Apple Intelligence brought some useful features to its devices, such as AI summaries and image creation, but it’s not as robust as customers were expecting. However, investors shouldn’t count the most valuable consumer brand out just yet. Apple has a large installed base of active devices, and millions of customers trust Apple with their personal data, which could put it in a strong position to benefit from AI over the long term.

Apple previously partnered with OpenAI for ChatGPT integration across its products, but with OpenAI now positioning itself as a competitor after bringing in Apple’s former design chief Jony Ive, Apple is rumored to be exploring a partnership with Alphabet’s Google to power its Siri voice assistant with Gemini.

Apple appears to be a sleeping giant in AI. Millions of people are walking around with a device that Apple can turn into a super-intelligent assistant with a single software update. Its large installed base of over 2.35 billion active devices is a major advantage that shouldn’t be underestimated.

But Apple has another important advantage that other tech companies can’t match: consumer trust. Apple has built its brand around protecting user privacy, whereas Alphabet’s Google and Meta Platforms have profited off their users’ data to grow their advertising revenue. A partnership with Google for AI would not compromise Apple’s position on user privacy, since Google would need to provide a custom model that runs on Apple’s private cloud.

For these reasons, Apple is well-positioned to be a leader in AI, making its stock a solid buy-and-hold investment. It says a lot about its growth potential that analysts still expect earnings to grow 10% per year despite the fact that the company is lagging behind in AI. The stock’s forward P/E of 32 is on the high side, but that also reflects investor optimism about its long-term prospects.


John Ballard has positions in Nvidia. The Motley Fool has positions in and recommends Alphabet, Apple, Meta Platforms, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends Broadcom. The Motley Fool has a disclosure policy.

Disclaimer: For information purposes only. Past performance is not indicative of future results.




Optibus announces expansion of generative AI capabilities for transit scheduling and operations

Optibus has announced new capabilities for Optibus AI, which the company claims is the “first Generative Artificial Intelligence (GenAI) suite purpose-built for public and private operators and agencies”.

Optibus AI features a growing number of industry-specific AI agents fully integrated across the platform. “Tools are embedded where and when transportation professionals need them most. With intelligence layers for every stage of work, agencies and operators can redefine planning, scheduling, and beyond with intuitive, platform-native capabilities”, the Israeli tech company states.

Optibus expands Artificial Intelligence skills

The Optibus AI suite includes a Generative AI agent that uses natural language to create complex scheduling rules: “Schedulers type preferences in plain language, such as “No more than ten duties over nine hours,” and Optibus instantly generates accurate, ready-to-use logic. No more specialized coding, configuration, or steep learning curves. Just fast, intuitive scheduling powered by AI”, reads the Optibus announcement.

Preference Designer is said to reduce errors and rule configuration time by up to 70%.
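To make the idea concrete, here is a minimal, purely hypothetical sketch of the kind of structured constraint a plain-language preference such as “No more than ten duties over nine hours” might compile to. The class name, fields and check below are invented for illustration and are not Optibus’s actual data model or API.

```python
from dataclasses import dataclass

@dataclass
class DutyLengthCap:
    """Hypothetical rule object: at most `max_count` duties may exceed `threshold_hours`."""
    max_count: int
    threshold_hours: float

    def satisfied_by(self, duty_hours: list[float]) -> bool:
        # Count duties longer than the threshold and compare against the cap.
        long_duties = sum(1 for h in duty_hours if h > self.threshold_hours)
        return long_duties <= self.max_count

# "No more than ten duties over nine hours" -> DutyLengthCap(10, 9.0)
rule = DutyLengthCap(max_count=10, threshold_hours=9.0)
print(rule.satisfied_by([8.5, 9.5, 10.0, 7.0]))  # True: only two duties exceed nine hours
```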

Additional AI agents are on the way, including Schedule Analysis, which will let users compare optimized scenarios, chart the best path forward, and persuade stakeholders with presentation-ready analysis and actionable insights. Optibus is also set to introduce Multi-Step Automation.

According to Optibus’ industry survey, 95% of public transportation enterprises have explored artificial intelligence, but only 8% noted measurable impact. Across industries, insufficient integration into existing workflows is a key cause of unsuccessful AI trials. 

Amos Haggiag, CEO and co-founder of Optibus, said: “Generative AI is kicking-off a new chapter in how public transportation is planned and operated. Optibus AI turns decades of complex processes into simple, intuitive tools that empower teams at every level.”




The European Union is still caught in an AI copyright bind

As part of the implementation of the European Union’s Artificial Intelligence Act (Regulation (EU) 2024/1689), the European Commission has published several implementation guidelines, including the final Code of Practice (CoP) for the Implementation of the AI Act, issued in July 2025. It has also published detailed guidelines for reporting on data used for AI model training and obligations for providers of general-purpose AI models.

The CoP covers three issues: transparency about the construction of AI models, how providers should deal with model safety and security, and how to comply with copyright in AI training data.

Compliance with the first two is easy. Frontier AI models can by now fill in their own transparency forms and run regular safety and security checks. This will not increase regulatory compliance costs beyond what AI modelers are already doing. 

Copyright is a thornier issue, however. For frontier AI models, more training data improves performance (Kaplan et al, 2020). Copyright obligations reduce the quantity of available data and, through licensing requirements, increase the price of model training data. Data access conditions also affect the EU’s global competitiveness in AI.
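The data-scaling relationship behind that claim can be stated compactly. Kaplan et al (2020) fit test loss as a power law in dataset size, roughly of the form

$$ L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad \alpha_D \approx 0.1, $$

where D is the number of training tokens and D_c a fitted constant (the exponent value is approximate, per the paper’s fits). Because the exponent is small, each doubling of the training corpus buys only a modest improvement, which is why restrictions on lawfully usable data translate directly into a performance and competitiveness cost.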

While some CoP copyright provisions make sense – a prohibition on reproducing copyright-protected content in model outputs; training on lawfully accessible data only – others are problematic. This includes being transparent about the origins of datasets used for model training and managing the rapidly rising number of copyright holders who have withdrawn (‘opted-out’) their data from model training (Longpre et al, 2024) in accordance with the EU Copyright Directive (Directive (EU) 2019/790). 

EU regulators are caught between EU copyright law and global competition between national AI regulations. They cannot modify the law in the short run to accommodate the data needs of AI models. But full application of the law would endanger EU access to the best AI models and services and erode competitiveness. The CoP is the latest attempt to square that circle.

The illusion of full transparency

Together with the AI Act, the CoP is intended to help improve transparency about the use of model training data. It assumes that this will facilitate licensing of copyright-protected data, thereby giving copyright holders a fair share of AI revenues. That wishful thinking ignores the problem of high transaction costs and the licensing fees that negotiating deals with millions of online rightsholders would generate for AI developers. For the multitude of small web publishers, transaction costs may exceed the value of licensing fees.

The model used for online advertising isn’t easily transferable to AI licensing. Nor does collective licensing, which replaces individual pricing with single collective prices determined by intermediaries, offer a solution (Geiger and Iaia, 2024). Shifting the problem from the licensee to an intermediary copyright management organisation does not solve the issue of tracking and assigning value to content use.

Setting the overall collective license price for hundreds of AI models would also result in fragmentation of the EU AI regulatory landscape, as national copyright management organisations in EU countries would propose their own rules and pricing. Spain and Germany, for example, have already tried. In an ideal world with a global copyright authority, with access to all AI training data inputs, model outputs and revenues, an AI model might solve this vast computational problem – assuming an agreement on how much AI revenue should be redistributed to training data producers (Martens, 2024). High transaction costs exist precisely because no such copyright authority exists, and they provide a justification for granting copyright exceptions (Posner, 2004). This could have been applied to AI training data.

EU policymakers found another way out of this conundrum. The CoP guidelines for reporting training data require only a summary list of the most relevant online domains from which training data was scraped. Small online publishers, for which transaction costs are more likely to exceed licensing fees, can be omitted. This twist is in line with the European Commission’s goal to “improve the copyright framework to address new challenges”. But it also creates new challenges.

It may result in biased training datasets – especially penalising for smaller EU language and cultural communities. Leading AI models are already biased in favour of large English-language communities and lack cultural diversity (Zhang et al, 2025). These CoP copyright provisions undermine the AI Act’s aim to reduce bias in AI models. 

Stretching the CoP beyond copyright law

However, this approach still leaves some issues unresolved. The CoP defines AI training data very broadly as all data used for pre-training, fine-tuning and reinforcement learning, regardless of whether the data is protected by intellectual property rights. This includes personal data protected by privacy rights, synthetic data generated by the model developer and data ‘distilled’ or extracted from other AI models. Models are not protected by copyright (Henderson and Lemley, 2025). Developers appear to be increasingly secretive about the production of synthetic training data, often extracted from other AI models, because it has become an important factor in their competitiveness. Again, a vague CoP formulation rescues developers: only a brief narrative of data sources is required. That gives considerable discretion to the AI Office, which is responsible for the implementation of the EU AI Act.

The rapid evolution of AI models is diminishing the importance of copyright-protected model training data. The latest generation of ‘reasoning’ models relies more on reinforcement learning with synthetic data, often extracted from other AI models. Copyright, however, remains important at the other end of the AI model lifecycle, beyond training, when models retrieve data in real-time from external sources to respond to user queries. However, this data is not covered because copyright provisions in the AI Act and CoP are limited to data used for AI training, not data collected after training.

Unequal treatment of training and post-training data led to another CoP provision that stretches beyond EU copyright law. The CoP instructs AI model developers to ensure that compliance with data mining opt-outs does not negatively affect the findability via search engines of these opted-out contents (Kim et al, 2025). In other words, data that is out of bounds for AI bots and models should still be collectable by search bots operated by the same company, to maintain the flow of engine traffic, online sales and advertising revenue to publishers’ websites.
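As a minimal sketch of that asymmetry, the snippet below uses Python’s standard-library robots.txt parser with a hypothetical publisher policy. The robots.txt content and URL are invented for illustration, with GPTBot and Googlebot standing in for an AI-training crawler and a search crawler respectively.

```python
from urllib import robotparser

# Hypothetical robots.txt: the publisher opts its pages out of AI-training
# crawls while keeping them crawlable for search indexing.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Googlebot
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT)

url = "https://example.org/some-article"
print(rp.can_fetch("GPTBot", url))     # False: the AI-training bot is opted out
print(rp.can_fetch("Googlebot", url))  # True: the search bot may still crawl and index
```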

This is intended to help web publishers push back against the fall in traffic to their sites induced by AI answer engines. However, it discriminates between machine learning and human learning. Machines will be able to retrieve less information than human users of search engines. This reduces the quality of AI answers and forces human users to invest more time and cognitive effort in constructing their own replies by clicking and reading relevant pages in search engines, rather than obtaining ready-made answers from AI models. That keeps human learning costs artificially high, compared to what they could be with a level playing field in data access between search and answer engines. Ultimately, this would have the consequence of reducing the efficiency of human learning and slowing down innovation in society.

Rapid convergence between search engines and AI answer technologies may soon make this CoP provision obsolete. Data collected by search bots and AI bots will converge on the same page. Google, Microsoft and OpenAI already offer search and answer services jointly. Web publishers and e-commerce platforms realise that the shift from search to answer is inevitable in the AI age, especially with the fast rise in AI agents, and are looking for other ways to generate revenue.

The EU’s AI copyright regime can only stand if AI models trained under more liberal regimes in other jurisdictions can be kept out of the EU market. For that purpose, the AI Act includes an extra-territoriality clause, a very contentious provision because copyright law is essentially territorial (Quintais, 2025).

Countries with less-restrictive copyright regimes tend to do better in AI innovation (Peukert, 2025). Japan, for example, applies copyright to use of media data ‘for enjoyment’ by consumers, but allows a copyright exception for intermediate use for AI training. In US copyright law, transformative use of media content for purposes other than originally intended may constitute an acceptable exception to copyright protection. US courts seem to be gradually moving in that direction for AI training data (Martens, 2025), and President Trump certainly supports the trend.

In July, the US administration published its AI Action Plan (The White House, 2025). It recognises that US-China geopolitics and America’s wish to achieve global dominance is the main driver of the AI race. It wants to remove all barriers, including regulatory barriers, that stand in the way of achieving that goal. Should US courts accept AI models as transformative use of training data and grant an exception to copyright protection for these data, the EU would need to align with the US on copyright (the alternative in extremis would be for the EU to drop out of AI altogether). Developers would not re-train models with data acceptable in the EU market, reinforcing a situation in which hardly any leading AI models have been trained in the EU so far. 

Attempts to change EU copyright law may also backfire, potentially leading to even more detailed transparency requirements for all AI training inputs and a presumption in the absence of full transparency that relevant works have been used for AI training purposes, which could trigger infringement procedures.

Policy conclusions

Subtle weakening of copyright enforcement in the CoP, limiting it to the most relevant data used for AI training, has enabled most major AI developers to sign up to the CoP, except Meta and xAI. Signatories will have been comforted by vague formulations that leave considerable margin for interpretation by the AI Office. AI regulators are caught between EU copyright law and AI regime competition with other countries. Attempts to reform that law may take many years and might backfire. Muddling through may be the only feasible option in the short run.

A more satisfactory policy would require a debate on the role of AI in enhancing learning, research and innovation (Senftleben et al, 2025), and the conditions under which it can access data to feed that process. Copyright is the wrong starting point for that debate. It was originally designed to promote creativity and innovation. But in the AI era it has become a reactionary force that diminishes innovation potential, controlled by media industries that represent less than 4 percent of GDP. Staying on top in the global AI competition requires an efficient pro-innovation regime.

A better solution would be for copyright law, or at least its application to AI models, to take some inspiration from pro-innovation features of patent law. The innovative content of patents is publicly available and anyone can learn from it and build new innovation around or on top of it (Murray and Stern, 2007). But only the patent holder has the right to commercial reproduction of the original invention. That accelerates the spread and accumulation of knowledge and still protects original innovators.

Applying this approach to data for AI model training, and to post-training data retrieval, would permit AI models and users to learn from all legally accessible content and data. This could be achieved by widening the interpretation and application of the copyright protection exemption for digital data processing in Art 4§3 of the EU Copyright in the Digital Single Market Directive (Directive (EU) 2019/790). How close AI model outputs may come to the original is an AI model output question, not a data-inputs question. Moving the policy debate to permissible AI outputs would at least clear the way for unrestricted learning and transformative use on the inputs side.

References

Geiger, C. and V. Iaia (2024) ‘The Forgotten Creator: Towards a Statutory Remuneration Right for Machine Learning of Generative AI’, Computer Law & Security Review 52: 105925, available at https://doi.org/10.1016/j.clsr.2023.105925

Henderson, P. and M. Lemley (2025) ‘The Mirage of Artificial Intelligence Terms of Use Restrictions’, Princeton University Program in Law & Public Affairs Research Paper 2025-04, available at https://dx.doi.org/10.2139/ssrn.5049562

Kaplan, J., S. McCandlish, T. Henighan, T.B. Brown, B. Chess … D. Amodei (2020) ‘Scaling Laws for Neural Language Models’, mimeo, available at https://arxiv.org/abs/2001.08361

Kim, T., K. Bock, C. Luo, A. Liswood and E. Wenger (2025) ‘Scrapers selectively respect robots.txt directives: evidence from a large-scale empirical study’, mimeo, available at https://arxiv.org/abs/2505.21733

Longpre, S., R. Mahari, A. Lee, C. Lund, H. Oderinwale, W. Brannon and S. Pentland (2024) ‘Consent in Crisis: The Rapid Decline of the AI Data Commons’, mimeo, available at https://arxiv.org/abs/2407.14933

Martens, B. (2024) ‘Economic arguments in favour of reducing copyright protection for generative AI inputs and outputs’, Working Paper 09/2024, Bruegel, available at https://www.bruegel.org/system/files/2024-04/WP%2009%20040424%20Copyright%20final_0.pdf

Martens, B. (2025) ‘The EU’s false sense of isolationism in AI and copyright’, Kluwer Copyright Blog, 29 May, available at https://legalblogs.wolterskluwer.com/copyright-blog/the-eus-false-sense-of-isolationism-in-ai-and-copyright/

Murray, F. and S. Stern (2007) ‘Do formal intellectual property rights hinder the free flow of scientific knowledge? An empirical test of the anti-commons hypothesis’, Journal of Economic Behavior & Organization 63(4): 648–687, available at https://doi.org/10.1016/j.jebo.2006.05.017

Peukert, C. (2025) ‘Copyright and the Dynamics of Innovation in Artificial Intelligence’, Proceedings of the 58th Hawaii International Conference on System Sciences, available at https://scholarspace.manoa.hawaii.edu/10.24251/HICSS.2025.538

Posner, R. (2004) ‘Transaction costs and antitrust concerns in the licensing of intellectual property’, John Marshall Review of Intellectual Property Law 325, available at https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=2876&context=journal_articles

Quintais, J. (2025) ‘Copyright, the AI Act and Extraterritoriality’, Policy Brief, The Lisbon Council, available at https://dx.doi.org/10.2139/ssrn.5316132

Senftleben, M., K. Szkalej, C. Sganga and T. Margoni (2025) ‘Towards a European Research Freedom Act: A Reform Agenda for Research Exceptions in the EU Copyright Acquis’, Institute for Information Law (IViR), University of Amsterdam, available at https://dx.doi.org/10.2139/ssrn.5130069

The White House (2025) Winning the Race, America’s AI Action Plan, available at https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf

Zhang, L., S. Milli, K. Jusko, J. Smith, B. Amos, W. Bouaziz … M. Nickel (2025) ‘Cultivating Pluralism In Algorithmic Monoculture: The Community Alignment Dataset’, mimeo, available at https://arxiv.org/abs/2507.09650


