Tools & Platforms
What Happens When Teachers Run an AI Product Rollout?
Getting involved in his school district’s discussions about AI was a no-brainer for Ricardo Vela.
The veteran middle school history teacher doesn’t fit the common stereotypes of an educator who is enthusiastic about new technology — he’s not an early-career teacher fresh out of college, and he doesn’t teach coding full time or serve as the district’s top tech leader.
But that’s exactly why Vela is playing an important role in the rollout of generative AI in the White Plains City Schools, a 7,000-student district in New York.
The educator, who also helps other teachers integrate tech tools into their classrooms as a “computer lead teacher,” brings experience to the implementation process, as well as curiosity.
For teachers in school districts that jumped in with both feet to explore generative artificial intelligence, this year’s national ISTE+ASCD conference in San Antonio — where AI products were everywhere — marked another point in the evolution of the technology in schools.
About This Insider
Ricardo Vela is an 8th-grade social studies teacher at Highlands Middle School in White Plains, New York, where he has taught for nearly two decades. He specializes in supporting multilingual learners. As a computer lead teacher, Ricardo also serves as a bridge between district technology initiatives and classroom practice.
Early adopters’ experiences are creating a blueprint for what implementation will look like in schools and districts as more systems purchase AI-powered products and tools. Their observations also offer insight into the common hurdles educators will face in bringing a fast-adapting technology into the classroom.
EdWeek Market Brief Staff Writer Emma Kate Fittes sat down with Vela during the ISTE+ASCD event last week to talk about how his district is using AI, what their implementation of AI tools looks like, and how he’s helping the system navigate hurdles — including pushback from some educators.
The following has been edited for length and clarity.
How did you become involved in your district’s discussions about AI tools?
I’m what’s called a “computer lead teacher.” I help teachers integrate technology into their classrooms, and [I’m] also a member of the district technology team.
The year before [last], we started having conversations related to AI. And we started looking at what tools we wanted to implement in our district.
Members of the team tried out different tools, and it really came down to two for us: MagicSchool AI and SchoolAI. We went with SchoolAI.
This year, we’ve really been building out our AI guidelines for next year.
What prompted the conversation about AI in your district — was it looking more at student-focused or teacher-focused tools?
There was an acknowledgement that kids were using AI without guidance, and they were using it to cheat — let’s be honest.
We realized that we need to get ahead of it, because it’s coming, it’s not going away. It’s not like people can stop using AI, and we would be better off as a district having guidelines and really beginning to teach kids how to use it appropriately. We didn’t shy away from it.
A lot of districts have said, “We’re not touching this yet. We’re not sure where to go with it.” We’re like, “Let’s tackle it.”
It’s better to tackle it now and be proactive about dealing with any issues that might come up.
How did you first start using AI as a teacher?
I really started just by using SchoolAI [and ChatGPT] to write my lessons — not my lesson plans, but my worksheets — and come up with activity ideas.
This year, it has just grown. I use ChatGPT and SchoolAI all the time to write my lessons, to write my units, to design activities, worksheets, and scaffolds for my multi-language learners — everything I can possibly think of. Whenever I have a problem, I ask [AI].
How did your team go about rolling AI out to other teachers and schools?
In each of the secondary schools, we picked several teachers to pilot SchoolAI. Those teachers got access to use it all this year to see if it was really going to be a good fit.
And it certainly has been a good fit. It’s been a wonderful tool to use to lighten my workload, [and] really to just bring learning alive for students.
[The pilot] hasn’t just been a few tech-savvy teachers … but across multiple disciplines: me in social studies, a science teacher, an [English/language arts] teacher.
Have you faced any hesitation or pushback from other teachers or administrators?
My principal has been very open to training teachers in using AI. Teachers are, for the most part, pretty receptive towards it. There are a couple of [people who are wary of new tech], but there’s always going to be [those folks] in every school.
We have a lot of young staff. Because of that, we have a lovely openness and sort of willingness to explore this and see how it can be useful.
What are the next steps of the rollout of SchoolAI at your district?
The rollout is going to be slow. When we first became a 1-to-1 school and everybody got iPads, teachers became totally overwhelmed.
“Here’s this app and this app and this app and this app” — it was way too much.
We made a strategic decision between the principals and the team to do [the rollout] piecemeal. The first group of people to have [full] access to SchoolAI are going to be the ELA and social studies teachers, because that seems to be where it lends itself [as being] the easiest to use.
Then, the second half of the school year, we’re going to bring in the other disciplines.
What about social studies and English/language arts lends itself well to the use of AI tools?
We can still be language-based. There’s a lot of speaking and writing that goes into it, and a lot of creativity. It’s a little harder with math, and math teachers are a little more hesitant. They have a hard time envisioning how [to] do linear formulas on a text-based application.
As a teacher, why is it important for you to be involved in this decision-making process for your district?
I’m extraordinarily lucky to work at White Plains. The district committee [that reviews technology consists of] teachers, IT, and teaching assistants who might be computer specialists.
It’s really all teachers, and then the head of instructional educational technology.
What we decide as a team is what [the top ed tech administrator] then brings back to the board and says, “This is what we’re going to do.”
Can you give me an example of how you use AI in your classroom now?
I created a bot to help students write an essay. I really created five bots to help them with the different parts of the essay, and it was really cool.
I had one that would help you write the introduction. It wouldn’t write it for you, it just asks you guiding questions to get you thinking about the essay topic and give you help that way.
Because what happens when you start [a class on] writing an essay is that, if you have 25 kids, 12 of them don’t know how to start. I created this because I can’t help 12 students at once. If I’m trying to help one, the other 11 kids are doing nothing.
With this, the bot will help you if you get stuck, and then you call me over. What was amazing about it is that I was able to sit back at my half-moon desk and have one-on-one conferences with each kid, and not just talking about [their essay], but [asking] “How are you feeling, and how are you feeling about this process? How’s your day going?”
In that way, it really allows me to build relationships, because I have an assistant that never gets tired, who can help everybody, allowing me to connect with students.
What kind of response did you get from students?
As I was meeting with those kids one-on-one, I asked them, “Are you using it? Is it helpful?” [and] they’re like, “Oh, yeah, it was helpful.”
I presented at our regional [meeting of school districts] on some of the stuff that I’ve been doing with AI, and specifically with SchoolAI, and we brought students.
Afterwards, the teachers, administrators, and superintendents who were there got to interview the students and ask them about their experiences.
It was all very positive. They talked not just about how they used it in my class, but how they used it in other classes to help them solve problems that they’re having or tutor them.
A major concern with AI is that it can be inaccurate or biased in its responses. How do you handle that in your classroom?
That’s something that I’ve given a lot of thought to this year.
Here’s what I’m going to do next year: My whole first week is going to include lessons on the appropriate use of AI.
The three things that I’m going to tell students are, No. 1, that it’s guessing your answer based on everything it was trained on. It looks for trends.
No. 2 is that because it’s just guessing, it can be wrong. The third is that it’s biased because it trained on certain things and not others.
That is a starting point. Then [comes] teaching them, “How do you fact-check that [output]?”
So how do you fact-check?
There are different ways you can do that. You can have [students] find other sources to fact-check what ChatGPT, for example, said. Or you can even ask ChatGPT itself, “Hey, is that really true? Is that accurate?”
Just teaching the kids those skills is really the way I think you can combat that [concern].
What other common challenges have you navigated through in exploring and adopting AI?
When I first started using AI, I was like, “Oh, here’s a tool. It’s gonna give me a worksheet and I’ll prompt it.”
And it wasn’t perfect.
That really took a shift in mindset on my part to say, “Wait a minute, when you use an AI tool, it’s not going to give you something perfect.”
It doesn’t matter what tool you use, you have to prompt and re-prompt and re-prompt, and that way you fine-tune everything.
What’s the most helpful use case for AI you’ve run into so far?
Grading. I uploaded the assignment. I uploaded my documents explaining what needs to be included in each of the body paragraphs [and] the introduction, and I uploaded the rubric.
I uploaded [the essays] into the chatbot, and I said … “Give me a grade based on the rubric, [and] give me a list of things that they did well, written in my voice, that I can copy and paste with areas where they could use improvement.”
All I had to do was skim [each] essay [and] look at the grade. Did I agree with it? [I] look at the feedback. Is it good? Then [I] tweak the grade.
I was able to grade 100 essays in two days, as opposed to two weeks. That’s huge.
AI technology drives sharp rise in synthetic abuse material
New data reveals over 1,200 AI-generated abuse videos have been discovered so far in 2025, a significant rise from just two during the same period last year.
AI is increasingly being used to produce highly realistic synthetic abuse videos, raising alarm among regulators and industry bodies.
According to new data published by the Internet Watch Foundation (IWF), 1,286 individual AI-generated abuse videos were identified during the first half of 2025, compared to just two in the same period last year.
Instead of remaining crude or glitch-filled, such material now appears so lifelike that under UK law, it must be treated like authentic recordings.
More than 1,000 of the videos fell into Category A, the most serious classification involving depictions of extreme harm. The number of webpages hosting this type of content has also risen sharply.
Derek Ray-Hill, interim chief executive of the IWF, expressed concern that longer-form synthetic abuse films are now inevitable unless binding safeguards around AI development are introduced.
Safeguarding minister Jess Phillips described the figures as ‘utterly horrific’ and confirmed two new laws are being introduced to address both those creating this material and those providing tools or guidance on how to do so.
IWF analysts say video quality has advanced significantly instead of remaining basic or easy to detect. What once involved clumsy manipulation is now alarmingly convincing, complicating efforts to monitor and remove such content.
The IWF encourages the public to report concerning material and share the exact web page where it is located.
AI on the line: How AI is transforming vision inspection technologies
In an era of tightening global regulations and rising consumer expectations, the F&B industry is increasingly turning to advanced vision inspection technologies. From spotting defects to ensuring compliance, these automated inspection tools are reshaping quality control, enhancing efficiency, reducing waste and boosting safety. FoodBev’s Siân Yates explores how cutting-edge technology is reshaping the industry, one perfectly inspected product at a time.
In the food and beverage industry, traditional quality inspection methods have always relied on human observation – an inherently inconsistent and flawed process. Automated vision inspection systems offer a transformative alternative. By detecting foreign objects, assessing product uniformity and ensuring that only items meeting strict quality criteria reach consumers, these systems significantly enhance operational efficiency and minimise errors.
“As the food industry moves towards more automation, applications are becoming increasingly complex, largely due to the variability in food products,” said Anthony Romeo, product manager at US-based vision solutions company Oxipital AI. This complexity stems from the need for automated systems to adapt to the wide range of textures, sizes and ingredients in food, making precise automation a key challenge.
Stephan Pottel, director of strategy at Zebra Technologies, highlighted the rising demand for intelligent automation: “There’s a growing need for machine vision and 3D solutions, powered by deep learning, to address more complex food and packaging use cases, along with vision-guided robotics for tasks like inspection, conveyor belt picking and sortation workflows”.
Key features of vision inspection
1. Defect detection
Vision inspection systems excel in identifying defects that may go unnoticed by human inspectors. These systems utilise high-resolution cameras and advanced algorithms to detect foreign objects, surface defects, and inconsistencies in size and shape. For example, in the fruit packing industry, vision systems can identify bruised or rotten fruit, ensuring only high-quality products are packaged and shipped.
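To make the idea concrete, here is a deliberately minimal sketch of how an inspection step might flag dark bruising by thresholding pixel brightness. The function names, thresholds and toy images are hypothetical illustrations, not any vendor’s algorithm; production systems use calibrated colour spaces, high-resolution optics and learned models.

```python
import numpy as np

def defect_ratio(image_rgb, dark_threshold=60):
    """Fraction of pixels darker than a brightness threshold.

    A crude proxy for bruising on light-coloured produce. `image_rgb`
    is an H x W x 3 uint8 array; all numbers are illustrative.
    """
    gray = image_rgb.mean(axis=2)            # simple luminance estimate
    return float((gray < dark_threshold).mean())

def passes_inspection(image_rgb, max_defect_ratio=0.02):
    """Accept the item only if the suspect-pixel fraction is small."""
    return defect_ratio(image_rgb) <= max_defect_ratio

# A uniformly bright item passes; one with a dark patch is rejected.
clean = np.full((100, 100, 3), 200, dtype=np.uint8)
bruised = clean.copy()
bruised[40:60, 40:60] = 20                   # 4% of pixels form a dark "bruise"
```

Real systems replace the brightness heuristic with trained classifiers, but the accept/reject decision structure is the same.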
2. Label verification
These technologies are increasingly used for label verification, ensuring compliance with regulatory standards. Systems can check for correct placement, legibility and adherence to labelling requirements, such as allergen information and expiration dates. In practice, vision is deployed more often for label verification than for detecting food surface defects, enhancing compliance and reducing the risk of costly recalls.
3. Product uniformity assessment
Maintaining product uniformity is crucial in the food and beverage sector. Vision inspection systems can assess visual aspects such as size, shape and colour. For instance, a snack manufacturer might use vision inspection to ensure that chips are uniformly shaped and coloured, meeting consumer expectations for quality and appearance.
4. Adaptive manufacturing
Advanced vision systems, particularly those incorporating AI and 3D technology, enable adaptive manufacturing processes. These systems can adjust production parameters in real time based on the visual data they collect. For example, in a bakery, vision systems can monitor the size and shape of pastries as they are produced, allowing adjustments to baking times or temperatures to ensure consistent quality.
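The closed-loop adjustment described in the bakery example can be sketched as a simple proportional correction: if the measured product is under target size, extend the bake time, and vice versa. The gains, limits and units below are invented for illustration and do not come from any real production line.

```python
def adjust_bake_time(current_time_s, measured_diameter_mm,
                     target_diameter_mm, gain_s_per_mm=2.0,
                     min_time_s=300, max_time_s=900):
    """Toy proportional adjustment of a bake time from a vision measurement.

    Undersized pastries lengthen the bake; oversized ones shorten it.
    The result is clamped to a safe operating window.
    """
    error = target_diameter_mm - measured_diameter_mm
    new_time = current_time_s + gain_s_per_mm * error
    return max(min_time_s, min(max_time_s, new_time))
```

Real adaptive-manufacturing controllers add filtering, dead-bands and safety interlocks, but the feedback principle is the same.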
Advancements in AI
Recent advancements in AI, automation and 3D technology have greatly enhanced machine vision systems, increasing accuracy and providing realistic visual sensing capabilities. 3D imaging technologies are being used to assess the shape and size of products, ensuring they meet packaging specifications. For instance, in the seafood industry, 3D scanners can evaluate the dimensions of fish fillets, ensuring they are cut to the correct size before packaging. This not only reduces waste but also ensures consistency in product offerings.
What is more, 3D profile sensors improve depth perception and refine quality control, making them indispensable tools in industrial automation. Oxipital AI’s Romeo highlighted the potential of these technologies: “Removing defects before they reach customers is a key first step where vision inspection technology plays a role, but there’s even more data to be leveraged”. By preventing defects from the outset, manufacturers can boost yield and reduce waste.
AI-powered vision inspection systems can also facilitate real-time monitoring of production lines, identifying potential issues before they escalate. This capability allows manufacturers to implement predictive maintenance, reducing downtime and improving overall efficiency.
AI and food safety
Consumer safety remains a top priority in the food and beverage industry. AI plays a crucial role in monitoring and analysing processes in real time, helping manufacturers navigate the complexities of compliance with legal requirements and certification pressures from major retailers.
As Zebra Technologies’ Pottel explained: “AI is ideal for food and beverage products where classification, segmentation, and object and anomaly detection are essential. It is also enhancing asset and inventory visibility, which is crucial for predicting contamination risks and maintaining high safety standards throughout the supply chain.”
“Vision technologies can help check the presentation of food products…offering a quick, repeatable and reliable way to assess the visual aspects of food products like size, shape and colour,” added Neil Gruettner, market manager at Mettler-Toledo Product Inspection.
He continued: “Deployment of this type of AI provides context to support rule-based machine learning and improve human decision-making. It also gives inspection equipment the tools to extract and interpret as much data as possible out of a product, facilitating the evolution and refinement of production processes through the continuous exposure to vast datasets.”
AI-enhanced vision systems also guide robots in handling food products, particularly those that are delicate or irregularly shaped. “AI has proved to be a great method for tackling applications with a high frequency of naturally occurring organic variability, such as food,” Oxipital AI’s Romeo explained, adding that this adaptability ensures gentle and precise handling, particularly important when sorting fresh produce or packaging baked goods.
Fortress Technology uses AI to reduce contamination risks and identify defects. The company’s commercial manager, Jodie Curry, told FoodBev: “Streamlining processes reduces the risk of contamination and ensures consistent quality. Implementing automated technology and digital tools helps identify inefficiencies and boosts responsiveness.”
The role of combination inspection systems
The integration of multiple inspection technologies into single systems is another key trend in this space. By combining technologies such as X-ray, checkweighing and vision inspection, manufacturers can obtain a comprehensive assessment of food products, ensuring higher quality control, better detection of defects and more efficient production lines. This allows for more accurate and reliable monitoring, helping to reduce waste, improve safety standards and enhance overall product quality.
For its part, Fortress offers combination systems that enable comprehensive and multi-layered inspection. The company is already leveraging its proprietary data software package, Contact 4.0, across its metal detection, X-ray and checkweighing technologies. Contact 4.0 allows processors to review and collect data, securely monitor and oversee the performance of multiple Fortress metal detectors, checkweighers or combination inspection machines connected to the same network.
Deep learning and quality control
Deep learning is revolutionising visual inspection by enabling machines to learn from data and recognise previously unseen variations of defects. As Zebra Technologies’ Pottel explained: “Deep learning machine vision excels at complex visual inspections, especially where the range of anomalies, defects and spoilage can vary, as is often the case with food.”
This technology is vital for automating inspections and ensuring quality. Deep learning optical character recognition (OCR) also improves packaging inspection by ensuring label quality, regulatory compliance and brand protection. It can verify label presence, confirm allergen accuracy and prevent mislabeling.
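The label checks described above — verifying label presence, allergen accuracy and date formatting — reduce, after the OCR step, to rule-based validation of the recognised text. The following sketch shows what such rules might look like; the function, regexes and sample label are hypothetical, and a real system would also score OCR confidence and compare against the approved artwork.

```python
import re

def verify_label(ocr_text, required_allergens):
    """Toy rule-based checks on OCR output from a label camera.

    Returns a dict of named pass/fail checks plus an overall verdict.
    """
    text = ocr_text.lower()
    checks = {
        "allergens_declared": all(a.lower() in text for a in required_allergens),
        "best_before_present": bool(
            re.search(r"best before[: ]+\d{2}/\d{2}/\d{4}", text)),
        "barcode_present": bool(re.search(r"\b\d{13}\b", text)),  # EAN-13 digits
    }
    checks["pass"] = all(checks.values())
    return checks

label = "Contains: MILK, SOYA. Best before: 01/12/2025. 5012345678900"
```

A failed check would typically trigger a line reject and an operator alert rather than silently passing the pack downstream.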
“The goal is to strengthen quality control by capturing an image and processing it against set quality control parameters,” Mettler-Toledo’s Gruettner pointed out.
Vision systems are increasingly deployed for label verification, ensuring compliance with legislative food labelling requirements. The Mettler-Toledo label inspection portfolio features Smart Camera systems (V11, V13, V15) for basic label inspections, including barcodes, alphanumeric text and label quality. For more advanced applications, the PC-based V31 and V33 systems offer a larger field of view, faster throughput and enhanced inspection capabilities.
Oxipital AI uses 3D product scans and synthetic data generation to eliminate the need for hand-labelling images. “All training is done at Oxipital AI, enabling food and beverage customers to deploy AI without needing a team of experts,” said Romeo. “Our solutions are designed for immediate impact, requiring no coding, DIY or machine-learning expertise to implement and maintain.”
Real-world applications and future prospects
According to Zebra’s Global Manufacturing Vision Study, which surveyed leaders across various manufacturing sectors, including F&B, 66% of respondents plan to implement machine vision within the next five years, while 54% expect AI to drive growth by 2029.
These figures, coupled with the expanding market for vision inspection systems, suggest that the majority of manufacturing leaders are prioritising the integration of these advanced technologies, seeing them as crucial tools for both immediate improvements and long-term growth.
This shift is partly driven by increasingly stringent government regulations, which demand more accurate labelling and packaging. Many companies are already successfully leveraging AI to enhance their operations, particularly in labelling processes.
Despite its clear advantages, the uptake of AI has been slow. The main barrier appears to be cost. While the initial integration can be expensive, AI has demonstrated significant long-term cost savings, making it a worthwhile investment over time.
Zebra’s studies have shown that the pressure to maintain quality while managing fewer resources is intensifying for manufacturers. As a result, cost remains a significant consideration when implementing AI solutions.
Fortress recommends consolidating AI systems into a single interface, which helps reduce costs in the long term. Curry told FoodBev: “The future of our food supply chain depends on advanced inspection systems that enhance food safety, reduce product waste and require minimal factory floor space”.
She continued: “Combination systems offer the benefit of space efficiency, as all sales, services, parts and technical support are handled by one provider. A single interface simplifies training, improves operational safety and drives cost savings through faster installation and reduced training time.”
As AI continues to evolve, its role in vision and inspection is set to expand. Advancements in machine learning, sensor technology and robotics will lead to even more sophisticated and efficient inspection systems, raising quality and safety standards for consumers worldwide.
AI Flow by TeleAI Recognized as a Breakthrough Framework for AI Deployment and Distribution by Omdia
SHANGHAI, July 11, 2025 /PRNewswire/ — AI Flow, the innovative framework developed by TeleAI, the Institute of Artificial Intelligence of China Telecom, has been recognized as playing a key role in the intelligent transformation of telecom infrastructure and services in the latest report by Omdia, a premier technology research and advisory firm. The report highlights AI Flow’s exceptional capabilities in addressing edge GenAI implementation challenges, showcasing its device-edge-cloud computing architecture, which optimizes both performance and efficiency, as well as its groundbreaking combination of information and communication technologies.
According to the report, AI Flow facilitates seamless intelligence flow, allowing device-level agents to overcome the limitations of a single device and achieve enhanced functionality. The same communication network can connect advanced LLMs, VLMs, and diffusion models across heterogeneous nodes. By facilitating real-time, synergistic integration and dynamic interaction among these models, the approach achieves emergent intelligence that exceeds the capabilities of any individual model.
Lian Jye Su, Chief Analyst at Omdia, remarked that AI Flow has demonstrated sophisticated approaches to facilitate efficient collaboration across device-edge-cloud tiers and to achieve emergent intelligence through connective and interactive model operations.
The unveiling of AI Flow has also drawn great attention from the AI community on global social media. AI industry observer EyeingAI said on X: “It’s a grounded, realistic take on where AI could be headed.” AI tech influencer Parul Gautam said on X that AI Flow is pushing AI boundaries and is ready to shape the future of intelligent connectivity.
Fulfill the Vision of Ubiquitous Intelligence in Future Communication Networks
AI Flow, under the leadership of Professor Xuelong Li, the CTO and Chief Scientist of China Telecom and Director of TeleAI, was introduced to address the significant challenges that hardware resource limitations and communication network constraints pose to the deployment of emerging AI applications, enhancing the scalability, responsiveness, and sustainability of real-world AI systems. It is a multidisciplinary framework designed to enable seamless transmission and emergence of intelligence across hierarchical network architectures by leveraging inter-agent connections and human-agent interactions. At its core, AI Flow emphasizes three key points:
Device-Edge-Cloud Collaboration: AI Flow leverages a unified device-edge-cloud architecture, integrating end devices, edge servers, and cloud clusters, to dynamically optimize scalability and enable low-latency inference of AI models. By developing efficient collaboration paradigms tailored for the hierarchical network architecture, the system minimizes communication bottlenecks and streamlines inference execution.
Familial Models: Familial models refer to a set of multi-scale architectures designed to address diverse tasks and resource constraints within the AI Flow framework. These models facilitate seamless knowledge transfer and collaborative intelligence across the system through their interconnected capabilities. Notably, the familial models are feature-aligned, which allows efficient information sharing without the need for additional middleware. Furthermore, through well-structured collaborative design, deploying familial models over the hierarchical network can achieve enhanced inference efficiency under constrained communication bandwidth and computational resources.
Connectivity- and Interaction-based Intelligence Emergence: AI Flow introduces a paradigm shift to facilitate collaborations among advanced AI models, e.g., LLMs, vision-language models (VLMs), and diffusion models, thereby stimulating emergent intelligence surpassing the capability of any single model. In this framework, the synergistic integration of efficient collaboration and dynamic interaction among models becomes a key boost to the capabilities of AI models.
See AI Flow’s tech articles here:
https://www.arxiv.org/abs/2506.12479
https://ieeexplore.ieee.org/document/10884554
AI Flow’s First Move: AI-Flow-Ruyi Familial Model
Notably, TeleAI open-sourced the first version of AI Flow’s familial model, AI-Flow-Ruyi-7B-Preview, on GitHub last week.
The model is designed for the next-generation device-edge-cloud model service architecture. Its core innovation lies in sharing intermediate features across models of varying scales, enabling the system to generate responses with a subset of parameters, chosen according to problem complexity, through an early-exit mechanism. Each branch can operate independently while leveraging the shared stem network to reduce computation and switch seamlessly between scales. Combined with distributed device-edge-cloud deployment, it achieves collaborative inference among the large and small models within the family, enhancing the efficiency of distributed model inference.
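The early-exit idea described above — compute a shared stem once, then return from the cheapest branch that is confident enough — can be sketched in a few lines. Everything here (the tiny random weight matrices, the confidence threshold, the branch count) is an invented toy stand-in, not TeleAI’s actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: one shared stem and three exit branches of increasing cost.
stem = rng.standard_normal((8, 8))
branches = [rng.standard_normal((8, 4)) for _ in range(3)]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_infer(x, confidence_threshold=0.9):
    """Run the shared stem once, then try exit branches in order of cost,
    returning as soon as one is confident enough (the early-exit idea)."""
    h = np.tanh(stem @ x)            # shared stem, computed once for all branches
    for depth, head in enumerate(branches):
        probs = softmax(head.T @ h)  # this branch's cheap exit head
        if probs.max() >= confidence_threshold:
            return depth, int(probs.argmax())
    # No branch was confident: fall through to the largest branch's answer.
    return len(branches) - 1, int(probs.argmax())
```

Easy inputs thus exit early and pay only for the small branch, while hard inputs escalate toward the full model — which is what makes the scheme attractive when branches are split across device, edge and cloud.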
Open-source address:
https://github.com/TeleAI-AI-Flow/AI-Flow-Ruyi
About TeleAI
TeleAI, the Institute of Artificial Intelligence of China Telecom, is a pioneering team of AI scientists and enthusiasts, working to create breakthrough AI technologies that could build up the next generation of ubiquitous intelligence and improve people’s wellbeing. Under the leadership of Professor Xuelong Li, the CTO and Chief Scientist of China Telecom, TeleAI aims to continuously expand the limits of human cognition and activities, by expediting research on AI governance, AI Flow, Intelligent Optoelectronics (with an emphasis on embodied AI), and AI Agents.
For more information:
https://www.teleai.com.cn/product/AboutTeleAI
Photo – https://mma.prnewswire.com/media/2729356/AI_Flow.jpg