
Tools & Platforms

New AI guidance for teachers in Mass – NBC Boston



Artificial intelligence in classrooms is no longer a distant prospect, and Massachusetts education officials on Monday released statewide guidance urging schools to use the technology thoughtfully, with an emphasis on equity, transparency, academic integrity and human oversight.

“AI already surrounds young people. It is baked into the devices and apps they use, and is increasingly used in nearly every system they will encounter in their lives, from health care to banking,” the Department of Elementary and Secondary Education’s new AI Literacy Module for Educators says. “Knowledge of how these systems operate—and how they may serve or undermine individuals’ and society’s goals—helps bridge classroom learning with the decisions they will face outside school.”

The Department of Elementary and Secondary Education released the learning module for educators, as well as a new Generative AI Policy Guidance document, on Monday ahead of the 2025-2026 school year, a formal attempt to set parameters around a technology that has already infiltrated education.

Both were developed in response to recommendations from a statewide AI Task Force and are meant to give schools a consistent framework for deciding when, how and why to use AI in ways that are safe, ethical and instructionally meaningful, according to a DESE spokesperson.


The department stressed that the guidance is “not to promote or discourage the use of AI. Instead, it offers essential guidance to help educators think critically about AI — and to decide if, when, and how it might fit into their professional practice.”

The learning module for educators itself notes that it was written with the help of generative AI.

The first draft was intentionally written without AI. A disclosure says “the authors wanted this resource to reflect the best thinking of experts from DESE’s AI task force, from DESE, and from other educators who supported this work. When AI models create first drafts, we may unconsciously ‘anchor’ on AI’s outputs and limit our own critical thinking and creativity; for this resource about AI, that was a possibility the authors wanted to avoid.” However, the close-to-final draft was entered into a large language model like ChatGPT-4o or Claude Sonnet 4 “to check that the text was accessible and jargon-free,” it says.

In Massachusetts classrooms, AI use has already started to spread. Teachers are experimenting with ChatGPT and other tools to generate rubrics, lesson plans, and instructional materials, and students are using it to draft essays, brainstorm ideas, or translate text for multilingual learners. Beyond teaching, districts are also using AI for scheduling, resource allocation and adaptive assessments.


But the state’s new resources caution that AI is far from a neutral tool, and questions swirl around whether AI can be used to enhance learning, or short-cut it.

“Because AI is designed to mimic patterns, not to ‘tell the truth,’ it can produce responses that are grammatically correct and that sound convincing, but are factually wrong or contrary to humans’ understanding of reality,” the guidance says.

In what it calls “AI fictions,” the department warns against over-reliance on systems that can fabricate information, reinforce user assumptions through “sycophancy,” and create what MIT researchers have described as “cognitive debt,” where people become anchored to machine-generated drafts and lose the ability to develop their own ideas.

The guidance urges schools to prioritize five guiding values when adopting AI tools: data privacy and security, transparency and accountability, bias awareness and mitigation, human oversight and educator judgment, and academic integrity.

On privacy, the department recommends that districts only approve AI tools vetted through a formal data privacy agreement process and teach students how their data is used when they interact with such systems. For transparency, schools are encouraged to inform parents about classroom AI use, maintain public lists of approved tools, and describe how each is used.

Bias is another central concern. The guidance notes that generative AI tools can carry harmful built-in biases because they are trained on human-generated data, and it encourages teachers and students to examine how AI responses may vary.

“When AI systems go unexamined, they can inadvertently reinforce historical patterns of exclusion, misrepresentation, or injustice,” the department wrote.


Officials warn that predictive analytics forecasting students' future outcomes could incorrectly flag them for academic intervention based on biased AI interpretations of their data.

“Automated grading tools may penalize linguistic differences. Hiring platforms might down-rank candidates whose experiences or even names differ from dominant norms. At the same time, students across the Commonwealth face real disparities in access to high-speed internet, up-to-date devices, and inclusive learning environments,” the guidance says.

The document also places responsibility on educators to oversee and adjust AI outputs. For example, teachers might use AI to draft a personalized reading plan but still adapt it to reflect a student’s individual interests, such as sports or graphic novels.

For students, the state is moving away from a tone of outright prohibition of AI, and towards one of disclosure for the sake of academic integrity.

The documents suggest that schools could come up with policies for students to include an “AI Used” section in their papers, clarifying how and when they used tools, while teachers teach the distinction between AI-assisted brainstorming and AI-written content.

“Schools teach and encourage thoughtful integration of AI rather than penalizing use outright… AI is used in ways that reinforce learning, not short-circuit it. Clear expectations guide when and how students use AI tools, with an emphasis on originality, transparency, and reflection,” it says.

Beyond classroom rules, the guidance emphasizes "AI literacy," meaning not only technical knowledge but also the ability to understand and evaluate the responsible use of these tools, as an important job and civic skill.

“Students need to be empowered not just as users, but as informed, critical thinkers who understand how AI works, how it can mislead, and how to assess its impacts,” the guidance says.

That literacy extends to the personal and environmental costs of technology. Students, the department suggests, should reflect on their digital footprints and data permanence while also considering environmental impacts of AI like energy use and e-waste.

The new resources emphasize that “teaching with AI is not about replacing educators—it’s about empowering them to facilitate rich, human-centered learning experiences in AI-enhanced environments.”

The classroom guidance arrives as Gov. Maura Healey has taken a prominent role in shaping Massachusetts’ AI landscape. Last year she launched the state’s AI Hub, calling it a bid to make Massachusetts a leader in both developing and regulating artificial intelligence. Healey has promoted an all-in approach to integrating AI across sectors, highlighting its potential for economic development.

Education officials positioned their new resources as part of that broader statewide strategy.

“Over the coming years, schools will play a critical role in supporting students who will be graduating into this ecosystem by providing equitable opportunities for them to learn about the safe and effective use of AI,” the guidance says.

The documents acknowledge that AI is already embedded in many of the tools students and teachers use daily. The challenge, they suggest, is not whether schools will use AI but how they will shape its role.

The release also comes against the backdrop of a push on Beacon Hill to limit technology in classrooms.

The Senate this summer approved a bill that would prohibit student cellphone use in schools starting in the 2026-2027 academic year, reflecting growing concern that constant device access hampers focus and learning. Lawmakers backing the measure have likened cellphones in classrooms to “electronic cocaine” and “a youth behavioral health crisis on steroids.”

The House has not said when it plans to take up the measure, or even when representatives will return for serious lawmaking, a timetable that now appears likely to fall after the new school year begins. That uncertainty leaves schools in a period of flux, weighing how to integrate emerging AI tools even as lawmakers consider pulling back on other forms of student technology use.




Tools & Platforms

China’s top social media platforms take steps to comply with new AI content labeling rules



China’s top social media platforms, including ByteDance Ltd.’s Chinese TikTok counterpart Douyin and Tencent Holdings’ WeChat, rolled out new features today to try to comply with a new law that mandates that all artificial intelligence-generated content be clearly labeled as such.

The new content labeling rules mandate that all AI-generated content posted on social media be tagged with explicit markings visible to users. The rules apply to AI-generated text, images, videos and audio, and also require that implicit identifiers, such as digital watermarks, be embedded in the content’s metadata.
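To make the idea of an implicit identifier concrete, the sketch below embeds and reads back a machine-readable label in a PNG file's metadata using Python and the Pillow library. It is a minimal illustration of the general technique, assuming a made-up "AIGC" field name and JSON payload; it does not reproduce the specific identifier format the Chinese regulations prescribe.

```python
# Minimal sketch of an "implicit identifier" embedded in image metadata.
# Illustrative only: the "AIGC" key and the JSON payload are assumptions
# for demonstration, not the format mandated by the CAC rules.
import json
from PIL import Image, PngImagePlugin


def tag_ai_generated(src_path: str, dst_path: str, tool: str) -> None:
    """Copy an image, adding a text chunk that marks it as AI-generated."""
    img = Image.open(src_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("AIGC", json.dumps({"ai_generated": True, "tool": tool}))
    img.save(dst_path, "PNG", pnginfo=meta)


def read_ai_label(path: str) -> dict | None:
    """Return the embedded label if present, else None."""
    img = Image.open(path)
    raw = getattr(img, "text", {}).get("AIGC")  # PNG text chunks
    return json.loads(raw) if raw else None


if __name__ == "__main__":
    tag_ai_generated("input.png", "labeled.png", tool="example-model")
    print(read_ai_label("labeled.png"))  # {'ai_generated': True, 'tool': 'example-model'}
```

In practice, platforms pair metadata checks of this kind with the visible on-screen labels the rules also require, giving both a human-readable and a machine-readable layer.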

The law, which was first announced in March by the Cyberspace Administration of China, reflects Beijing’s increased scrutiny of AI at a time when concerns are rising about misinformation, online fraud and copyright infringement.

According to a report in the South China Morning Post, the law comes amid a broader push by Chinese authorities to increase oversight of AI, as illustrated by the CAC’s 2025 Qinglang campaign, which aims to clean up the Chinese language internet.

WeChat, one of the most popular messaging platforms in China, which boasts more than 1.4 billion monthly active users globally, has said that all creators using its platform must voluntarily declare any AI-generated content they publish. It’s also reminding users to “exercise their own judgement” for any content that has not been flagged as AI generated.

In a post today, WeChat said it “strictly prohibits” any attempts to delete, tamper with, forge or conceal AI labels added by its own automated tools, which are designed to pick up any AI-generated content that’s not flagged by users who upload it. It also reminded users against using AI to spread false information or for any other “illegal activities.”

Meanwhile, Douyin, which has around 766 million monthly active users, said in a post today that it is encouraging users to add clear labels to every AI-generated video they upload to its platform. It will also attempt to identify AI-generated content that users fail to flag by checking the source information in its metadata.

Several other popular social media platforms made similar announcements. For instance, the microblogging site Weibo, often known as China’s Twitter, said on Friday it’s adding tools for users to tag their own content, as well as a button for users to report “unlabeled AI content” posted by others.

RedNote, the e-commerce-based social media platform, issued its own statement on Friday, saying that it reserves the right to add explicit and implicit identifiers to any unidentified AI-generated content it detects on its platform.

Many of China’s best known AI tools are also moving to comply with the new law. For instance, Tencent’s AI chatbot Yuanbao said on Sunday it has created a new labeling system for any content it generates on behalf of users, adding explicit and implicit tags to text, videos and images. In its statement, it also advised users that they should not attempt to remove the labels it automatically adds to the content it creates.

When the CAC announced the law earlier this year, it said its main objectives were to implement robust AI content monitoring, enforce mandatory labeling and apply penalties to anyone who disseminates misinformation through AI or uses the technology to manipulate public opinion. It also pledged to crack down on deceptive marketing that uses AI, and strengthen online protections for underage users.

The European Union is set to implement its own AI content labeling requirements in August 2026 as part of the EU AI Act, which mandates that any content “significantly generated” by AI must be labeled to ensure transparency. The U.S. has not yet mandated AI content labels, but a number of social media companies, such as Meta Platforms Inc., are implementing their own policies for tagging AI-generated media.







Tools & Platforms

Surge in Alibaba's Volume Amid Tech Shifts and AI Investments



1. Nvidia (Nasdaq: NVDA)
Nvidia dropped 3.32%, with a trading volume of 42.33B. UAE AI company G42 is seeking to diversify its chip supply beyond Nvidia, including negotiations with tech giants such as Amazon AWS, Google, Meta, Microsoft, and xAI for its planned AI park. Google is reportedly leading in these discussions.

2. Tesla (Nasdaq: TSLA)
Tesla dropped 3.50%, with a trading volume of 27.32B. Tesla CEO Elon Musk says that 80% of Tesla's value will depend on the Optimus robot. Despite challenges in Europe, including executive resistance and competition, Tesla lowered Model 3 prices in China, where the long-range version debuted with a price cut.

3. Alibaba Group Holding Limited (NYSE: BABA)
Alibaba Group Holding Limited surged 12.90%, with a trading volume of 10.94B. Alibaba plans to invest more than 380 billion yuan over the next three years to boost its computing power business, a move expected to shape domestic AI infrastructure. Its Q1 FY2026 financial report showed 10% revenue growth and a 76% increase in net profit, exceeding expectations.

4. Microsoft (Nasdaq: MSFT)
Microsoft dipped 0.58%, with a trading volume of 10.63B. UAE AI company G42 is diversifying its chip supplies to reduce dependence on Nvidia, engaging with tech giants including Amazon AWS, Google, Meta, Microsoft, and Elon Musk's xAI for a planned AI park, with Google's negotiations the most advanced.

5. Apple (Nasdaq: AAPL)
Apple dipped 0.18%, with a trading volume of 9.16B. Apple is expanding its retail footprint in India with a new store, Apple Hebbal, set to open in Bangalore on September 2, following the openings of Apple BKC in Mumbai and Apple Saket in Delhi. Apple also plans to remove physical SIM card slots in more countries for the iPhone 17 series.

6. Alphabet (Nasdaq: GOOGL)
Alphabet gained 0.60%, with a trading volume of 8.44B. UAE AI company G42 is seeking to diversify its chip suppliers to reduce reliance on Nvidia and is negotiating with major tech companies including Amazon AWS, Google, Meta, Microsoft, and Elon Musk's xAI, with Google likely to sign a computing power procurement deal soon.

7. Palantir Technologies (NYSE: PLTR)
Palantir Technologies dipped 0.89%, with a trading volume of 7.27B. South Korean retail investors showed significant interest in Palantir, with substantial net purchases over the past week.

8. Meta Platforms (Nasdaq: META)
Meta Platforms dipped 1.65%, with a trading volume of 6.70B. Meta and Scale AI's partnership faced challenges, as the major investment has led to strained relations and data quality concerns. Meta also plans to release a smart glasses SDK, diverging from industry trends by opting for LCoS over Micro LED technology.

9. Broadcom (Nasdaq: AVGO)
Broadcom dropped 3.65%, with a trading volume of 6.42B. Broadcom is expected to report a 21% revenue increase to $15.82 billion for Q3, with EPS projected at $1.66. Oppenheimer reaffirmed its "outperform" rating and raised its target price to $325. The company's AI business could exceed $5 billion in revenue.

10. Marvell Technology (Nasdaq: MRVL)
Marvell Technology plunged 18.60%, with a trading volume of 6.19B.





Tools & Platforms

Americans Embrace AI Tech In Their Cars But Some Features Drive Them Crazy



A new J.D. Power study reveals which AI features drivers actually love and which ones are still frustratingly confusing


by Brad Anderson



  • Owners are proving particularly receptive to smart climate control systems.
  • Genesis took top honors for innovation for the fifth consecutive year.
  • There is also growing demand from buyers for in-car payment systems.

Artificial intelligence has been steadily weaving its way into everyday life, from the phones in our pockets to the services we rely on daily. The auto industry has been no exception, and AI-driven features are now shaping how people interact with their cars.

A new study from J.D. Power has found that while some of these features are being well-received by consumers, there are many others that need work before they actually start adding to the ownership experience.


As part of an expansion of its annual U.S. Tech Experience Index (TXI) Study, J.D. Power looked at seven AI-based technologies that should, in theory, enhance the driving experience. Among them, one of the clear successes is smart climate control, which automatically manages heating, ventilation, and air conditioning to balance comfort and efficiency.

Smarter Comfort in Action

The study found that owners using these systems are now reporting 6.3 fewer problems per 100 vehicles (PP100) than before, a meaningful improvement. These systems also provide a much-needed workaround for the growing number of cars that have moved climate settings into touchscreen menus instead of physical buttons. J.D. Power’s broader studies back this up, noting that smart climate controls are now boosting both vehicle quality scores and customer satisfaction overall.
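For readers unfamiliar with the metric, PP100 simply normalizes reported problems to a per-100-vehicle rate so that results are comparable across different sample sizes. The snippet below is a minimal illustration of that arithmetic; the survey counts are invented, chosen only so the gap matches the 6.3 PP100 improvement cited above, and they are not J.D. Power data.

```python
# Minimal illustration of the PP100 (problems per 100 vehicles) metric.
def pp100(total_problems: int, vehicles_surveyed: int) -> float:
    """Problems reported per 100 vehicles."""
    return total_problems / vehicles_surveyed * 100

# Invented counts for illustration only (not J.D. Power data).
before = pp100(total_problems=412, vehicles_surveyed=2000)  # 20.6 PP100
after = pp100(total_problems=286, vehicles_surveyed=2000)   # 14.3 PP100
print(f"Improvement: {before - after:.1f} fewer problems per 100 vehicles")
```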

Other AI-based systems are also showing promise, such as smart ignition and driver preference modes. In-vehicle shopping and payment systems also drew attention, with 62 percent of owners expressing interest. So far, the most common uses are paying for fuel, tolls, parking, or EV charging, but past designs have struggled with clunky menus and limited apps.

According to the study, the next generation could succeed if automakers focus on simple, quick purchases tied directly to the driving experience.



Blind spot cameras stand out as one of the most appreciated technologies, with 93 percent of drivers saying they use them regularly and 74 percent wanting the feature in their next vehicle. Models equipped with blind spot cameras also tend to sell faster than those without, underlining just how valuable the technology has become.

Features That Miss the Mark

By comparison, several other AI features could be improved. For example, J.D. Power concluded that car wash modes, which are becoming increasingly prevalent across the market, have plenty of room for improvement. These modes automatically prepare a vehicle to go through a car wash, but the study found they are often buried within the infotainment system, and 38 percent of owners say they need better instructions on how to use them.

Similarly, recognition technologies remain a sticking point, posting the highest problem rates in the study. Biometric authentication alone averaged more than 29 issues per 100 vehicles, while touchless or hidden controls and direct driver monitoring each saw more than 19.

Which Brands Are The Best For Tech?

The study also compared automakers on their overall use of technology. Genesis once again led the pack, taking the top spot for the fifth year in a row, with Cadillac and Lincoln following behind.

The premium segment’s average score was lifted to 671 with Tesla and Rivian included, but both were excluded from the rankings since they did not meet the study’s award criteria. Even so, Tesla posted a standout score of 873 and Rivian followed with 730, according to J.D. Power.



In the mass-market category, Hyundai claimed the highest score for innovation, followed by Kia and, perhaps more surprisingly, Mitsubishi, which ranked ahead of GMC, MINI, and Toyota.

At the other end of the spectrum, Stellantis brands such as Jeep, Ram, and Chrysler landed at the bottom, while Jaguar held the lowest position among premium marques. As for Tesla, despite its standout score of 873, J.D. Power left it out of the official rankings because it did not meet the study's award criteria.









