Tools & Platforms

How a generic-sounding tech job will transform AI

In the wake of the ChatGPT moment, there was a rush to develop AI models that were “trained” for specific tasks. Thousands of them emerged. Many, like Meta’s popular Llama family of models, were free to use and small enough to run economically on local servers.

Companies then began “fine-tuning” popular, open-source models like Llama and DeepSeek on proprietary company data to create more personalized versions. But that technique only got companies so far, especially when we began moving to “agentic” AI models that need to take important actions on their own.

Today, there’s another paradigm shift underway. With a technique called “reinforcement learning with verified rewards,” models are taught to aim for a specific goal and then trained on simulations to find the most efficient route.

But that’s not how general-purpose large language models worked in the past. Instead of aiming for a goal, they predicted what would happen next. It’s partly why chatbots, if they start off in the wrong direction because of a poorly worded prompt or some quirk in the training data, will just continue along that wrong path forever.
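The contrast can be made concrete. Below is a minimal sketch of a verified reward, assuming a hypothetical `checker` function standing in for any programmatic verifier (a unit test, an exact-match comparison, a simulator replaying the action): the model is scored on whether it reached the goal, not on how plausible its next token was.

```python
# A minimal sketch of "reinforcement learning with verified rewards":
# instead of scoring the model on next-token prediction, score it on
# whether a programmatic checker confirms the goal was reached.

def verified_reward(model_answer: str, checker) -> float:
    """Return 1.0 if the checker verifies the answer, else 0.0."""
    return 1.0 if checker(model_answer) else 0.0

# Hypothetical verifier: an exact-match check on an arithmetic answer.
checker = lambda answer: answer.strip() == "42"

print(verified_reward("42", checker))  # 1.0: goal reached, reinforce
print(verified_reward("41", checker))  # 0.0: wrong path, no reward
```

Because the reward is zero everywhere off the goal, training against it pushes the model toward efficient routes to the target rather than locking it onto whatever path its first prediction started down.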

Ultimately, the trick will be to marry the natural language capability of large language models with a goal-oriented approach.

It’s likely this is just another stop on the way to artificial general intelligence or superintelligence. At some point, frontier models like the ones made by Google, OpenAI and Anthropic will get to a place where they can reliably do almost any rote digital task without any additional training. It’s also possible that all of the customization going on now will generate some of the data necessary to achieve AGI or ASI.

Forward-deployed engineers, the connective tissue binding AI researchers with the real world, are the point at which all of these new techniques will be carried out and tested.




Tools & Platforms

World Shipping Council Wants to Use AI to Better Cargo Safety


The World Shipping Council (WSC) plans to use artificial intelligence to bolster cargo safety. 

The organization announced Monday that it had launched a new initiative, which it calls the Cargo Safety Program, with the goal of preemptively stopping dangerous goods from making it onto ships. The WSC said it will use AI to screen and inspect cargo before it’s loaded on the ship, with the intention of pinpointing misdeclared or undeclared shipments that would be of high risk to ship operators, companies’ cargo and the vessels themselves. 

Joe Kramek, president and CEO of the WSC, said he expects the measures to decrease the number of ship fires that occur. 

“We have seen too many tragic incidents where misdeclared cargo has led to catastrophic fires, including the loss of life,” Kramek said in a statement. “The WSC Cargo Safety Program strengthens the industry’s safety net by combining shared screening technology, common inspection standards, and real-world feedback to reduce risk.”

To date, the WSC said, a variety of ocean freight carriers that account for more than 70 percent of global twenty-foot equivalent unit (TEU) capacity have already joined the initiative. That includes Hapag-Lloyd, Ocean Network Express (ONE), Maersk, CMA CGM and others. 

The screening tool, which leverages technology built by the National Cargo Bureau (NCB), “scans millions of bookings in real time using keyword searches, trade pattern recognition and AI-driven algorithms to identify potential risks,” the WSC said. If the system finds risks or anomalies, it passes that feedback to a carrier; the carrier can then perform manual inspections of the cargo as needed. 
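The WSC has not published implementation details, but the keyword-search layer it describes can be sketched roughly as follows; the keyword list, booking fields and identifiers here are illustrative, not taken from the NCB tool.

```python
# Rough sketch of the keyword-search layer of cargo screening; the
# keywords and booking fields are invented for illustration.
RISK_KEYWORDS = {"calcium hypochlorite", "charcoal", "lithium battery"}

def flag_booking(booking: dict) -> list[str]:
    """Return any risk keywords found in a booking's free-text description."""
    text = booking.get("description", "").lower()
    return sorted(kw for kw in RISK_KEYWORDS if kw in text)

booking = {"id": "BK123", "description": "Charcoal briquettes, 20 pallets"}
hits = flag_booking(booking)
if hits:
    # In the WSC program, a flagged booking goes back to the carrier,
    # which can then inspect the cargo manually before loading.
    print(f"Booking {booking['id']} flagged for: {hits}")
```

In the real system this layer runs alongside trade-pattern recognition and ML models, so a booking can be flagged even when the declared description avoids obvious keywords.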

The WSC joins other third-party logistics players—albeit primarily on land—in leveraging AI for safety. Autonomous trucks typically leverage AI and machine learning-based systems to determine the safest route for the vehicles to take; paired with sensors and computer vision, these systems can also alert the driverless vehicle to on-road hazards, including tumultuous weather conditions. 

Additionally, some logistics players have started to leverage robotics in their facilities; increasingly, physical AI helps to ensure those robots don’t collide with or otherwise endanger the human workers they spend time alongside. That’s done both through real-world learnings and through digital twin simulations, which can train robots on millions of inputs far faster than developers could do if they had to manually simulate every situation in the real world. Physical AI is becoming increasingly important because of the rise of autonomous mobile robots (AMRs). Because AMRs move freely around warehouses, factories and other facilities, they have to be able to stop abruptly when people or obstacles cross their paths.

AI-based monitoring, meant to flag hazards before injury, is also at play in many warehouses; companies like Voxel, which grabbed a Series B round in June, are able to interlink AI systems with existing security cameras and sensors to monitor employee safety. The company’s heat-mapping system uses inputs from the camera to determine high-risk zones in a facility, giving managers real-time suggestions on how to clear up hazards. The Port of Virginia uses such technology to make operations safer. 
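Voxel’s system is proprietary, but the general idea of a camera-derived heat map can be sketched: bucket detected events into grid cells and count them, so the busiest cells surface as the facility’s high-risk zones. The coordinates and cell size below are invented for illustration.

```python
# Illustrative sketch of camera-derived heat mapping: count near-miss
# events per grid cell and surface the hottest zones to a manager.
from collections import Counter

def heat_map(events, cell_size=5):
    """Bucket (x, y) event coordinates into grid cells and count them."""
    return Counter((x // cell_size, y // cell_size) for x, y in events)

events = [(2, 3), (4, 1), (3, 2), (22, 18), (1, 4)]
zones = heat_map(events)
# The most frequent cell is the highest-risk zone.
print(zones.most_common(1))  # [((0, 0), 4)]
```

A production system would feed the counts from computer-vision detections rather than a hand-written list, but the aggregation step is the same.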

The WSC did not clarify in its announcement whether it plans to use the new AI capabilities to stave off safety issues beyond fires; the trade group recently put out a report that said 11.4 percent of inspected cargo shipments have deficiencies. That could mean they have undeclared or misdeclared goods in them; incorrect or mangled packaging; structural issues or wrong documents. 

At the time, the WSC said any of those issues have the propensity to cause major safety problems, including ship fires. Just days after the WSC issued its safety warning, more than 60 containers toppled from a cargo ship at the Port of Long Beach; the ship carried cargo for retailers like Costco, Target, Walmart and smaller shops. The cause of the incident has yet to be reported by officials. 

If leveraged appropriately, AI scanning technology like the kind the WSC has introduced could help mitigate incidents beyond fires. The organization said the initiative is an extension of its interest in improving safety outcomes for cargo carriers and noted that the Cargo Safety Program will “continue to evolve, with regular updates to its technology and standards to address new and emerging risks.”

Kramek said that, by doing so, he hopes the WSC can help move the needle on safety outcomes but noted that carriers and companies also bear responsibility for protecting workers, ships and cargo. 

“By working together and using the best available tools, we can identify risks early, act quickly and prevent accidents before they happen,” Kramek said in a statement. “The Cargo Safety Program is a powerful new layer of protection, but it does not replace the fundamental obligation shippers have to declare dangerous goods accurately. That is the starting point for safety, and it is required under international law.”




Tools & Platforms

Connecticut Professors Fear Dependence, Cognitive Decline Over AI Use


(TNS) — Lloydia Anderson has listened to the common complaint among many students her age: a lack of time to get anything done.

Between some students juggling jobs and others prioritizing certain classes to carve out more free time, Anderson said she has heard other students at Central Connecticut State University talk about using ChatGPT to get assignments done.

“Some people use it for time management and some don’t really think an assignment is important,” she said, describing the apathy of some students to the AI chatbot’s increasing role in education.


Anderson said she deliberately does not use ChatGPT for fear that it would diminish her capacity to think. She is equally concerned about her job in the future being replaced by AI. Anderson, a junior at CCSU, is studying sociology and philosophy and hopes to advance to a higher level of education.

AI is changing the landscape of higher education as professors change curricula and testing methods to try to ensure students rely on their own thinking rather than AI. Several professors at CCSU who spoke with the Courant shared their fear that AI is causing a decline in cognitive ability and a dependence on technology to complete assignments. But the concern is about more than students finding a shortcut around doing the work; it is that students will no longer be able to think critically or perform the simplest of tasks without technological assistance.

At the same time, AI education experts say AI is not going anywhere and that it is the responsibility of educators to ensure that students are taught to use it ethically and responsibly — to find ways of leveraging it that promote learning instead of replacing the brain’s cognitive skills.

IDENTICAL ASSIGNMENTS, CITATION ERRORS

Teaching an online class over the summer that required students to connect course material to pop culture themes, associate professor of philosophy Audra King knew something was awry when she discovered that three students’ essays were nearly identical, using the same terminology and ideas.

King determined that the students, who did not know each other, used ChatGPT — a troubling trend that she is seeing more often these days.

“A lot of students, and I want to say faculty too, put a prompt in ChatGPT and have it spit back the answer,” said King. “It is making things harder. They already have a decreased attention span. They have lower critical thinking skills.”

And the authenticity of ChatGPT in some instances also remains a question.

Brian Matzke, digital humanities librarian at the Elihu Burritt Library, said that every couple of weeks a student will come to the research desk looking for articles on a list they have, none of which can be found because they do not exist.

Ricardo Friaz, assistant professor of philosophy, said ChatGPT is “reproducing patterns of language that includes real citations and inaccurate citations.”

“It reproduces a lot of the biases that go into it,” Friaz said. “It takes from the corpus of knowledge and is reproducing what has been said.”

Vahid Behzadan is associate professor of computer science and data science at the University of New Haven and cofounder of the Connecticut AI Alliance, a consortium of 21 universities, industrial action groups and communities across the state. ChatGPT, Behzadan said, “doesn’t just string words together: It can follow a prompt with several coherent paragraphs, shifting smoothly across subjects and styles.”

“That means it demonstrates not only a strong command of language, but also a practical grasp of commonsense and specialized knowledge,” he said.

COGNITIVE DECLINE

Several studies have emerged showing a correlation between the use of ChatGPT and cognitive decline.

An MIT study released this past June included 54 participants who were assigned to three groups: an LLM (large language model) group that used ChatGPT, a second that used the Google search engine and a third that used no tools at all. The participants were required to write an essay.

The study used electroencephalography (EEG) to record the participants’ brain activity, according to information on the study.

Results from the study measuring brain activity over four months found that “LLM users consistently underperformed at neural, linguistic, and behavioral levels,” according to the study.

“These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning,” the study stated.

Another study from researchers at the University of Pennsylvania in 2024 found that “Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT,” according to the Hechinger Report.

Behzadan said the studies are limited and research is still in early stages as use of chatbots evolves. In August, ChatGPT had 700 million weekly users, a number that has quadrupled in the past year.

He emphasized the importance of cognitive fitness, which offers long-term benefits including general health, well-being and quality of life.

Tomas Portillo, a junior at CCSU studying mathematics and philosophy, said he does not use AI in his studies. But many students, he said, rely so heavily on it that they would rather ask ChatGPT for help than a professor.

King has taught philosophy for 17 years and has seen how students approach assignments.

“I do see less students thinking abstractly,” she said, explaining that years ago students would come to class having read an assigned article and ask questions.

“I am seeing less engagement and feedback,” she said. “It is the humanity, the originality and the individuality that is completely lost and vanished when we rely on ChatGPT.”

Matzke said AI itself is not his primary concern.

“If the concept of AI technology were to emerge in a world that already valued critical thinking and humanities, it would be a great opportunity but it is amplifying a lot of existing problems.”

OVERWORKED AND EXHAUSTED

King said students today are overworked and exhausted by juggling numerous priorities.

“They don’t have time and don’t have energy,” she said. “We are in a perfect storm of capitalism. There is also an attention span issue. TikTok and social media have made it easy to rely on outside sources.”

Matzke said that students are very outcome oriented and that they are good at regurgitating facts and writing in a bullet-pointed style.

“But trying to identify the relationship between the concept and how X leads to Y is much more difficult,” he said.

Portillo said while he does not use AI in his studies, he can never seem to escape it because it’s pervasive in the algorithms on social media and in many facets of everyday life.

“I feel like people are more self-centered nowadays with all this access to technology,” he said. “It seems like social media is the medium between us that adds a layer of obfuscation.”

He also recalled how AI has changed the way classes are structured. Several professors said they have changed curriculum, requiring more in-class essays and quizzes.

“There are a lot more in-class assignments,” said Anderson.

MISUNDERSTANDING AI

Friaz, who is teaching a philosophy research and writing course to help students research and write in a world where AI claims to be better at those things than humans, said that students are often not aware that they are “offloading their thinking” by putting their essays into ChatGPT before handing them in.

“They don’t think they are using AI,” he said. “Writing is your thinking and they are not aware they are offloading their thinking. It makes it hard to talk about it.”

Friaz said the goal of his class is for “students to become strong researchers and writers by researching AI, experimenting with it and thinking critically about how it affects society.”

In today’s society, Friaz said, students feel pressure to spend time on what is most relevant to their jobs with the premise that “writing is not essential and they don’t have to do it and have to suffer through it.”

AI IS HERE TO STAY

When ChatGPT first became a mainstay at universities in 2022, many universities attempted to ban the use of AI, Behzadan said. But that backfired.

“Whether you are going to ban it or not, everyone is going to use it,” Behzadan said. “It is inevitable.”

Behzadan said universities began trying to develop guidelines for ethical use of AI, like citing AI when it’s used.

With AI not going away anytime soon, he said students need to learn how to use AI to be able to adapt to changing skills and job requirements.

“AI is here to stay,” he said. “It has become more advanced and we need to adjust and evolve our curriculum to embrace AI while also enabling our students to make beneficial and ethical use of the AI technology.”

Friaz said the key is “teaching it as a tool to aid you rather than something that thinks for you.”

King said she worries about the future and the continuing emergence of AI.

“The more students are relying on technology to think for them, the less they are going to connect with and grow those emotional connections in the real world,” she said.

©2025 Hartford Courant. Distributed by Tribune Content Agency, LLC.






Tools & Platforms

Mastercard adds agentic AI technology ahead of Black Friday | PaymentsSource


  • Key Insight: Mastercard is deepening its agentic AI capabilities. 
  • What’s at Stake: New forms of technology are changing shopping and payments.
  • Forward Look: Visa and other payment firms are aggressively investing in new forms of AI. 

An agentic artificial intelligence arms race is hitting the payments industry, with Mastercard pushing a fresh menu of products designed to rapidly scale the technology before the end of the year. 
The card network has partnered with Stripe, Google and Ant International’s payments subsidiary Antom to scale agentic payments for digital merchants and platforms. Agentic commerce refers to using agentic AI, a form of artificial intelligence that performs tasks such as shopping or payments with minimal or no human supervision. Mastercard, Visa, PayPal and other companies are pursuing agentic AI to enhance merchant services, creating competitive pressure on banks and financial services companies to develop a strategy.

“If it takes off, agentic commerce could be as incrementally beneficial as the shift from offline to online commerce,” Jefferies analysts said in a research note on Visa and Mastercard’s moves in agentic AI. “The card networks are expected to be the payment mechanism in agentic AI.”

The card networks can establish trust in the new technology by setting rules around liability/disputes and what constitutes a valid agent-initiated transaction, by registering agents and by tokenizing transactions, Jefferies said.

Mastercard’s AI tools

By the start of the holiday shopping period on November 28 (Black Friday), all Mastercard cardholders in the U.S. will be enabled for Mastercard Agent Pay, with global deployment “shortly” after. The card network also released Agent Toolkit, which enables AI assistants and agentic tools to access and interpret Mastercard’s application programming interface documentation via a Model Context Protocol (MCP) server, which connects AI applications to other technology systems. Agent Toolkit supports integration with AI platforms like Claude, Cursor, and GitHub Copilot, which Mastercard contends will make its APIs “easier to integrate” into agentic technology. MCP also supports the Agent2Agent protocol, according to Mastercard. 
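MCP itself is an open protocol built on JSON-RPC 2.0, so the shape of the messages an AI assistant exchanges with an MCP server is public even where Mastercard’s specific tools are not. A minimal illustration of the two core requests, with a hypothetical tool name and arguments that are not Mastercard’s actual API:

```python
import json

# MCP messages are JSON-RPC 2.0: a client first asks the server which
# tools it exposes ("tools/list"), then invokes one ("tools/call").
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "lookup_api_docs",  # hypothetical tool name
        "arguments": {"query": "tokenization"},
    },
}

# These payloads would be sent over the MCP transport (e.g. stdio or HTTP).
print(json.dumps(list_tools))
```

Wrapping API documentation behind a server like this is what lets assistants such as Claude or GitHub Copilot discover and use a vendor’s capabilities without hand-written integrations.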

The card brand additionally released Agent Sign-Up, which lets Agent Toolkit users identify agents and access AI-enabled Mastercard products. Other products are Insight Tokens, which enable security for agentic commerce, and agentic consulting services. 

Citi and U.S. Bank are the first announced bank partners for the expanded AI shopping tools, which will lead to new services for the banks’ Mastercard cardholders.

Mastercard has made several moves into new forms of AI over the past year. The payment network launched Agent Pay earlier in 2025. The platform expands Mastercard’s existing generative AI technology, which enhances customer service, security and onboarding, generating automated responses to customers. Agent Pay mines the card network’s data and AI tools to help shoppers curate a mix of purchases for an event, aid merchants on supply chain management or help a retailer build a marketing or sales program.

The card network has added partners such as Microsoft and IBM, which is contributing B2B technology, and payment firms like PayPal’s Braintree and Checkout.com for security. Agent Pay’s security and authentication use tokenization, a process that replaces existing account numbers with one-off numbers that make the card useless if stolen. Mastercard is also using Databricks software to train the card network’s gen AI engine to produce responses to users with less human interaction. 

Mastercard did not comment on its AI releases by deadline. 

“AI-powered payments aren’t just a trend — they’re a transformation,” said Craig Vosburg, chief services officer at Mastercard, in a release. “Payments must be native to the agentic experience. We’re building the infrastructure for a new generation of intelligent transactions, where consumers and developers can empower AI agents to act on their behalf with trust, transparency and precision.”

Agents on the way

Among other payment firms, PayPal has advanced its agentic AI strategy to focus on travel shopping and payments as an early use, arguing that travel’s combination of search, booking and checkout is a good fit for AI. 

Stripe’s strategy includes tools for agentic AI developers, and Block earlier this year launched a similar developer portal for agentic AI. Stripe and Block focus on small to medium-sized businesses that are in the early stages of deployment.

In China, Alipay last week integrated its agentic AI payment technology in Luckin Coffee, which supports payments through AI conversations. Consumers can use the Luckin Coffee app to place orders and complete checkout through natural language conversations with Luckin’s AI assistant. 

Avoiding agentic AI threatens banks. If banks and merchants do nothing, agentic commerce disintermediation will erode 8% to 13% of gross merchandise volume within two to three years, according to Crone Consulting. 

“Doing nothing is a self-liquidating strategy that directly transfers value to agentic wallets,” Richard Crone, a payments consultant, told American Banker, saying his firm estimates that banks not participating in agentic payments risk a 5% to 15% decline in new account acquisition, a 5% to 15% increase in dormant and closed accounts, and a 7% to 30% increase in natural attrition per account.

And Mastercard Agent Pay can go live without merchant acceptance, which differs from most new payment technology, Crone said, adding Mastercard’s technology can bypass traditional onboarding. “This breaks the two-sided network model where both issuers and acquirers had to opt in,” Crone said. 

AI agents control product selection and payment routing, Crone said. “If issuers don’t embed agentic payments inside their apps, they risk invisibility as agent-driven wallets divert spend to other tenders.

“That’s why I call Black Friday, just 74 days away, ‘Disintermediation Day,’” Crone said. “Mastercard is signaling: ‘Ready or not, we’re launching.’”


