
Tools & Platforms

The 3 V’s of supply chain: A conversation with Art Mesher



There are a lot of firsts associated with Mesher, who is now chairman of the board at software provider GainSystems. He’s credited with pioneering cloud networks and software-as-a-service (SaaS) models for supply chains. At Descartes, he helped develop the first multicarrier manifest system, the first shipper-carrier electronic data exchange platform, and the first on-demand logistics networks.

But what people seem to remember most about Mesher is a PowerPoint presentation that he gave while at Gartner in the late 1990s. This presentation laid out what he called the “3 V’s of Supply Chain Framework” and identified visibility, velocity, and variability as key principles that supply chains needed to address in order to achieve a competitive advantage.

Two years ago, in an effort to inspire companies to continue focusing on those principles, Mesher, the Council of Supply Chain Management Professionals (CSCMP), and Agile Business Media & Events launched an award competition that takes place at CSCMP’s annual EDGE Conference. This year, the “3 V’s Business Innovation Award Contest” will feature three finalists presenting their companies’ most impactful application of Mesher’s framework in front of a panel of judges. (Information on the competition, including guidelines for submissions, can be found on the CSCMP EDGE website.)

Mesher took some time out to discuss the 3 V’s and the award program with Supply Chain Xchange Executive Editor Susan Lacefield.

Q: Can you tell me the story of how the 3 V’s Framework was developed?

A: So I knew I was leaving Gartner. At that time, Gartner was extremely competitive, and the idea of being a great analyst was about being able to slay dragons or debunk myths or present things differently. The idea was to create something seminal and anthemic. Could you write a piece of research that people would glom onto like an anthem and would also stand the test of time?

You know, when I talk about this with people, I often ask them, “What’s the most seminal and anthemic rock song?” And there’s a fair bit of debate. I’m a “Hey Jude” fan personally, but “Hells Bells” gets a few whistles. Every once in a while, “Stairway to Heaven” comes up. But the idea was to be seminal and anthemic.

I was following the guy who invented [the term] “ERP” (enterprise resource planning)—which was pretty hard to beat if you ask me. So with that commentary, I wanted to do something that was as good. It was a bit of my swan song.

Q: What was the central focus of the presentation?

A: My issue was that as everybody went from looking at supply chains for saving money to looking at supply chains for making more money, standardization would become at war with diversification. And this war between standardizing and diversifying—between saving money and making more money—was going to create enormous tension in the business.

The key thing I was researching was, how could supply chain impact corporate competitiveness? That was the fundamental question we were trying to answer. And my view was the 3 V’s. You need to harness visibility, and then to differentiate [yourself], you have to embrace variability. You can’t eliminate it, you can’t run from it, you’ve just got to embrace it. Then competitive advantage would be rooted in velocity, the speed at which you could recombine your network. Because as you know, supply chains are networks, and they change all the time. It’s the speed of this recombination—this velocity—that would be the real differentiation—as long as you embrace variability. So I wrote the paper, and it had a larger-than-life history.

Q: If you had to give the presentation today, would you change anything?

A: Well, I did a 20-year update [to the framework], which now is eight years old. And what I’m hearing is that most people find it very relevant to today. Which is good because the idea is to be a bit ahead of the pack. [In the update,] we talk a lot about the new surveillance economy and how visibility has really graduated to surveillance.

The update talks about how we need to move from the internet of things to the internet of nouns and verbs: people, places, and things all interconnecting and feeding real-time information engines that understand the actions (verbs) performed by subjects (nouns), along with their state or place, to create end-to-end supply chain visibility. Right now, they’re calling that the “agentic internet.”

So that’s the evolution of the notion of visibility as it moves from, you know, tracking my truck to surveilling my suppliers and child-labor compliance and carbon usage. And by the way, we’re watching you all the time. So we’re controlling what you want to buy for sure, and you’re under constant surveillance, right?

The story I use to illustrate this is that I’m a Jewish guy, and I dated a Christian woman for eight years before we got married. While I was in the middle of dating, I started getting christianmingle.com advertisements. How did they know? They were watching. They knew. And it’s an ever-expanding gas, right? It just gets broader and broader and broader but more hyperspecialized.

Q: Can you explain what you mean by broader but more hyperspecialized?

A: The whole premise is based on Darwin’s law and hyperspecialization over time. If you study evolution, you know the theory of evolution is hyperspecialization and survival of the most adaptive types. As an example, we started off with computer dating, right? And it was very general, right? And then someone got into a bit of the Russian dating thing, and then we got into the farmers dating thing and the Christians dating thing. We have become more hyperspecialized. OK, now we enter real time. The world is now always on, always connected. And it went to Tinder. And then it got hyperspecialized and went to Grindr. So that’s the notion of generalized to hyperspecialized. So what I did was really try to understand how the evolution and proliferation of the microprocessor and the internet and constant Wi-Fi communication would change everything. And what I say is it’s going to evolve into hyperspecialized subcommunities of common interest over time.

I also just published a piece [on the GainSystems website] called “The Call for Awareness in the Age of AI,” which has gone about as cultlike as my 3 V’s. It has about 40,000 downloads. I basically say, “Can everyone shut up please about this AI stuff and just figure out what really matters and pay attention to the good, old-fashioned stuff?” Because I’ve seen this movie before. I was around in 2000 [during the dot-com era], when everyone lied to everyone the last time. It’s the great big lie déjà vu. The truth is, if you don’t know what you’re trying to transform, no technology, however advanced, can transform it for you. It’s a bit of a call to action to wake the hell up and stop chasing shiny objects and get back to the basics and stop buying everyone’s BS.

Q: Let’s talk a bit about the awards themselves. What are you hoping they will accomplish?

A: Oh, there are two things. One, as it relates to CSCMP and its conference, is that having everyone sit in their rooms with their arms folded listening to a presentation is a bit of a stale last-century model. We needed to make things more fun and more interesting. So I wanted to just help make the event better.

The second thing that was really much more important to me, personally, is that I’m tired of hearing everyone say, “Everybody sucks and fails all the time.” I think it’s important to celebrate success. I think CSCMP needs to do more of that.

Editor’s note: Judging for this year’s 3 V’s competition will be held during the CSCMP EDGE 2025 conference in National Harbor, Maryland. The live competition will feature three finalists presenting before a panel of distinguished judges in the Innovation Theater, sponsored by Knapp, on Monday, Oct. 6. The session will be hosted by Art Mesher himself alongside Rick Blasgen, former president and CEO of CSCMP. To register for the conference, which runs from Oct. 5 to 8, go to www.edgetechconference.com/page/4283440/register.





New AI Tool Predicts Treatments That Reverse Cell Disease



In a move that could reshape drug discovery, researchers at Harvard Medical School have designed an artificial intelligence model capable of identifying treatments that reverse disease states in cells.

Unlike traditional approaches that typically test one protein target or drug at a time in hopes of identifying an effective treatment, the new model, called PDGrapher and available for free, focuses on multiple drivers of disease and identifies the genes most likely to revert diseased cells back to healthy function.

The tool also identifies the best single or combined targets for treatments that correct the disease process. The work, described Sept. 9 in Nature Biomedical Engineering, was supported in part by federal funding.

By zeroing in on the targets most likely to reverse disease, the new approach could speed up drug discovery and design and unlock therapies for conditions that have long eluded traditional methods, the researchers noted.

“Traditional drug discovery resembles tasting hundreds of prepared dishes to find one that happens to taste perfect,” said study senior author Marinka Zitnik, associate professor of biomedical informatics in the Blavatnik Institute at HMS. “PDGrapher works like a master chef who understands what they want the dish to be and exactly how to combine ingredients to achieve the desired flavor.”

The traditional drug-discovery approach — which focuses on activating or inhibiting a single protein — has succeeded with treatments such as kinase inhibitors, drugs that block certain proteins used by cancer cells to grow and divide. However, Zitnik noted, this discovery paradigm can fall short when diseases are fueled by the interplay of multiple signaling pathways and genes. For example, many breakthrough drugs discovered in recent decades — think immune checkpoint inhibitors and CAR T-cell therapies — work by targeting broader disease processes in cells rather than a single protein.

The approach enabled by PDGrapher, Zitnik said, looks at the bigger picture to find compounds that can actually reverse signs of disease in cells, even if scientists don’t yet know exactly which molecules those compounds may be acting on.

How PDGrapher works: Mapping complex linkages and effects

PDGrapher is a type of artificial intelligence tool called a graph neural network. This tool doesn’t just look at individual data points but at the connections that exist between these data points and the effects they have on one another.

In the context of biology and drug discovery, this approach is used to map the relationship between various genes, proteins, and signaling pathways inside cells and predict the best combination of therapies that would correct the underlying dysfunction of a cell to restore healthy cell behavior. Instead of exhaustively testing compounds from large drug databases, the new model focuses on drug combinations that are most likely to reverse disease.
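The core idea of a graph neural network, nodes repeatedly exchanging information with their neighbors, can be sketched in a few lines. To be clear, this is a toy illustration under assumed inputs: the two-gene network, expression values, and averaging rule below are made up for demonstration and are not PDGrapher’s actual architecture or data.

```python
import numpy as np

def propagate(adj: np.ndarray, x: np.ndarray, steps: int = 2) -> np.ndarray:
    """Toy message passing: each node's state is repeatedly blended
    with the average state of its neighbors in the interaction graph."""
    deg = adj.sum(axis=1, keepdims=True)
    # Row-normalize so each node takes the mean of its neighbors' states.
    norm = np.divide(adj, deg, out=np.zeros_like(adj, dtype=float),
                     where=deg > 0)
    for _ in range(steps):
        x = 0.5 * x + 0.5 * (norm @ x)
    return x

# Hypothetical two-gene network in which the genes regulate each other.
adj = np.array([[0.0, 1.0],
                [1.0, 0.0]])
expression = np.array([0.0, 1.0])  # illustrative expression levels
print(propagate(adj, expression, steps=1))  # the two states move toward each other
```

The point of the structure is that a node’s updated state depends on who it is connected to, which is how such a model can capture interactions between genes rather than treating each gene in isolation.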

PDGrapher points to parts of the cell that might be driving disease. Next, it simulates what happens if these cellular parts were turned off or dialed down. The AI model then predicts whether a diseased cell would shift back toward a healthy state if certain targets were “hit.”

“Instead of testing every possible recipe, PDGrapher asks: ‘Which mix of ingredients will turn this bland or overly salty dish into a perfectly balanced meal?’” Zitnik said.
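The “turn a target off and see whether the profile moves toward health” step can be illustrated with a brute-force stand-in. This sketch scores every single-gene knockout exhaustively, whereas PDGrapher’s whole advantage is learning to predict good targets directly; the network, expression profiles, and distance score here are all assumed for illustration.

```python
import numpy as np

def propagate(adj, x, steps=2):
    """Spread each gene's state to its neighbors (toy message passing)."""
    deg = adj.sum(axis=1, keepdims=True)
    norm = np.divide(adj, deg, out=np.zeros_like(adj, dtype=float),
                     where=deg > 0)
    for _ in range(steps):
        x = 0.5 * x + 0.5 * (norm @ x)
    return x

def rank_knockouts(adj, diseased, healthy):
    """Score each single-gene knockout by how close the perturbed,
    propagated profile lands to the healthy profile (smaller is better)."""
    scores = {}
    for gene in range(len(diseased)):
        perturbed = diseased.astype(float).copy()
        perturbed[gene] = 0.0              # "dial down" this candidate target
        settled = propagate(adj, perturbed)
        scores[gene] = float(np.linalg.norm(settled - healthy))
    return sorted(scores, key=scores.get)  # best candidate first

# Made-up network: gene 0 drives genes 1 and 2 and is overexpressed.
adj = np.array([[0.0, 1.0, 1.0],
                [1.0, 0.0, 0.0],
                [1.0, 0.0, 0.0]])
healthy = np.array([1.0, 1.0, 1.0])
diseased = np.array([5.0, 1.0, 1.0])
print(rank_knockouts(adj, diseased, healthy))  # gene 0 should rank first
```

Even in this toy version, knocking out the overexpressed driver gene pulls the simulated profile closest to the healthy one, which is the intuition behind ranking candidate targets by predicted reversal.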

Advantages of the new model

The researchers trained the tool on a dataset of diseased cells before and after treatment so that it could figure out which genes to target to shift cells from a diseased state to a healthy one.

Next, they tested it on 19 datasets spanning 11 types of cancer, using both genetic and drug-based experiments, asking the tool to predict various treatment options for cell samples it had not seen before and for cancer types it had not encountered.

The tool accurately predicted drug targets already known to work but that were deliberately excluded during training to ensure the model did not simply recall the right answers. It also identified additional candidates supported by emerging evidence. The model also highlighted KDR (VEGFR2) as a target for non-small cell lung cancer, aligning with clinical evidence. It also identified TOP2A — an enzyme already targeted by approved chemotherapies — as a treatment target in certain tumors, adding to evidence from recent preclinical studies that TOP2A inhibition may be used to curb the spread of metastases in non-small cell lung cancer.

The model showed superior accuracy and efficiency compared with similar tools. In previously unseen datasets, it ranked the correct therapeutic targets up to 35 percent higher than other models did and delivered results up to 25 times faster than comparable AI approaches.

What this AI advance spells for the future of medicine

The new approach could optimize the way new drugs are designed, the researchers said. Instead of trying to predict how every possible change would affect a cell and then looking for a useful drug, PDGrapher directly seeks the specific targets that can reverse a disease trait. This makes it faster to test ideas and lets researchers focus on a few promising targets.

This tool could be especially useful for complex diseases fueled by multiple pathways, such as cancer, in which tumors can outsmart drugs that hit just one target. Because PDGrapher identifies multiple targets involved in a disease, it could help circumvent this problem.

Additionally, the researchers said that after careful testing to validate the model, it could one day be used to analyze a patient’s cellular profile and help design individualized treatment combinations.

Finally, because PDGrapher identifies cause-effect biological drivers of disease, it could help researchers understand why certain drug combinations work — offering new biological insights that could propel biomedical discovery even further.

The team is currently using this model to tackle brain diseases such as Parkinson’s and Alzheimer’s, looking at how cells behave in disease and spotting genes that could help restore them to health. The researchers are also collaborating with colleagues at the Center for XDP at Massachusetts General Hospital to identify new drug targets and map which genes or pairs of genes could be affected by treatments for X-linked Dystonia-Parkinsonism, a rare inherited neurodegenerative disorder.

“Our ultimate goal is to create a clear road map of possible ways to reverse disease at the cellular level,” Zitnik said.

Reference: Gonzalez G, Lin X, Herath I, Veselkov K, Bronstein M, Zitnik M. Combinatorial prediction of therapeutic perturbations using causally inspired neural networks. Nat Biomed Eng. 2025:1-18. doi: 10.1038/s41551-025-01481-x





Driving the Way to Safer and Smarter Cars



A new, scalable neural processing technology based on co-designed hardware and software IP for customized, heterogeneous SoCs.

Although autonomous vehicles have begun to appear on limited public roads, it has become clear that achieving widespread adoption will take longer than early predictions suggested. With Level 3 systems in place, the road ahead leads to full autonomy and Level 5 self-driving. However, it’s going to be a long climb. Much of the technology that got the industry to Level 3 will not scale in all the needed dimensions—performance, memory usage, interconnect, chip area, and power consumption.

This paper looks at the challenges waiting down the road, including increasing AI operations while decreasing power consumption in realizable solutions. It introduces a new, scalable neural processing technology based on co-designed hardware and software IP for customized, heterogeneous SoCs that can help solve them.

Read more here.




Tech companies are stealing our books, music and films for AI. It’s brazen theft and must be stopped | Anna Funder and Julia Powles



Today’s large-scale AI systems are founded on what appears to be an extraordinarily brazen criminal enterprise: the wholesale, unauthorised appropriation of every available book, work of art and piece of performance that can be rendered digital.

In the scheme of global harms committed by the tech bros – the undermining of democracies, the decimation of privacy, the open gauntlet to scams and abuse – stealing one Australian author’s life’s work and ruining their livelihood is a peccadillo.

But stealing all Australian books, music, films, plays and art as AI fodder is a monumental crime against all Australians, as readers, listeners, thinkers, innovators, creators and citizens of a sovereign nation.

The tech companies are operating as imperialists, scouring foreign lands whose resources they can plunder. Brazenly. Without consent. Without attribution. Without redress. These resources are the products of our minds and humanity. They are our culture, the archives of our collective imagination.

If we don’t refuse and resist, not just our culture but our democracy will be irrevocably diminished. Australia will lose the wondrous, astonishing, illuminating outputs of human creative toil that delight us by exploring who we are and what we can be. We won’t know ourselves any more. The rule of law will be rendered dust. Colony indeed.

Tech companies have valorised the ethos “move fast and break things”, in this case, the law and all it binds. To “train” AI, they started by “scraping” the internet for publicly available text, a lot of which is rubbish. They quickly realised that to get high-quality writing, thinking and words they would have to steal our books. Books, as everyone knows, are property. They are written, often over years, licensed for production to publishers and the rental returns to authors are called royalties. No one will write them if they can be immediately stolen.

Copyright law rightfully has its critics, but its core protections have enabled the flourishing of book creation and the book business, and the wide (free but not “for free”) transmission of ideas. Australian law says you can quote a limited amount from a book, which must be attributed (otherwise it’s plagiarism). You cannot take a book, copy it entirely and become its distributor. That is illegal. If you did, the author and the publisher would take you to court.

Yet what is categorically disallowed for humans is being seriously discussed as acceptable for the handful of humans behind AI companies and their (not yet profit-making) machines.

To the extent they care, tech companies try to argue the efficiency or necessity of this theft rather than having to negotiate consent, attribution, appropriate treatment and a fee, as copyright and moral rights require. No kidding. If you are setting up a business, in farming or mining or manufacturing or AI, it will indeed be more efficient if you can just steal what you need – land, the buildings someone else constructed, the perfectly imperfect ideas honed and nourished through dedicated labour, the four corners of a book that ate a decade.

Under the banner of progress, innovation and, most recently, productivity, the tech industry’s defence distils to “we stole because we could, but also because we had to”. This is audacious and scandalous, but it is not surprising. What is surprising is the credulity and contortions of Australia’s political class in seriously considering retrospectively legitimising this flagrantly unlawful behaviour.

The Productivity Commission’s proposal for legalising this theft is called “text and data mining” or TDM. Socialised early in the AI debate by a small group of tech lobbyists, the open secret about TDM is that even its proponents considered it was an absolute long shot and would not be taken seriously by Australian policymakers.

Devised as a mechanism primarily to support research over large volumes of information, TDM is entirely ill-suited to the context of unlawful appropriation of copyright works for commercial AI development. Especially when it puts at risk the 5.9% of Australia’s workforce in creative industries and, speaking of productivity, the $160bn national contribution they generate. The net effect if adopted would be that the tech companies can continue to take our property without consent or payment, but additionally without the threat of legal action for breaking the law.

Let’s look at just who the Productivity Commission would like to give this huge free-kick to.

Big Tech’s first fortunes were made by stealing our personal information, click by click. Now our emails can be read, our conversations eavesdropped on, our whereabouts and spending patterns tracked, our attention frayed, our dopamine manipulated, our fears magnified, our children harmed, our hopes and dreams plundered and monetised.

The values of the tech titans are not only undemocratic, they are inhumane. Mark Zuckerberg’s empathy atrophied as his algorithm expanded. He has said, “A squirrel dying in front of your house may be more relevant to you right now than people dying in Africa.” He now openly advocates “a culture that celebrates aggression” and for even more “masculine energy” in the workplace. Eric Schmidt, former head of Google, has said, “We don’t need you to type at all. We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.”

The craven, toadying, data-thieving, unaccountable broligarchs we saw lined up on inauguration day in the US have laid claim to our personal information, which they use for profit, for power and for control. They have amply demonstrated that they do not have the flourishing of humans and their democracies at heart.

And now, to make their second tranche of fortunes under the guise of AI, this sector has stolen our work.

Our government should not legalise this outrageous theft. It would be the end of creative writing, journalism, long-form nonfiction and essays, music, screen and theatre writing in Australia. Why would you work if your work can be stolen, degraded, stripped of your association, and made instantly and universally available for free? It will be the end of Australian publishing, a $2bn industry. And it will be the end of us knowing ourselves by knowing our own stories.

Copyright is in the sights of the technology firms because it squarely protects Australian creators and our national engine of cultural production, innovation and enterprise. We should not create tech-specific regulation to give it away to this industry – local or overseas – for free, and for no discernible benefit to the nation.

The rub for the government is that much of the mistreatment of Australian creators involves acts outside Australia. But this is all the more reason to reinforce copyright protection at home. We aren’t satisfied with “what happens overseas stays overseas” in any other context – whether we’re talking about cars or pharmaceuticals or modern slavery. Nor should we be when it comes to copyright.

Over the last quarter-century, tech firms have honed the art of win-win legal exceptionalism. Text and data mining is a win if it becomes law, but it’s a win even if it doesn’t – because the debate itself has very effectively diverted attention, lowered expectations, exhausted creators, drained already meagerly resourced representatives and, above all, delayed copyright enforcement in a case of flagrant abuse.

So what should the government do? It should strategise, not surrender. It should insist that any AI product made available to Australian consumers demonstrate compliance with our copyright and moral rights regime. It should require the deletion of stolen work from AI offerings. And it should demand the negotiation of proper – not token or partial – consent and payment to creators. This is a battle for the mind and soul of our nation – let’s imagine and create a future worth having.

Anna Funder is the author of the prize-winning international bestsellers Stasiland, All That I Am and Wifedom: Mrs Orwell’s Invisible Life. Julia Powles is a law professor and executive director of the Institute for Technology, Law & Policy at the University of California Los Angeles and former contributing editor and policy fellow at The Guardian


