AI Insights
Experts share AI strategies, implementation and risks

The Blueprint
- New Jersey professionals shared how local firms deploy AI responsibly.
- Experts stressed risk assessment, governance and transparency.
- Upskilling and pilot projects help companies adopt AI safely.
- Data security, trusted partners and flexibility remain key themes.
NEW JERSEY — NJBIZ recently hosted a panel discussion about how rapidly artificial intelligence is evolving and affecting society in myriad ways.
The 90-minute discussion, moderated by NJBIZ Editor Jeffrey Kanige, featured a panel of experts comprising:
- Joshua Levy, general counsel at Gibbons PC and director of the Business & Commercial Litigation Group.
- Carl Mazzanti, co-founder and president at eMazzanti Technologies.
- Hrishikesh Pippadipally, chief information officer at Wiss.
- Mike Stubna, director of software engineering and data science at Hackensack Meridian Health.
- Oya Tukel, dean of the Martin Tuchman School of Management at the New Jersey Institute of Technology.
The discussion opened with the panelists describing how they deploy AI both within their own organizations and for stakeholders.
‘Where do I start?’
Many companies, seeing the growing number of AI tools available, are asking where to start.
Hackensack Meridian Health’s Mike Stubna said one of the most important beginning steps is weighing the risks associated with any AI tool.
“There’s a few components of our overall strategy. As I mentioned, one of the big components is a comprehensive risk assessment,” said Stubna. So, before adopting any sort of new AI-powered tool, it’s important to attempt to understand and quantify the risks associated with it.
“For example, maybe you’re using a service to summarize pages and pages of information. If you’re sending this service proprietary information, and they have the ability to save that and use it for their own purposes, that’s a risk, right? Because that might be leaked out by them. So, understanding the risk is a really big part of the strategy for every solution.”
He said that the other component is the value that the solution provides.
“So, in many cases with, especially workflow optimization or administrative efficiencies, the value could be quantified in terms of hours saved,” Stubna explained. “Of course, vendors will tell you that their solution will save so much time, but it’s really important to do your own due diligence.”
AI guardrails
Hrishikesh Pippadipally said that when thinking of an AI strategy, business leaders need to establish responsible AI governance.
“What’s your company policy? What software can we use?” the Wiss CIO asked. He noted that his firm has strict guardrails and approvals on what can and cannot be used. “So, that’s more from the risk perspective. But moving on from there, in terms of adoption — and how do we structure this within our firm — I would probably phrase this in terms of people, process and technology.
“Technology is only as good as those using it. If you don’t know how to use it, there’s no use to it. So, we did a big round of AI upskilling training six months ago.”
He stressed that the training process is important because it entails actually going through real use cases from the pilots that the company conducted.
Joshua Levy said, “I certainly agree that the evaluation of the risk is paramount. And I think I mentioned before, for us, confidentiality of our data and our clients’ data is front of mind — and then also scrutinizing the value. And so maybe to approach it from an angle that hasn’t been discussed: expense.
“At Gibbons, we are about a 150- to 160-attorney law firm, and we have the overhead we have. But there are firms larger than us that can afford maybe multiple highly overlapping platforms. And I’m sure there are firms smaller that have to make even more careful decisions about which technologies to invest in. It’s an ever-changing landscape, and we try to approach it with humility, understanding that what we may be investing in now might not necessarily be what we need in the future.”
Securing data
Mazzanti said that there is so much “tool creep” taking place because everyone is offering something and each claims to be the best. He noted that his firm is seeing trends within clients’ organizations where a large number of users are already relying heavily on a particular tool.
“Maybe we should consider embracing that for the organization because your staff already voted,” Mazzanti said about what he will tell these clients. “They’ve already said that this is the one they like the most, and you already have a bunch of heavy users. Maybe we could do that.”
In referencing risk assessment and governance, Mazzanti suggested digging a moat around data.
“So, robotic process automation has been around for a long time. We’ve been feeding it data to have it go do these individual tasks. That was before there was some intelligence, or the generative concept came across. Well, when you were feeding the data, it was typically your own data set on your own servers in your own environment being run. And now that it’s generally available and you can rent it, not a lot of the tools are offering to put security around your data,” he explained.
“I’m very surprised at customers who deploy without any sort of plan whatsoever,” he said, pointing to incidents that can occur with data being leaked or breached because the proper controls were not set up. “Privacy tags — super important. Start doing that around your HR, your offer letters, your salaries, things like that.”
Staying vigilant
The panelists also discussed the need for training when it comes to using AI tools and how difficult it is to grasp these new concepts.
Tukel noted that the next generation is very tech savvy.
“I have to say, maybe it’s more encouraging for faculty to be in the forefront together with the students using the technology. Because sometimes we are skeptical about what a technology can do,” Tukel explained. “So, we always go back to the fundamentals of what we need to learn. I agree with Carl [Mazzanti] that there are a lot of loose ends with the new technology — up until it solidifies.”
Because of that reality, Tukel said, it is critical that workers understand the AI strategy.
Be transparent
From there, the conversation snaked through a number of topics around AI, such as the limits of what the technology can and cannot do — and should and should not do; the risks and potential pitfalls of the technology; cybersecurity; the workforce impacts; regulatory issues; inclusivity and more.
Tukel stressed the need for AI transparency.
“I think it is OK to not look at this tool as a shortcut that covers our areas of deficiencies. It’s a tool that helps us,” said Tukel. She noted, however, that the technology still has its problems in terms of bias in the data and algorithm. “But declaring that this was prepared by me — but using AI tools — in your documentation for public-facing writings and news you are putting out, can definitely put you in a better position.”
“[J]ust know that if you come to Hackensack Meridian Health, that we’re on the forefront of using AI,” said Stubna. “And you can have a lot of confidence that it’s something that we take extremely seriously — ensuring that this is used in a responsible way that really focuses on patient care first and foremost.”
“At Wiss, we have heavy transparency,” said Pippadipally. “We try to use AI to enrich the product of client outcomes as much as we can. Anything, any financial issues that you guys have, feel free to reach out.”
“If we had to leave someone with parting words here, it would be: choose a good partner that’s done this — one that aligns with your vision and values and knows how to support you, to walk hand-in-hand with your suppliers,” said Mazzanti, stressing service delivery. “My organization is incredibly partner friendly. We work with your team to evolve with your best interest at heart. You’ve heard from some great panelists here today.”
Stay flexible
“My own approach to this is, simply, keeping my eyes open and trying to stay humble in this ever-changing environment,” Levy said. “Even if I’m here, even if all of us are here, because in some ways we’re experts in this arena, I don’t know that anyone truly has a perfect understanding of where any of our industries are going to be in five years, 10 years, 20 years.
“And we just have to be flexible and keep that in mind — and work with the right folks to navigate the future.”
AI Insights
Artificial intelligence offers political practices advice about robocalls in Montana GOP internal spat

A version of this story first appeared in Capitolized, a weekly newsletter featuring expert reporting, analysis and insight from the editors and reporters of Montana Free Press.
The robocalls to John Sivlan’s phone this summer just wouldn’t let up. Recorded messages were coming in several times a day from multiple phone numbers, all trashing state Republican Rep. Llew Jones, a shrewd, 11-term lawmaker with an earned reputation for skirting party hardliners to pass the Legislature’s biggest financial bills, including the state budget.
Sivlan, 80, a lifelong Republican who lives in Jones’ northcentral Montana hometown of Conrad, wasn’t amused by the general election-style attacks hitting his phone nearly a year before the next legislative primary. Jones, in turn, wasn’t impressed with the Commissioner of Political Practices’ advice that nothing could be done about the calls. The COPP polices campaigns and lobbying in Montana, and the opinion the office issued in response to Jones’ request to review the robocalls was written not by an office employee but by ChatGPT.
“They were coming in hot and heavy in July,” Sivlan said on Aug. 26 while scrolling through his messages. “There must be dozens of these.”
“Did you know that Llew Jones sides with Democrats more than any other Republican in the Montana Legislature? If he wants to vote with Democrats, Jones should at least switch parties,” the robocalls said.
“And then they list his number and tell you to call him and tell him,” Sivlan continued.
In addition to the robocalls, a string of ads running on streaming services targeted Jones. On social media, placement ads depicted Jones as the portly, white-suited county commissioner Boss Hogg from “The Dukes of Hazzard” TV comedy of the early 1980s. None of the ads or calls disclosed who was paying for them.
Jones told Capitolized that voters were annoyed by the messaging, but said most people he’s talked to weren’t buying into it. He assumes the barrage was timed to reach voters before his own campaign outreach for the June 2026 primary.
The COPP’s new AI helper concluded that only ads appearing within 60 days of an election could be regulated by the office. The ads would also have to expressly advise the public on how to vote to fall under campaign finance reporting requirements.
In the response emailed to Jones, the AI program followed its opinion with a very chipper “Would you like guidance on how to monitor or respond to such ads effectively?”
“I felt that it was OK,” Commissioner Chris Gallus said of the AI opinion provided to Jones. “There were some things that I probably would have been more thorough about. Really at this point I wanted Llew to see where we were at that time with the (AI) build-out, more than explicit instructions.”
The plan is to prepare the COPP’s AI system for the coming 2026 primary elections, at which point members of the COPP staff will review the bot’s responses and supplement when necessary. But the system is already on the commissioner’s website, offering advice based solely on Montana laws and COPP’s own data, and not on what it might scrounge from the internet, according to Gallus.
Earlier this year, the Legislature put limits on AI use by government agencies, including a requirement for government disclosure and oversight of decisions and recommendations made by AI systems. The bill, by Rep. Braxton Mitchell, R-Columbia Falls, was opposed by only a handful of lawmakers.
Gallus said the artificial intelligence system at COPP is being built by 3M Data, a vendor with prior machine-learning experience for the Red Cross and the oil companies Shell and Exxon, where its systems gathered and analyzed copious amounts of operational data. COPP has about $38,000 to work with, Gallus said.
The pre-primary battles within the Montana Republican Party are giving the COPP’s machine learning an early test, while also exposing loopholes in campaign reporting laws.
There is no disclosure law for ads placed on streaming services, unlike traditional radio and TV stations, cable and satellite, whose ad details must be available for public inspection under Federal Communications Commission rules. The state would have to fill that gap, which the FCC and Federal Election Commission have struggled to do since 2011.
Streaming now accounts for 45% of all TV viewing, according to Nielsen, more than broadcast and cable combined. Cable viewership has declined 39% since 2021.
“When we asked KSEN (a popular local radio station) who was paying for the ads, they didn’t know,” Jones said. “People were listening on Alexa.”
Nonetheless, Jones said the robocalls are coming from within the Republican house. An effort by hardliners to purge more centrist legislators from the party has been underway since April, when the MTGOP executive board began “rescinding recognition” of the state Republican senators who collaborated with a bipartisan group of Democrats and House Republicans to pass a budget, increase teacher pay and lower taxes on primary homes.
Being Republican doesn’t require recognition by the MTGOP “e-board,” as it’s known. In June, when the party chose new leadership, newly elected Chair Art Wittich said the party would no longer stay neutral in primary elections and would look for conservative candidates to support.
Republicans who have registered campaigns for the Legislature were issued questionnaires Aug. 17 by the Conservative Governance Committee, a group chaired by Keith Regier, a former state legislator and father of a Flathead County family that’s sent three members to the Montana Legislature; in 2023 Keith Regier and two of his children served in the Legislature simultaneously.
Membership of the Conservative Governance Committee and of a new Red Policy Committee, which will set legislative priorities, is still a work in progress, new party spokesman Ethan Holmes said this week.
The 14 questions, which Regier informed candidates could be used to determine party support of campaigns, hit on standard Republican fare: guns, “thoughts on transgenderism,” and at what point human life starts. There was no question about a willingness to follow caucus leadership. Regier’s son, Matt, was elected Senate president in late 2024, but lost control of his caucus on the first day of the legislative session in January.
AI Insights
“AI Is Not Intelligent at All” – Expert Warns of Worldwide Threat to Human Dignity

Opaque AI systems risk undermining human rights and dignity. Global cooperation is needed to ensure protection.
The rise of artificial intelligence (AI) has changed how people interact, but it also poses a global risk to human dignity, according to new research from Charles Darwin University (CDU).
Lead author Dr. Maria Randazzo, from CDU’s School of Law, explained that AI is rapidly reshaping Western legal and ethical systems, yet this transformation is eroding democratic principles and reinforcing existing social inequalities.
She noted that current regulatory frameworks often overlook basic human rights and freedoms, including privacy, protection from discrimination, individual autonomy, and intellectual property. This shortfall is largely due to the opaque nature of many algorithmic models, which makes their operations difficult to trace.
The black box problem
Dr. Randazzo described this lack of transparency as the “black box problem,” noting that the decisions produced by deep-learning and machine-learning systems cannot be traced by humans. This opacity makes it challenging for individuals to understand whether and how an AI model has infringed on their rights or dignity, and it prevents them from effectively pursuing justice when such violations occur.

“This is a very significant issue that is only going to get worse without adequate regulation,” Dr. Randazzo said.
“AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behaviour.
“It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”
Global approaches to AI governance
Currently, the world’s three dominant digital powers – the United States, China, and the European Union – are taking markedly different approaches to AI, leaning on market-centric, state-centric, and human-centric models, respectively.
Dr. Randazzo said the EU’s human-centric approach is the preferred path to protect human dignity, but without a global commitment to this goal, even that approach falls short.
“Globally, if we don’t anchor AI development to what makes us human – our capacity to choose, to feel, to reason with care, with empathy and compassion – we risk creating systems that devalue and flatten humanity into data points, rather than improve the human condition,” she said.
“Humankind must not be treated as a means to an end.”
Reference: “Human dignity in the age of Artificial Intelligence: an overview of legal issues and regulatory regimes” by Maria Salvatrice Randazzo and Guzyal Hill, 23 April 2025, Australian Journal of Human Rights.
DOI: 10.1080/1323238X.2025.2483822
The paper is the first in a trilogy Dr. Randazzo will produce on the topic.
AI Insights
Mexico says works created by AI cannot be granted copyright

In an era where artwork is increasingly influenced and even created by Artificial Intelligence (AI), Mexico’s Supreme Court (SCJN) has ruled that works generated exclusively by AI cannot be registered under the copyright regime. According to the ruling, authorship belongs solely to humans.
“This resolution establishes a legal precedent regarding AI and intellectual property in Mexico,” the Copyright National Institute (INDAUTOR) said on Aug. 28 in a statement on its official X account following the SCJN’s decision.
The SCJN’s unanimous decision said that the Federal Copyright Law (LFDA) reserves authorship to humans, and that any creative invention generated exclusively by algorithms lacks a human author to whom moral rights can be attributed.
According to the Supreme Court, automated systems do not possess the necessary qualities of creativity, originality and individuality that are considered human attributes for authorship.
“The SCJN resolved that copyright is a human right exclusive to humans derived from their creativity, intellect, feelings and experiences,” it said.
The Supreme Court resolved that works generated autonomously by artificial intelligence do not meet the originality requirements of the LFDA. It said that those requirements are constitutional as limiting authorship to humans is “objective, reasonable and compatible with international treaties.”
It further added that protections can’t be granted to AI on the same basis as to humans, since the two have intrinsically different characteristics.
What was the case about?
In August 2024, INDAUTOR denied the registration application for “Virtual Avatar: Gerald García Báez,” created with an AI dubbed Leonardo, on the basis that it lacked human intervention.
“The registration was denied on the grounds that the Federal Copyright Law (LFDA) requires that works be of human creation, with the characteristic of originality as an expression of the author’s individuality and personality,” INDAUTOR said.
The applicant contested the denial, arguing that creativity should not be restricted to humans. In the applicant’s view, excluding works generated by AI violated the principles of equality, human rights and international treaties, including the United States-Mexico-Canada Agreement (USMCA) and the Berne Convention.
However, the Supreme Court clarified that such international treaties do not oblige Mexico to give copyrights to non-human entities or to extend the concept of authorship beyond what is established in the LFDA.
Does the resolution allow registration of works generated with AI?
Yes, provided there is a substantive and demonstrable human contribution. This means that works created in collaboration with AI, in which humans direct, select, edit or transform the result generated by AI until it is endowed with originality and a personal touch, are subject to registration before INDAUTOR.
Intellectual property specialists consulted by the newspaper El Economista explained that to register creative work developed in collaboration with AI, it is important to document the human intervention and submit the creative process in a way that aligns with the LFDA.
Mexico News Daily