Tools & Platforms

US Senator Cruz Proposes AI ‘Sandbox’ to Ease Regulations on Tech Companies


Republican U.S. Senator Ted Cruz introduced a bill on Wednesday that would let artificial intelligence companies apply for exemptions to federal regulation to help them experiment in developing new technology.

Cruz leads the Senate Commerce Committee, which held a subcommittee hearing on Wednesday about ways Congress can lower regulatory hurdles for the tech industry to give U.S. companies a boost in competing with China.

“A regulatory sandbox is not a free pass. People creating or using AI still have to follow the same laws as everyone else,” Cruz said at the hearing.

If passed by Congress, the bill would allow federal regulatory agencies to consider applications from companies for exemptions from regulations for two years at a time, and would require companies to outline the potential safety and financial risks and how they would mitigate them.

Consumer rights advocacy group Public Citizen raised concerns about provisions in the proposal that would allow the White House’s Office of Science and Technology Policy to override decisions by agency heads, and said the proposal “treats Americans as test subjects.”

“The sob stories of AI companies being ‘held back’ by regulation are simply not true and the record company valuations show it,” said J.B. Branch, Public Citizen’s Big Tech accountability advocate.

Cruz’s sandbox bill does not include a ban on state regulation, something that the tech industry has sought and the White House has said is necessary to boost innovation. A bid to put a ban in place as part of President Donald Trump’s tax-and-spending bill was defeated in the Senate on a 99-1 vote in July.

OSTP Director Michael Kratsios said at the hearing that Congress should reconsider the issue.

“It’s something that my office wants to work very closely with you on,” he told Cruz at the hearing.

Topics
USA
InsurTech
Legislation
Data Driven
Artificial Intelligence
Tech


Tools & Platforms

Where the AI Revolution Meets a Challenged Accounting Profession


By Max Schultz and Aaron Veach.

For all the talk of how AI will transform and disrupt the accounting sector, it’s important to recognize that the assistance of machines is nothing new. In fact, it predates green-screen computers and even calculators. How far back does it go? Try 1642, when French mathematician and physicist Blaise Pascal invented the first adding machine—at age 19.

Yet for all the history of humans adapting to and adopting new technology, the question arises today whether enough professionals will remain to leverage it. A workforce shortage looms: the American Institute of CPAs reports that 75% of today’s public accounting CPAs will retire within the next 15 years, a pace of departure that far exceeds the number of accountants entering the workforce. Citing AICPA figures, the Wall Street Journal reported that 47,070 students earned a bachelor’s degree in accounting in the 2021-2022 academic year, down 7.8% year over year and 15% from the 2011-2012 peak.

Indeed, those numbers have prompted concern; one industry publication has labeled it “a severe crisis” (https://www.cpajournal.com/2025/08/15/the-accounting-profession-is-in-crisis-3/). In sum, the workforce is aging; veterans are retiring; fewer grads are entering the profession; and it’s become harder than ever to hire for lost skill sets at the same cost. Thus, organizations have no choice but to embrace AI and automation to operate a reliable, future-proof corporate accounting function, lest they blow their budgets on skyrocketing headcount costs across G&A.

But an overarching fact remains salient: many accounting and finance departments don’t yet know what’s possible with a well-integrated automation fabric, which helps knit disparate systems’ data into a seamless close. Seizing that potential is cause for optimism, as the technology runs counter to the industry’s attrition.

That is, AI-enhanced automation reduces manual workloads, easing staffing shortages and helping orchestrate the close from end to end. To get started, it’s crucial to recognize, and then overcome, the main barriers to AI and automation implementation.

It’s amazing, though, how many reputable businesses haven’t taken that first step. We talk to Fortune 1000 companies every day that still operate across tens to hundreds of spreadsheets to handle the ever-increasing prep work that goes into closing the books. It’s often a case of working in disconnected systems and dispersed workflows (with no version control or audit trail) and, metaphorically speaking, with dull pencils.

Many of these companies must pull financial data out of legacy systems, some of them home-built. As a result, the data consolidation and preparation needed to run accruals, reclassifications and provisions, for example, create a massive manual workload. Now compare that to sitting on top of an automation platform that connects deeply and natively into SAP and those legacy systems, and can automate most of that manual work.

Such a platform would feature purpose-built SAP components that leverage SAP-native transaction codes, along with an ecosystem of connectors to the systems that surround it. Though this may sound like a tall order, a highly skilled software provider can automate roughly 70% of this preparatory phase (whereas legacy solutions may have gotten you only 20% of the way).
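
To make that concrete, here is a minimal Python sketch of the kind of preparatory work such a platform automates: normalizing extracts from disparate systems into one schema and drafting a period-end accrual for human review. The column names, the "grni" account and the accrual rule are hypothetical illustrations for this example, not any vendor's actual interface or chart of accounts.

```python
# A minimal, hypothetical sketch of automating close prep: consolidate
# ledger extracts from disparate systems and draft a month-end accrual.
import pandas as pd

# Stand-ins for extracts pulled from SAP and a legacy system (assumed data).
sap_gl = pd.DataFrame({"Account": ["grni", "rev"], "Period": ["2025-09"] * 2,
                       "Amount": [120_000.0, -480_000.0]})
legacy_ap = pd.DataFrame({"ACCOUNT": ["grni"], "PERIOD": ["2025-09"],
                          "AMOUNT": [35_000.0]})

def consolidate(extracts):
    """Normalize each extract to a common lowercase schema and stack them."""
    frames = []
    for name, df in extracts.items():
        df = df.rename(columns=str.lower)[["account", "period", "amount"]]
        df["source"] = name  # keep an audit trail of where each row came from
        frames.append(df)
    return pd.concat(frames, ignore_index=True)

def draft_accrual(ledger, period="2025-09"):
    """Accrue for goods received, not yet invoiced (an illustrative rule)."""
    grni = ledger[(ledger["period"] == period) & (ledger["account"] == "grni")]
    total = grni["amount"].sum()
    # A draft journal entry for an accountant to review, not to auto-post:
    return [{"account": "accrued_liabilities", "credit": total},
            {"account": "cost_of_sales", "debit": total}]

ledger = consolidate({"sap_gl": sap_gl, "legacy_ap": legacy_ap})
print(draft_accrual(ledger))  # total GRNI accrual across systems: 155,000.0
```

The point of the sketch is the shape of the work, not the rule itself: once the extracts land in one schema with a source trail, the accrual logic becomes a reviewable function instead of a spreadsheet ritual.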

It’s also important to recognize that the next generation of artificial intelligence has arrived with agentic AI, which can reason, learn and make decisions with minimal human intervention. It’s also a technology that’s virtually impossible to integrate with an outdated tech stack or patchwork system. 

But layering agents atop mature processes and smart integration puts a business in position for automation success and immediate return on investment. Imagine a coordinated framework that connects and synchronizes data, rules-based processes and systems end to end, putting ineffective workarounds out of work.

To be sure, many CPAs and accountants fear change. When RPA (robotic process automation) was introduced in the 2010s, panic ensued until accountants realized that a) it freed them up to do higher-value work, and b) it couldn’t duplicate one characteristic of highly skilled people: sound professional judgment.

Meanwhile, the next generation of accountants is shunning the drudge work that previous entry-level professionals tackled. Odds are they already know what’s possible through an integrated digital system (colleges began offering master’s degrees in data and analytics in the 2010s) and will seek out opportunities with firms that know how to get the job done through AI. With this comes the promise of upskilling at speeds their predecessors could never have imagined.

Certainly, the need for comprehensive automation will only grow. Even as organizations’ workloads increase, budgets rarely keep up, let alone inflows of fresh talent.

Yet smart, experienced software experts can fill the breach. Consider the track record of industry innovation: over the years, adding machines, calculators, computers and RPA haven’t cast accountants aside. Only the roles and skill sets have changed, and so it will be with AI, provided it arrives in lockstep with a good change management program.

A recent Forrester report sums it up well: “AI isn’t just a tech investment—it’s a people strategy.” And just as people won’t go back to adding up by hand, they will welcome breakthrough technology when you give them a good why and a great product.

Max Schultz is Group General Manager at Redwood Software. Connect with Max on LinkedIn.

Aaron Veach is Executive Director, Finance Transformation, at Redwood Software. Connect with Aaron on LinkedIn.




Tools & Platforms

World Shipping Council Wants to Use AI to Better Cargo Safety


The World Shipping Council (WSC) plans to use artificial intelligence to bolster cargo safety. 

The organization announced Monday that it had launched a new initiative, the Cargo Safety Program, with the goal of preemptively stopping dangerous goods from making it onto ships. The WSC said it will use AI to screen and inspect cargo before it is loaded, with the intention of pinpointing misdeclared or undeclared shipments that would pose a high risk to ship operators, companies’ cargo and the vessels themselves.

Joe Kramek, president and CEO of the WSC, said he expects the measures to decrease the number of ship fires that occur. 

“We have seen too many tragic incidents where misdeclared cargo has led to catastrophic fires, including the loss of life,” Kramek said in a statement. “The WSC Cargo Safety Program strengthens the industry’s safety net by combining shared screening technology, common inspection standards, and real-world feedback to reduce risk.”

To date, the WSC said, a variety of ocean freight carriers that account for more than 70 percent of global twenty-foot equivalent unit (TEU) capacity have already joined the initiative. That includes Hapag-Lloyd, Ocean Network Express (ONE), Maersk, CMA CGM and others. 

The screening tool, which leverages technology built by the National Cargo Bureau (NCB), “scans millions of bookings in real time using keyword searches, trade pattern recognition and AI-driven algorithms to identify potential risks,” the WSC said. If the system finds risks or anomalies, it passes that feedback to a carrier; the carrier can then perform manual inspections of the cargo as needed. 
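
As an illustration of the screening approach described above, here is a minimal Python sketch of booking-level checks. The keyword list, booking fields and thresholds are invented for this example; the NCB-built production tool is, by the WSC’s description, far more sophisticated.

```python
# A simplified sketch of booking screening: keyword matching on the cargo
# description plus a crude shipper-pattern check. All fields and thresholds
# here are assumptions for illustration only.
import re

# Commodities often involved in misdeclared dangerous-goods incidents.
DG_PATTERNS = [re.compile(k, re.IGNORECASE)
               for k in (r"calcium hypochlorite", r"charcoal",
                         r"lithium.?ion", r"fireworks")]

def screen_booking(booking: dict) -> list[str]:
    """Return human-readable risk flags for a single booking."""
    flags = []
    text = f"{booking.get('description', '')} {booking.get('hs_code', '')}"
    for pattern in DG_PATTERNS:
        if pattern.search(text) and not booking.get("declared_dangerous", False):
            flags.append(f"possible undeclared dangerous goods: '{pattern.pattern}'")
    # Crude pattern check: a brand-new shipper booking unusually heavy cargo
    # can merit a physical inspection before loading.
    if booking.get("weight_kg", 0) > 25000 and booking.get("shipper_history") == "new":
        flags.append("new shipper with unusually heavy cargo")
    return flags

booking = {"description": "Assorted charcoal briquettes", "hs_code": "4402",
           "declared_dangerous": False, "weight_kg": 26000, "shipper_history": "new"}
for flag in screen_booking(booking):
    print(flag)  # a carrier would route flagged bookings to manual inspection
```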

The WSC joins other third-party logistics players, albeit primarily on land, in leveraging AI for safety. Autonomous trucks typically use AI and machine learning systems to determine the safest route for the vehicles to take; paired with sensors and computer vision, these systems can also alert the driverless vehicle to on-road hazards, including severe weather conditions.

Additionally, some logistics players have started to use robotics in their facilities; increasingly, physical AI helps ensure those robots don’t collide with or otherwise endanger the human workers they operate alongside. That’s done both through real-world learnings and through digital twin simulations, which can train robots on millions of inputs far faster than developers could if they had to manually simulate every situation in the real world. Physical AI is becoming increasingly important because of the rise of autonomous mobile robots (AMRs): because AMRs move freely around warehouses, factories and other facilities, they have to be able to stop abruptly when people or obstacles cross their paths.

AI-based monitoring, meant to flag hazards before injury occurs, is also at play in many warehouses; companies like Voxel, which raised a Series B round in June, can interlink AI systems with existing security cameras and sensors to monitor employee safety. The company’s heat-mapping system uses camera inputs to determine high-risk zones in a facility, giving managers real-time suggestions on how to clear up hazards. The Port of Virginia uses such technology to make operations safer.
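
To illustrate the heat-mapping idea, here is a minimal Python sketch that bins incident detections from camera analytics into a floor grid and surfaces the hottest cells. The grid size, coordinates and threshold are assumptions for the example, not Voxel’s actual implementation.

```python
# A minimal sketch of safety heat-mapping: count detections per floor cell
# and flag cells above a threshold for managers to review.
import numpy as np

GRID = (10, 10)  # facility floor divided into a 10x10 grid of zones

def build_heatmap(detections, grid=GRID):
    """detections: iterable of (x, y) in [0, 1) normalized floor coordinates."""
    heat = np.zeros(grid)
    for x, y in detections:
        row = min(max(int(y * grid[0]), 0), grid[0] - 1)  # clamp to the grid
        col = min(max(int(x * grid[1]), 0), grid[1] - 1)
        heat[row, col] += 1
    return heat

def high_risk_cells(heat, threshold=5):
    """Cells whose incident count exceeds the threshold merit attention."""
    rows, cols = np.where(heat > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

# Example: a cluster of near-misses recorded around one aisle intersection.
rng = np.random.default_rng(0)
cluster = [(0.31 + 0.02 * rng.standard_normal(),
            0.72 + 0.02 * rng.standard_normal()) for _ in range(8)]
print(high_risk_cells(build_heatmap(cluster)))  # e.g. [(7, 3)]: flag that zone
```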

The WSC did not clarify in its announcement whether it plans to use the new AI capabilities to stave off safety issues beyond fires; the trade group recently put out a report that said nearly 11.4 percent of inspected cargo shipments have deficiencies. Those can include undeclared or misdeclared goods; incorrect or damaged packaging; structural issues; or wrong documents.

At the time, the WSC said any of those issues could cause major safety problems, including ship fires. Just days after the WSC issued its safety warning, more than 60 containers toppled from a cargo ship at the Port of Long Beach; the ship carried cargo for retailers like Costco, Target and Walmart, as well as smaller shops. Officials have yet to report the cause of the incident.

If leveraged appropriately, AI scanning technology like the kind the WSC has introduced could help mitigate incidents beyond fires. The organization said the initiative is an extension of its interest in improving safety outcomes for cargo carriers and noted that the Cargo Safety Program will “continue to evolve, with regular updates to its technology and standards to address new and emerging risks.”

Kramek said that, by doing so, he hopes the WSC can help move the needle on safety outcomes but noted that carriers and companies also bear responsibility for protecting workers, ships and cargo. 

“By working together and using the best available tools, we can identify risks early, act quickly and prevent accidents before they happen,” Kramek said in a statement. “The Cargo Safety Program is a powerful new layer of protection, but it does not replace the fundamental obligation shippers have to declare dangerous goods accurately. That is the starting point for safety, and it is required under international law.”




Tools & Platforms

Connecticut Professors Fear Dependence, Cognitive Decline Over AI Use

Published

on


(TNS) — Lloydia Anderson has listened to the common complaint among many students her age: a lack of time to get anything done.

With some students juggling jobs and others prioritizing certain classes to get more free time, Anderson said she has heard other students at Central Connecticut State University talk about using ChatGPT to get assignments done.

“Some people use it for time management and some don’t really think an assignment is important,” she said, describing the apathy of some students to the AI chatbot’s increasing role in education.


Anderson said she deliberately does not use ChatGPT for fear that it would diminish her capacity to think. She is equally concerned about AI replacing her job in the future. Anderson, a junior at CCSU, is studying sociology and philosophy and hopes to pursue a higher level of education.

AI is changing the landscape of higher education as professors revise curricula and testing methods to try to ensure students rely on their own thinking rather than AI. Several professors at CCSU who spoke with the Courant shared their fear that AI is causing a decline in cognitive ability and a dependence on technology to complete assignments. But the concern is more than that students are finding a shortcut around doing the work; it’s that students will no longer be able to think critically or perform the simplest of tasks without technological assistance.

At the same time, AI education experts say AI is not going anywhere and that it is the responsibility of educators to ensure that students are taught to use it ethically and responsibly, finding ways to leverage it to promote learning rather than replace the brain’s cognitive skills.

IDENTICAL ASSIGNMENTS, CITATION ERRORS

Teaching an online class over the summer that required students to connect course material to pop culture themes, associate professor of philosophy Audra King knew something was awry when she discovered that three students’ essays were essentially identical, using the same terminology and ideas.

King determined that the students, who did not know each other, used ChatGPT — a troubling trend that she is seeing more often these days.

“A lot of students, and I want to say faculty too, put a prompt into ChatGPT and have it spit back the answer,” said King. “It is making things harder. They already have a decreased attention span. They have lower critical thinking skills.”

And the accuracy of ChatGPT’s output in some instances also remains a question.

Brian Matzke, digital humanities librarian at the Elihu Burritt Library, said that every couple of weeks a student will come to the research desk looking for articles they have listed, none of which can be found, because they don’t exist.

Ricardo Friaz, assistant professor of philosophy, said ChatGPT is “reproducing patterns of language that includes real citations and inaccurate citations.”

“It reproduces a lot of the biases that go into it,” Friaz said. “It takes from the corpus of knowledge and is reproducing what has been said.”

Vahid Behzadan is an associate professor of computer science and data science at the University of New Haven and cofounder of the Connecticut AI Alliance, a consortium of 21 universities, industry groups and communities across the state. ChatGPT, Behzadan said, “doesn’t just string words together: It can follow a prompt with several coherent paragraphs, shifting smoothly across subjects and styles.”

“That means it demonstrates not only a strong command of language, but also a practical grasp of commonsense and specialized knowledge,” he said.

COGNITIVE DECLINE

Several studies have emerged showing a correlation between the use of ChatGPT and cognitive decline.

An MIT study released this past June assigned 54 participants to three groups: an LLM (large language model) group that used ChatGPT, a second group that used the Google search engine and a third that used no tools at all. The participants were required to write an essay.

The study used electroencephalography (EEG) to record the participants’ brain activity, according to information on the study.

Results from the study measuring brain activity over four months found that “LLM users consistently underperformed at neural, linguistic, and behavioral levels,” according to the study.

“These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning,” the study stated.

Another study from researchers at the University of Pennsylvania in 2024 found that “Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT,” according to the Hechinger Report.

Behzadan said the studies are limited and research is still in early stages as use of chatbots evolves. In August, ChatGPT had 700 million weekly users, a number that has quadrupled in the past year.

He emphasized the importance of cognitive fitness, which offers long-term benefits including general health, well-being and quality of life.

Tomas Portillo, a junior at CCSU studying mathematics and philosophy, said he does not use AI in his studies. But many students, he said, rely so heavily on it that they would rather ask ChatGPT for help than a professor.

King has taught philosophy for 17 years and has seen how students approach assignments.

“I do see less students thinking abstractly,” she said, explaining that years ago students would come to class having read an assigned article and ask questions.

“I am seeing less engagement and feedback,” she said. “It is the humanity, the originality and the individuality that is completely lost and vanished when we rely on ChatGPT.”

Matzke said AI in itself is not what he would call his primary concern.

“If the concept of AI technology were to emerge in a world that already valued critical thinking and the humanities, it would be a great opportunity, but it is amplifying a lot of existing problems.”

OVERWORKED AND EXHAUSTED

King said students today are overworked and exhausted by juggling numerous priorities.

“They don’t have time and don’t have energy,” she said. “We are in a perfect storm of capitalism. There is also an attention span issue. TikTok and social media have made it easy to rely on outside sources.”

Matzke said that students are very outcome-oriented and that they are good at regurgitating facts and writing in a bullet-pointed style.

“But trying to identify the relationship between the concept and how X leads to Y is much more difficult,” he said.

Portillo said while he does not use AI in his studies, he can never seem to escape it because it’s pervasive in the algorithms on social media and in many facets of everyday life.

“I feel like people are more self-centered nowadays with all this access to technology,” he said. “It seems like social media is the medium between us that adds a layer of obfuscation.”

He also recalled how AI has changed the way classes are structured. Several professors said they have changed curriculum, requiring more in-class essays and quizzes.

“There are a lot more in-class assignments,” said Anderson.

MISUNDERSTANDING AI

Friaz, who is teaching a philosophy research-and-writing course meant to prepare students to research and write in a world where AI claims to be better at those things than humans, said that students are often not aware that they are “offloading their thinking” by putting their essays into ChatGPT before handing them in.

“They don’t think they are using AI,” he said. “Writing is your thinking and they are not aware they are offloading their thinking. It makes it hard to talk about it.”

Friaz said the goal of his class is for “students to become strong researchers and writers by researching AI, experimenting with it and thinking critically about how it affects society.”

In today’s society, Friaz said, students feel pressure to spend time on what is most relevant to their jobs with the premise that “writing is not essential and they don’t have to do it and have to suffer through it.”

AI IS HERE TO STAY

When ChatGPT first took hold at universities in 2022, many of them attempted to ban the use of AI, Behzadan said. But that backfired.

“Whether you are going to ban it or not, everyone is going to use it,” Behzadan said. “It is inevitable.”

Behzadan said universities then began developing guidelines for the ethical use of AI, such as citing AI when it is used.

With AI not going away anytime soon, he said students need to learn how to use AI to be able to adapt to changing skills and job requirements.

“AI is here to stay,” he said. “It has become more advanced and we need to adjust and evolve our curriculum to embrace AI while also enabling our students to make beneficial and ethical use of the AI technology.”

Friaz said the key is “teaching it as a tool to aid you rather than something that thinks for you.”

King said she worries about the future and the continuing emergence of AI.

“The more students are relying on technology to think for them, the less they are going to connect with and grow those emotional connections in the real world,” she said.

©2025 Hartford Courant. Distributed by Tribune Content Agency, LLC.




