Connect with us

AI Insights

When Artificial Intelligence Goes Nuts – Commentary Magazine

Published

on


In 2016, engineers at OpenAI spent months teaching artificial intelligence systems to play video games. Or, to be more precise, they spent months watching their AI agents learn to play video games. This was back in the days before artificial intelligence was a subject of nonstop hype and anxiety. OpenAI had been founded by Elon Musk, Sam Altman, and other tech savants just a year before and still operated more like a think tank than like the tech colossus it was to become.

The researchers were training their system on a video game called CoastRunners, in which a player controls a motorboat that races other boats around a track and picks up extra points as it hits targets along the route. The OpenAI team was using an approach called reinforcement learning, or RL. Instead of providing the agent with a full set of instructions, as one would in a traditional computer program, the researchers allowed it to figure out the game through trial and error. The RL agent was given a single overarching incentive, or a “reward function” in AI parlance: to rack up as many points as possible. So any time it stumbled on moves that generated points, it would then strive to replicate those winning moves. The researchers assumed that, as the agent bumbled around the track, it would begin learning strategies that would ultimately help it zoom expertly to the finish line.

That’s not what happened. Instead, as the RL agent steered its boat chaotically around the track, it eventually found a sheltered lagoon containing three targets. Soon the agent began piloting the boat in an endless loop around the lagoon, bouncing off bulkheads and other vessels and smashing the targets again and again, generating points galore. It turns out the CoastRunners game doesn’t require the player to cross the finish line to win, so the RL agent didn’t bother with that nicety. In a report titled “Faulty Reward Functions in the Wild,” the researchers wrote, “Despite repeatedly catching on fire, crashing into other boats, and going the wrong way on the track, our agent manages to achieve a higher score using this strategy than is possible by completing the course in the normal way.” In fact, through its out-of-the-box strategy of not trying to win the race, the AI system outscored human players by 20 percent.

The YouTube clip showing the AI player’s maniacal lagoon loop is hilarious. But it is also a little scary. OpenAI wasn’t just building AI systems to beat people at video games. They and others were developing AI to outperform humans at myriad tasks, including many in the unforgiving, non-virtual world. Today, AI systems are involved in driving cars and trucks, running factories, diagnosing patients, and other high-stakes enterprises. And for the most part, they do these things exceptionally well. But there is always an element of uncertainty, as this early experiment revealed. The OpenAI researchers were learning that it is difficult to define the rewards that will tell an agent exactly what we want it to do—or not to do. This faulty-reward problem can lead to “undesired or even dangerous actions,” they wrote. “More broadly it contravenes the basic engineering principle that systems should be reliable and predictable.”

Imagine an AI agent piloting a boat in the real world—say, a tugboat pushing barges. (That day will be here soon, as multiple companies are developing AI-assisted autonomous navigation for ships.) I’m sure these systems will work well almost all the time. But because we don’t know whether we’ve thought of every conceivable reward function, we can’t be certain how our tugboat AI pilot will behave in every situation. Perhaps we’ve set the key goal—promptly deliver barges to X destination—but did we remember to make it clear that plowing over stray kayakers in your path is a no-no?

This is not an argument against using AI in high-risk settings. Everyone developing AI systems today knows about the reward-function problem and works to minimize it. Still, a small degree of uncertainty is inherent in all AI systems. Because these systems essentially teach themselves, we can never know exactly why an AI agent takes a certain action. It’s a black box. Unlike traditional computers, which we program to follow our instructions precisely, AI algorithms evolve over time as they grind through mountains of data. Their behavior emerges. “It’s like we’re not programming anymore,” data scientist Zeynep Tufekci said in a TED Talk recorded soon after the OpenAI study. “We’re growing intelligence that we don’t truly understand.”

Long before the AI era, engineers were learning to be aware of what became known as “emergent behaviors.” When London’s Millennium footbridge opened in 2000, its designers were dismayed to learn that the bridge deck naturally swayed side to side as pedestrians crossed it. The walkers in turn unconsciously adjusted their strides to compensate for the bridge’s movement. That created a feedback loop that drove the oscillations higher still until it became hard to walk straight. The wobbly footbridge had to be closed and redesigned.

Most man-made disasters result from such unexpected interactions between humans and complex technology. Digital technology tends to make complex systems faster and more efficient, but also more susceptible to these unplanned emergent behaviors. For example, in the Flash Crash of 2010, high-speed-trading algorithms interacted in a massive sell-off. The Dow lost almost 9 percent of its value in minutes, only to recover almost as quickly.

OpenAI’s CoastRunners experiment took place in the safety of a virtual lab. Today, we are all participants in the experiment to see what emergent behaviors lurk within our AI systems. Anyone who has used Large Language Model Chatbots, such as OpenAI’s ChatGPT, knows these bots are prone to wild hallucinations and a worrisome tendency to tell users just what they want to hear. Lawyers and scholars have been caught submitting AI-generated documents that cite nonexistent legal cases or research papers. Chatbots have encouraged depressed people to commit suicide. Recently, after Elon Musk touted an update to his Grok chatbot, the AI system went on a wild tear, repeating anti-Semitic memes, calling itself “MechaHitler,” and generating obscene rants about sexually violating a prominent online commentator. Yikes.

AI systems will keep getting better, but they may never fully banish the underlying uncertainties that can lead to the undesired and dangerous actions OpenAI’s researchers warned us about. So does that mean we should try to shut down AI platforms, or maybe set up a government bureaucracy in charge of AI safety? I say no. Trying to hobble or ban a breakthrough technology is a fool’s errand. And I fear anything beyond simple regulation is all too likely to backfire.

Instead, lawmakers, businesses, and individuals should approach AI with a mix of optimism and caution. Other potentially hazardous technologies, like aviation, chemical manufacturing, or nuclear power, make safe and beneficial contributions to our society. But this didn’t happen because we ignored their risks. Engineers and industry experts (and, yes, regulators in some cases) have spent decades studying accidents and improving safeguards. Rolling out AI will require an even higher level of vigilance.

I believe that, on the whole, AI will vastly improve efficiency, outcomes, and even safety in most industries. But right now, too many businesses are rushing to integrate AI systems without due diligence. AI advocates should instead take a page from other high-risk industries and focus not just on potential benefits, but also on the potential risks lurking in the algorithms. Well-integrated AI systems should include digital firewalls and off-ramps, not to mention OFF switches. It is especially important to keep human beings in the loop for critical functions. Humans may be forgetful and fallible. But we have a real-world common sense that AI systems still lack.

Future AI-assisted tugboats will probably run over kayakers less often than today’s human-piloted ones do. But they could also, just maybe, make errors we can’t conceive of. Our smartest course forward is to build the best AI navigation systems we can. But let’s keep a human in the pilot house for now. While AI systems make great assistants, we should never grow too trusting.

Photo: Justin Sullivan/Getty Images

We want to hear your thoughts about this article. Click here to send a letter to the editor.





Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

AI Insights

How Artificial Intelligence is Redefining Business Process Automation

Published

on


In today’s fast-paced economy, businesses are under constant pressure to operate more efficiently while reducing costs and improving customer experiences. Automation has long been a solution, but traditional methods such as simple scripts or rigid workflows often fall short in terms of adaptability and intelligence. This is where artificial intelligence comes into play. By partnering with an Artificial Intelligence Development Company, organizations can unlock new opportunities for smarter decision-making, streamlined operations, and scalable growth.

The growing interest in AI-driven automation reflects its role as a key enabler of digital transformation. Unlike conventional automation, AI systems can analyze large datasets, learn from patterns, and make predictions that allow businesses to stay competitive in increasingly dynamic markets.

Why AI for Business Process Automation

Traditional automation methods—such as scripts or Robotic Process Automation (RPA)—are useful for handling repetitive, rule-based tasks. However, they lack flexibility and cannot adapt to new or changing conditions without manual intervention. Artificial intelligence takes automation a step further by enabling systems to learn, adapt, and improve over time.

Through machine learning and advanced data analytics, AI can identify hidden patterns, make predictions, and support real-time decision-making. This makes it possible not only to automate processes but also to optimize them dynamically, driving more value than traditional approaches.

Key Areas of Application

Finance
AI enables faster and more secure payment processing, advanced transaction analysis, and fraud detection systems that continuously learn to recognize suspicious patterns.

Marketing and Sales
From demand forecasting and personalized customer experiences to intelligent chatbots, AI helps companies better understand their audience and increase conversion rates.

Manufacturing and Logistics
AI-powered tools streamline supply chain management, predict equipment maintenance needs, and reduce downtime, ensuring smoother operations and higher efficiency.

Human Resources (HR)
Recruitment processes are enhanced through automated resume screening, predictive analysis of employee retention, and data-driven insights for workforce planning.

Advantages of Implementation

The implementation of AI in business processes brings several clear advantages. One of the most significant is cost reduction: by automating repetitive, labor-intensive tasks, companies can cut manual rework and optimize resource allocation, which lowers operating expenses without sacrificing quality. AI also accelerates processes, as models are capable of handling large data streams in near real time.

This speed translates into faster approvals, more efficient routing, more accurate forecasting, and quicker customer responses, all of which shorten cycle times. Another key benefit is error minimization. With advanced pattern recognition and anomaly detection, AI reduces human error, ensures data consistency, and helps stabilize performance metrics across workflows.

Finally, AI offers unmatched flexibility and scalability. Systems continuously learn from new data, allowing them to adapt to changing rules and business volumes, while cloud-native deployments make it possible to scale operations seamlessly as demand increases.

Potential Challenges

Despite these benefits, businesses face certain challenges when adopting AI automation. Costs and timelines are among the first hurdles. The discovery phase, data preparation, model training, and integration require significant upfront investment, and success often depends on a phased delivery approach to manage risk.

Data quality is another critical factor. If the available data is incomplete, biased, or siloed, the outcomes will inevitably suffer. Strong governance, robust cleaning pipelines, and continuous monitoring are necessary to maintain reliable results. Ethical and legal considerations must also be addressed.

Organizations need to ensure that their AI solutions operate with transparency, fairness, and respect for privacy, while remaining fully compliant with regulatory standards and internal policies.

Conclusion

AI-driven automation is now a core lever of competitiveness, improving speed, accuracy, and margins while enabling adaptive operations. Start small, pick a high-impact process, validate with a pilot, then scale iteratively with robust data governance and clear ROI checkpoints.

















Source link

Continue Reading

AI Insights

Local Events | coastsidenews.com

Published

on


We recognize you are attempting to access this website from a country belonging to the European Economic Area (EEA) including the EU which
enforces the General Data Protection Regulation (GDPR) and therefore access cannot be granted at this time.

For any issues, contact customerservice@coastsidenews.com or call (650) 726-4424.



Source link

Continue Reading

AI Insights

AI to Disrupt Stocks, Force Investors to adopt Bitcoin — Analyst

Published

on


Bitcoin (BTC) will be a better investment than stocks in the coming decades due to artificial intelligence speeding up innovation cycles, making public companies inefficient investment vehicles, analyst and investor Jordi Visser predicted.

“If the innovation cycle is now sped up to weeks, we are in a video game where your company never hits escape velocity, and in that world, how do you invest? You don’t invest, you trade,” Visser told Anthony Pompliano on Saturday. He also said:

“Bitcoin is a belief. Beliefs last longer than ideas. There are no companies in the S&P 500 from 100 BC; gold has been around since then. Bitcoin will be around for a long, long time. It’s a belief at this point, and people can fight it, but it’s going to be around. 

I think you want to start shorting ideas, and you want to be long beliefs,” Visser continued, adding that AI may compress what normally would have taken 100 years to accomplish in only five years. 

Visser makes his predictions about the future of Bitcoin and the stock market in the AI age. Source: Anthony Pompliano

The prediction sheds light on the potential future of finance and capital structures, as artificial intelligence and blockchain technology disrupt the legacy financial system, driving more value and participants to the digital economy.

Related: Bitcoin faces a fee crisis that threatens network security: Can BTCfi help?

Eric Trump predicts $1M BTC as public companies adopt crypto

Companies continue buying crypto and Bitcoin directly as treasury reserve assets, often rebranding as pure crypto treasury plays and dumping their legacy business models.

These legacy financial vehicles provide equity investors with indirect exposure to BTC and crypto, while siphoning funds from traditional capital markets to digital finance.

Eric Trump predicted Bitcoin would hit $1 million per coin, telling the audience at the Bitcoin Asia 2025 conference in Hong Kong that nation-states, wealthy families, and public companies are all buying BTC.

Bitcoin’s market capitalization is over $2.1 trillion at the time of this writing, with some analysts predicting that it will overtake gold’s market cap over the coming decades.

The digital asset’s cross-border nature and ability to earn yield through deployment in decentralized finance (DeFi) applications give it a competitive advantage over gold as a store of value, some crypto industry executives have argued. 

Magazine: Danger signs for Bitcoin as retail abandons it to institutions: Sky Wee