AI Research

Artificial Intelligence emerging as key tool in managing business risks: Report

New Delhi [India], September 1 (ANI): Artificial Intelligence (AI) is increasingly helping businesses manage risks more effectively and make better decisions, according to a report by Rubix.

Citing examples, the report highlighted that AI enables insurance companies to make risk predictions at scale by analysing huge volumes of data: “AI enables Insurance Companies to make these risk predictions at scale on huge volumes of data which would be very difficult for human beings to do.”

Based on these predictions, the report suggests, insurance companies can refine their business models and adjust pricing strategies according to the level of risk involved.
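The report names no specific tools, but the workflow it describes can be sketched in a few lines of Python: fit a claim-probability model on historical policy data, then scale premiums by the predicted risk. Everything below, the column names, the model choice and the pricing rule, is a hypothetical illustration rather than anything taken from the Rubix report.

# Hypothetical sketch of risk-based pricing; column names, model choice and
# the pricing rule are illustrative only, not taken from the report.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

policies = pd.read_csv("policies.csv")              # assumed historical policy data
features = ["age", "vehicle_value", "region_code", "prior_claims"]
X, y = policies[features], policies["had_claim"]    # 1 if a claim was filed

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]            # predicted claim probability

# Toy pricing rule: scale a base premium by predicted risk, with a floor and cap.
BASE_PREMIUM = 500.0
premium = (BASE_PREMIUM * (1 + 2 * risk)).clip(400, 2000)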

The report explained that risks can be divided into different stages, including in-process risks. A notable example of in-process risks comes from the operations of credit card companies and payment systems, where fraud is an inherent challenge.

Companies such as Visa and Mastercard deploy highly sophisticated systems to monitor transactions, identify potentially fraudulent activities, flag and stop suspicious transactions, and immediately alert card users.

The report shared that these fraud analytics systems are significantly strengthened with AI, making them more effective.
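The report does not describe the card networks' internal systems, but the general pattern it refers to, scoring each transaction as it arrives and flagging outliers for review, can be sketched with an off-the-shelf anomaly detector. The feature names, synthetic data and alert logic below are assumptions made for illustration, not Visa's or Mastercard's actual pipeline.

# Illustrative anomaly scoring of card transactions; features and the alert
# logic are invented, not any card network's real system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic history of past transactions: [amount, hour_of_day, distance_km]
history = np.column_stack([
    rng.lognormal(3.0, 1.0, 5000),   # typical purchase amounts
    rng.integers(0, 24, 5000),       # time of day
    rng.exponential(5.0, 5000),      # distance from the cardholder's usual area
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

incoming = np.array([[4200.0, 3, 950.0]])    # large amount, 3 a.m., far from home
if detector.predict(incoming)[0] == -1:      # -1 means the model flags it as anomalous
    print("Hold the transaction and alert the cardholder")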

The scale of operations also highlights the importance of AI.

Mastercard, for instance, experiences nearly 200 fraudulent attempts every minute. During 2018-19, the company prevented fraudulent transactions worth USD 52 billion across its network.

The report pointed out that when a business operates on such a massive scale, AI becomes a critical part of its anti-fraud systems.

The report further explained the importance of post-facto risk analysis. After a risk event occurs, investigating the cause is essential to prevent similar incidents in the future.

AI algorithms and machine learning tools can be used to test systems and identify vulnerabilities, allowing companies to strengthen their defences and plug gaps.

Another area of growing importance is cybersecurity. In today’s always-on, digitally connected world, individuals and organisations are continuously exposed to potential cyberattacks. With the constant flow of electronic messages across multiple devices, the risks are high.

The report stressed that it takes a machine to beat a machine when it comes to detecting and responding to cyber threats. Many technology tools and email solutions now incorporate strong AI layers that detect intrusions, block attackers, and provide effective defence mechanisms.
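Commercial email-security products differ widely, but a minimal version of the kind of AI layer the report alludes to is a text classifier trained on labelled messages. The tiny training set and the quarantine threshold below are purely illustrative.

# Minimal sketch of a phishing classifier for inbound email; the examples and
# the 0.8 quarantine threshold are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password at this link now",
    "Invoice attached, please enable macros to view",
    "Team lunch moved to 1 pm on Friday",
    "Quarterly report draft attached for your review",
]
labels = [1, 1, 0, 0]   # 1 = malicious, 0 = benign

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(emails, labels)

new_message = "Urgent: confirm your password to avoid account suspension"
score = clf.predict_proba([new_message])[0, 1]
print(f"Malicious probability: {score:.2f}")
if score > 0.8:
    print("Quarantine the message and notify the security team")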

The report concluded that AI has very significant uses in cybersecurity, fraud detection, and post-event analysis, making it a vital part of modern risk management strategies for businesses. (ANI)

(This content is sourced from a syndicated feed and is published as received. The Tribune assumes no responsibility or liability for its accuracy, completeness, or content.)






AI Research

UWF receives $100,000 grant from Air Force to advance AI and robotics research

PENSACOLA, Fla. — The University of West Florida was just awarded a major grant to help develop cutting-edge artificial intelligence technology.

The US Air Force Research Laboratory awarded $100,000 to UWF’s Intelligent Systems and Robotics doctorate program.

The grant supports research in Artificial Intelligence and robotics while training PhD students.

The funding was awarded to explore how these systems can support military operations, but also how they can be applied to issues we could face locally, such as disaster response.

Unlike generative AI in apps like ChatGPT, this research focuses on “reinforcement learning.”

“It’s action-driven. It’s designed to produce strategies versus content and text or visual content,” said Dr. Kristen “Brent” Venable with UWF.

Dr. Venable is leading the research.

Her team is designing simulations that teach autonomous systems such as robots and drones to adapt to the environment around them without human help, enabling them to make decisions on their own.

“So if we deployed them and let them go autonomously, sometimes far away, they should be able to decide whether to communicate, whether to go in a certain direction,” she said.
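Dr. Venable's simulations are not public, but the action-driven idea she describes can be illustrated with tabular Q-learning on a toy grid, where an agent standing in for a drone learns a movement strategy purely from reward. The grid size, rewards and hyperparameters below are invented for the example and are not UWF's actual setup.

# Toy reinforcement-learning sketch: an agent learns to reach a target cell on a
# 5x5 grid. All numbers here are made up for illustration.
import numpy as np

SIZE, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.95, 0.2
rng = np.random.default_rng(0)

for episode in range(2000):
    r, c = 0, 0
    while (r, c) != GOAL:
        # Explore occasionally, otherwise act greedily on current estimates.
        a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(Q[r, c].argmax())
        dr, dc = ACTIONS[a]
        nr = min(max(r + dr, 0), SIZE - 1)
        nc = min(max(c + dc, 0), SIZE - 1)
        reward = 10.0 if (nr, nc) == GOAL else -1.0   # step cost rewards short paths
        Q[r, c, a] += alpha * (reward + gamma * Q[nr, nc].max() - Q[r, c, a])
        r, c = nr, nc

# The greedy policy over Q is the learned "strategy" -- actions, not text or images.
print("Best first move from the start cell:", ACTIONS[int(Q[0, 0].argmax())])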

The initial goal of the grant is to help the US military leverage machine learning.

But Dr. Venable says the technology has potential to help systems like local emergency management during a disaster.

“You can see how this could be applied for disaster response,” she said. “Think about having some drones that have to fly over a zone and find people to be rescued or assets that need to be restored.”

Dr. Venable says UWF is poised to deliver on its promise to advance the technology.

The doctorate program was created with Pensacola’s Institute for Human and Machine Cognition, giving students access to world-class AI and robotics research.

Over the last five years, the program has expanded to more than 30 students.

“We are very well positioned because the way we are, in some sense, lean and mean is attractive to funding agencies,” Dr. Venable said. “Because we can deliver results while training the next generation.”

The local investment by the Air Force comes as artificial intelligence takes center stage nationally.

On Thursday, First Lady Melania Trump announced a presidential AI challenge for students and educators.

President Trump has also signed an executive order to expand AI education.

Dr. Venable says she’s confident the administration’s push for research will benefit the university’s efforts, as the one-year grant will only go so far.

“I think the administration is correctly identifying [this] as a key factor in having the US lead on the research,” she said. “It’s a good seedling to start the conversation for one year.”

The research conducted at UWF and the IHMC is helping put the area on the map as an artificial intelligence hub.

Dr. Venable says they’re actively discussing how to apply for more grants to help with this ongoing research.




AI Research

NSF Seeks to Advance AI Research Via New Operations Center


AI Research

UCR Researchers Strengthen AI Defenses Against Malicious Rewiring

As generative artificial intelligence (AI) technologies evolve and establish their presence in devices as commonplace as smartphones and automobiles, a significant concern arises. These powerful models, born from intricate architectures running on robust cloud servers, often undergo significant reductions in their operational capacities when adapted for lower-powered devices. One of the most alarming consequences of these reductions is that critical safety mechanisms can be lost in this transition. Researchers from the University of California, Riverside (UCR) have identified this issue and have innovated a solution aimed at preserving AI safety even as its operational framework is simplified for practical use.

The reduction of generative AI models entails the removal of certain internal processing layers, which are vital for maintaining safety standards. While smaller models are favored for their enhanced speed and efficiency, this trimming can inadvertently strip away the underlying mechanisms that prevent the generation of harmful outputs such as hate speech or instructions on illicit activities. This represents a double-edged sword: the very modifications aimed at optimizing functional performance may render these models susceptible to misuse.
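The article does not spell out a particular compression recipe, but the basic operation it refers to, dropping internal layers so a model fits on a lower-powered device, can be shown with a toy PyTorch model. The layer counts and module layout below are hypothetical and are not the UCR team's setup.

# Toy illustration of layer removal, not the UCR procedure or any production model.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    # A stand-in for a decoder stack: embedding, N identical blocks, output head.
    def __init__(self, vocab=100, dim=64, n_layers=12):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            for _ in range(n_layers)
        )
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        h = self.embed(x)
        for block in self.blocks:
            h = block(h)
        return self.head(h)

model = TinyLM()
# "Downsizing": keep only every other block. If safety behaviour happened to live
# in the dropped blocks, the smaller model loses it.
model.blocks = nn.ModuleList(list(model.blocks)[::2])

tokens = torch.randint(0, 100, (1, 16))
print(model(tokens).shape)   # still produces logits, now with half the depth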

The challenge lies not only in the effectiveness of the AI systems but also in the very nature of open-source models, which are inherently different from proprietary systems. Open-source AI models can be easily accessed, modified, and deployed by anyone, significantly enhancing transparency and encouraging academic growth. However, this openness also invites a plethora of risks, as oversight becomes difficult when these models deviate from their original design. In situations devoid of continuous monitoring and moderation, the potential misuse of these technologies grows exponentially.

In the context of their research, the UCR team concentrated on the degradation of safety features that occurs when AI models are downsized. Amit Roy-Chowdhury, the senior author of the study and a professor at UCR, articulates the concern quite clearly: “Some of the skipped layers turn out to be essential for preventing unsafe outputs.” This statement highlights the potential dangers of a seemingly innocuous tweak aimed at optimizing computational ability. The crux of the issue is that removal of layers may lead a model to generate dangerous outputs—including inappropriate content or even detailed instructions for harmful activities like bomb-making—when it encounters complex prompts.

The researchers’ strategy involved a novel approach to retraining the internal structure of the AI model. Instead of relying on external filters or software patches, which are often quickly circumvented or ineffective, the research team sought to embed a foundational understanding of risk within the core architecture of the model itself. By reassessing how the model identifies and interprets dangerous content, the researchers were able to instill a level of intrinsic safety, ensuring that even after layers were removed, the model retained its ability to refuse harmful queries.
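The article does not detail the team's training objective, so the sketch below is a generic illustration of the underlying idea rather than the UCR method: fine-tune while randomly skipping blocks, so that a refusal-style decision does not depend on any single layer and therefore survives when layers are later removed. The toy data, labels and dimensions are all invented.

# Illustrative only; NOT the UCR objective. Shows one generic way to make a
# behaviour robust to depth reduction: train with random layer skipping.
import torch
import torch.nn as nn

torch.manual_seed(0)
blocks = nn.ModuleList(nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(8))
refusal_head = nn.Linear(32, 2)          # toy "answer vs. refuse" decision head

def forward(x, keep_prob=1.0):
    for block in blocks:
        if torch.rand(()) < keep_prob:   # apply each block only with some probability
            x = x + block(x)             # residual connection keeps shapes stable
    return refusal_head(x)

params = [*blocks.parameters(), *refusal_head.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    x = torch.randn(64, 32)
    y = (x[:, 0] > 0).long()             # stand-in label: 1 = "should refuse"
    loss = loss_fn(forward(x, keep_prob=0.5), y)   # heavy layer dropping during training
    opt.zero_grad(); loss.backward(); opt.step()

# At "deployment", even a pruned stack (first half of the blocks) should still
# produce the refusal decision it was conditioned to give.
pruned = nn.ModuleList(list(blocks)[:4])
h = torch.randn(4, 32)
for block in pruned:
    h = h + block(h)
print(refusal_head(h).argmax(dim=1))     # 0 = answer, 1 = refuse (toy labels)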

The core of their testing utilized LLaVA 1.5, a sophisticated vision-language model that integrates both textual and visual data. The researchers discovered that certain combinations of innocuous images with malicious inquiries could effectively bypass initial safety measures. Their findings were alarming; in a particular instance, the modified model furnished dangerously specific instructions for illicit activities. This critical incident underscored the pressing need for an effective method to safeguard against such vulnerabilities in AI systems.

Nevertheless, after implementing their retraining methodology, the researchers noted a significant improvement in the model’s safety metrics. The retrained AI demonstrated a consistent and unwavering refusal to engage with perilous queries, even when its architecture was substantially diminished. This illustrates a momentous leap forward in AI safety, where the model’s internal conditioning ensures proactive, protective behavior from the outset.

Saketh Bachu, a graduate student and co-lead author of the study, describes this focus as a form of “benevolent hacking.” By proactively reinforcing the fortifications of AI models, the risk of vulnerability exploitation diminishes. The long-term ambition behind this research is to establish methodologies that guarantee safety across every internal layer of the AI architecture. This approach aims to craft a more resilient framework, capable of operating securely in varied real-world conditions.

The implications of this research span beyond the technical realm; they touch upon ethical considerations and societal impacts as AI continues to infiltrate daily life. As generative AI becomes ubiquitous in our gadgets and tools, ensuring that these technologies do not propagate harm is not only a technological challenge but a moral imperative. There exists a delicate balance between innovation and responsibility, and pioneering research such as that undertaken at UCR is pivotal in traversing this complex landscape.

Roy-Chowdhury encapsulates the team’s vision by asserting, “There’s still more work to do. But this is a concrete step toward developing AI in a way that’s both open and responsible.” His words resonate deeply within the ongoing discourse surrounding generative AI, as the conversation evolves from mere implementation to a collaborative effort aimed at securing the future of AI development. The landscape of AI technologies is ever-shifting, and through continued research and exploration, academic institutions such as UCR signal the emergence of a new era where safety and openness coalesce. Their commitment to fostering a responsible and transparent AI ecosystem offers a bright prospect for future developments in the field.

The research was conducted within a collaborative environment, drawing insights not only from professors but also from a dedicated team of graduate students. This collective approach underscores the significance of interdisciplinary efforts in tackling complex challenges posed by emerging technologies. The team, consisting of Amit Roy-Chowdhury, Saketh Bachu, Erfan Shayegani, and additional doctoral students, collaborated to create a robust framework aimed at revolutionizing how we view AI safety in dynamic environments.

Through their contributions, the University of California, Riverside stands at the forefront of AI research, championing methodologies that underline the importance of safety amid innovation. Their work serves as a blueprint for future endeavors that prioritize responsible AI development, inspiring other researchers and institutions to pursue similar paths. As generative AI continues to evolve, the principles established by this research will likely have a lasting impact, shaping the fundamental understanding of safety in AI technologies for generations to come.

Ultimately, as society navigates this unfolding narrative in artificial intelligence, the collaboration between academia and industry will be vital. The insights gained from UCR’s research can guide policies and frameworks that ensure the safe and ethical deployment of AI across various sectors. By embedding safety within the core design of AI models, we can work towards a future where these powerful tools enhance our lives without compromising our values or security.

While the journey towards achieving comprehensive safety in generative AI is far from complete, advancements like those achieved by the UCR team illuminate the pathway forward. As they continue to refine their methodologies and explore new horizons, the research serves as a clarion call for vigilance and innovation in equal measure. As we embrace a future that increasingly intertwines with artificial intelligence, let us collectively advocate for an ecosystem that nurtures creativity and safeguards humanity.

Subject of Research: Preserving AI Safeguards in Reduced Models
Article Title: UCR’s Groundbreaking Approach to Enhancing AI Safety
News Publication Date: October 2023
Web References: arXiv paper
References: International Conference on Machine Learning (ICML)
Image Credits: Stan Lim/UCR

Tags: AI safety mechanisms, generative AI technology concerns, innovations in AI safety standards, internal processing layers in AI, malicious rewiring in AI models, open-source AI model vulnerabilities, operational capacity reduction in AI, optimizing functional performance in AI, preserving safety in low-powered devices, risks of smaller AI models, safeguarding against harmful AI outputs, UCR research on AI defenses


