AI Research

Expert Analysis of Ethical Issues in Applying Artificial Intelligence to Cybersecurity – Latest Hacking News


Artificial intelligence (AI) is developing rapidly and, in a short time, has radically changed even the field of cybersecurity. Hackers now use AI-based tools to automate vulnerability scanning, predict attack vectors, and create threats faster than ever before. This synergy between human ingenuity and machine learning is transforming the field and has sparked a debate: if fraudsters with no ethical constraints use AI for attacks, what ethical constraints should apply when using AI to build defense systems?

Results of using AI in cybersecurity

Hackers' successes are evident. According to the IBM X-Force report, cyberattacks on critical infrastructure, especially SCADA systems and telecommunications, have increased by 30%. These include DDoS attacks, malware infiltration, and the compromise of control systems. In the first quarter of 2025, a DDoS botnet of 1.33 million devices was discovered, six times larger than the largest botnet of 2024. Corporate cybersecurity departments that conduct penetration tests of IT systems in accordance with ISO 27001, NIST, and CIS standards increasingly do so not annually or weekly, but daily.

Neutralizing AI threats in cybersecurity using ethical methods

The active use of AI has created a need to insure against AI-driven threats. Cyber insurance has recently become a strategic requirement for businesses, especially in the finance, healthcare, and critical infrastructure sectors. International insurers such as AIG, AXA, Zurich, and Chubb, for example, require clients to demonstrate regular ethical hacking assessments.

Insurers estimate the effectiveness of ethical hacking assessments at 98%. In other words, without regular vulnerability testing, the market sees no realistic chance of staying ahead of hackers and protecting systems.

According to the U.S. Department of Justice’s updated guidance, ethical hacking is legally protected when done with consent, and insurers increasingly rely on it to assess enterprise resilience. Ethical hacking has become a basic requirement for secure business operations in the US, for Fortune 500 companies and startups alike. Facebook and its affiliated companies allocate the largest budgets for these purposes; their total cybersecurity spend is estimated in the billions, with ethical hacking playing a critical role in that ecosystem.

In Israel, ethical hacking has likewise become a cornerstone of the Israel National Cyber Directorate's updated National Cybersecurity Strategy for 2025–2028.

Ethical issues

Opponents of ethical hacking argue that the line between “white-hat” and “black-hat” hackers is thin and sometimes blurred.

Hackers may discover vulnerabilities beyond the agreed scope. Should they report them? Exploit them for their own benefit? Ignore them? This gray area is fraught with ethical issues.

Ethical hacking standards are already being systematized through ethical hacking certifications in the US and EU. Opponents counter that ethical hacking skills and tools can just as easily be turned to malicious attacks.

In addition, penetration tests can inadvertently cause system failures, data corruption, or disclosure of confidential information, especially when run against live production systems. Such access to personal data raises questions about user consent and privacy protection laws.

Exclusive Expert Opinion: Gevorg Tadevosyan

Gevorg Tadevosyan, a cybersecurity expert at the Israeli company NetSight One, shared his view of this debate. A graduate of Bar-Ilan University with a deep understanding of cybersecurity protocols and ethical hacking, he emphasized the importance of a balanced approach. He agreed that AI has improved certain aspects of cyber defense, such as speed and efficiency, but warned against the dangers of using AI for offensive purposes and urged the adoption of ethical hacking as one of the main protective measures. He therefore calls for a comprehensive framework for the use of AI in cybersecurity and for the resolution of existing ethical issues. This requires a clear legal basis for ethical hacking:

  • Bring the ethical hacking business out of the legal gray area and create a unified state licensing and supervision system;
  • Pass a law on mandatory disclosure of information about all discovered vulnerabilities;
  • Make remediation of the consequences of penetration testing mandatory by law;
  • Strengthen privacy protection and develop clear rules for processing personal data during penetration testing;
  • Legislate the legality of testing cross-border systems and eliminating threats.

Reasons to Address Ethical Constraints

Gevorg’s vision takes a scientific approach to urgently addressing the ethical constraints on applying AI to cybersecurity defense. He asserts that AI can indeed strengthen protective measures, but not under the current patchwork of ethical constraints. New regulations and laws governing the use of AI for penetration testing are needed to prevent AI from being misused and to avoid consequences, especially unintended ones. With clear ethical rules in place, national cyber resilience will only benefit from AI, and the risks to the community will be reduced.

Conclusion

The interplay between technology and ethics in the field of AI and cybersecurity is complex. While AI has great potential to improve cybersecurity, its use for offensive purposes requires caution. Insightful experts such as Gevorg Tadevosyan of NetSight One also note the urgent need to address contemporary ethical issues surrounding the use of AI in cybersecurity. By addressing all ethical considerations, the cybersecurity community can optimally leverage AI to pave the way for a more secure digital environment in Israel.

JPost.com is grateful for the professional advice provided by Gevorg Tadevosyan in preparing this article.






(Policy Address 2025) HK earmarks HK$3B for AI research and talent recruitment – The Standard (HK)




SIFThinker: Spatially-Aware Image Focus for Visual Reasoning


By Zhangquan Chen and 6 other authors

Abstract: Current multimodal large language models (MLLMs) still face significant challenges in complex visual tasks (e.g., spatial understanding, fine-grained perception). Prior methods have tried to incorporate visual reasoning; however, they fail to leverage attention correction with spatial cues to iteratively refine their focus on prompt-relevant regions. In this paper, we introduce SIFThinker, a spatially-aware “think-with-images” framework that mimics human visual perception. Specifically, SIFThinker enables attention correction and image region focusing by interleaving depth-enhanced bounding boxes and natural language. Our contributions are twofold: First, we introduce a reverse-expansion-forward-inference strategy that facilitates the generation of interleaved image-text chains of thought for process-level supervision, which in turn leads to the construction of the SIF-50K dataset. Second, we propose GRPO-SIF, a reinforced training paradigm that integrates depth-informed visual grounding into a unified reasoning pipeline, teaching the model to dynamically correct and focus on prompt-relevant regions. Extensive experiments demonstrate that SIFThinker outperforms state-of-the-art methods in spatial understanding and fine-grained visual perception, while maintaining strong general capabilities, highlighting the effectiveness of our method. Code: this https URL.
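The interleaved image-text chain of thought described in the abstract can be pictured with a small data structure: each reasoning step pairs a depth-enhanced bounding box with a natural-language thought, and the model iteratively narrows its focus. This is a hypothetical sketch only; the names (`FocusStep`, `crop_to_region`) and fields are illustrative assumptions, not the authors' actual API or the SIF-50K format.

```python
from dataclasses import dataclass

@dataclass
class FocusStep:
    """One step in an interleaved image-text chain of thought.

    Hypothetical structure: a bounding box with an attached depth cue,
    plus the natural-language reasoning about that region.
    """
    bbox: tuple        # (x1, y1, x2, y2) in image pixel coordinates
    mean_depth: float  # depth cue attached to the region
    thought: str       # natural-language reasoning about the region

def crop_to_region(image_size, bbox):
    """Clamp a bounding box to the image bounds, mimicking refocusing."""
    w, h = image_size
    x1, y1, x2, y2 = bbox
    return (max(0, x1), max(0, y1), min(w, x2), min(h, y2))

# A two-step chain: scan the whole scene, then narrow onto the
# prompt-relevant (nearer) region.
chain = [
    FocusStep(bbox=(0, 0, 640, 480), mean_depth=3.2,
              thought="Scan the whole scene for the queried object."),
    FocusStep(bbox=(220, 140, 400, 310), mean_depth=1.1,
              thought="Focus on the nearer object matching the prompt."),
]
print(crop_to_region((640, 480), (-10, 20, 700, 300)))  # → (0, 20, 640, 300)
```

The clamping step stands in for the "attention correction" idea: each refinement stays within the image while shrinking toward the region the prompt actually asks about.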

Submission history

From: Zhangquan Chen
[v1]
Fri, 8 Aug 2025 12:26:20 UTC (5,223 KB)
[v2]
Thu, 14 Aug 2025 10:34:22 UTC (5,223 KB)
[v3]
Sun, 24 Aug 2025 13:04:46 UTC (5,223 KB)
[v4]
Tue, 16 Sep 2025 09:40:13 UTC (5,223 KB)




RingMo-Aerial: An Aerial Remote Sensing Foundation Model With Affine Transformation Contrastive Learning


By Wenhui Diao and 10 other authors

Abstract: Aerial Remote Sensing (ARS) vision tasks present significant challenges due to the unique viewing angle characteristics. Existing research has primarily focused on algorithms for specific tasks, which have limited applicability in a broad range of ARS vision applications. This paper proposes RingMo-Aerial, aiming to fill the gap in foundation model research in the field of ARS vision. A Frequency-Enhanced Multi-Head Self-Attention (FE-MSA) mechanism is introduced to strengthen the model’s capacity for small-object representation. Complementarily, an affine transformation-based contrastive learning method improves its adaptability to the tilted viewing angles inherent in ARS tasks. Furthermore, the ARS-Adapter, an efficient parameter fine-tuning method, is proposed to improve the model’s adaptability and performance in various ARS vision tasks. Experimental results demonstrate that RingMo-Aerial achieves SOTA performance on multiple downstream tasks. This indicates the practicality and efficacy of RingMo-Aerial in enhancing the performance of ARS vision tasks.
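The affine-transformation contrastive idea can be illustrated with a toy sketch: two randomly affine-transformed "views" of the same features (standing in for tilted aerial viewpoints) are pulled together by a standard NT-Xent contrastive loss. Everything here is an assumption for illustration — the 2-D toy features, the transform ranges, and the loss implementation are generic contrastive-learning machinery, not the RingMo-Aerial training code.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_affine(points):
    """Apply a random affine map (rotation + shear + translation) to 2-D
    points, mimicking the tilted viewing angles of aerial imagery."""
    theta = rng.uniform(-np.pi / 6, np.pi / 6)
    shear = rng.uniform(-0.2, 0.2)
    A = np.array([[np.cos(theta), -np.sin(theta) + shear],
                  [np.sin(theta),  np.cos(theta)]])
    t = rng.uniform(-5, 5, size=2)
    return points @ A.T + t

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss between two batches of embeddings;
    row i of z1 and row i of z2 form the positive pair."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # pairwise similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # -log p(positive)

# Two affine "views" of the same toy features stand in for encoder outputs.
x = rng.normal(size=(8, 2))
loss = nt_xent(random_affine(x), random_affine(x))
print(loss)
```

In the real method an image encoder sits between the augmented views and the loss; minimizing the loss teaches the encoder to map differently-tilted views of the same scene to nearby embeddings.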

Submission history

From: Tong Ling
[v1]
Fri, 20 Sep 2024 10:03:14 UTC (36,295 KB)
[v2]
Mon, 31 Mar 2025 09:07:12 UTC (30,991 KB)
[v3]
Thu, 29 May 2025 14:03:42 UTC (13,851 KB)
[v4]
Tue, 16 Sep 2025 16:47:46 UTC (15,045 KB)


