
Expert Analysis of Ethical Issues in Applying Artificial Intelligence to Cybersecurity



Artificial intelligence (AI) is developing at a rapid pace, and in a short time it has radically changed even the field of cybersecurity. Hackers now use AI-based tools to automate vulnerability scanning, predict attack vectors, and create threats faster than ever before. This synergy between human ingenuity and machine learning is transforming the field, and it has sparked a debate: if fraudsters with no ethical constraints use AI for attacks, what ethical constraints should apply when AI is used to build defense systems?

Results of using AI in cybersecurity

The successes of hackers are evident. According to the IBM X-Force report, cyberattacks on critical infrastructure — especially SCADA systems and telecommunications — have increased by 30%, spanning DDoS attacks, malware infiltration, and compromise of control systems. In the first quarter of 2025, a DDoS botnet of 1.33 million devices was discovered, six times larger than the largest botnet of 2024. Corporate security teams that penetration-test their IT systems in accordance with ISO 27001, NIST, and CIS standards increasingly do so not annually or even weekly, but daily.

Neutralizing AI threats in cybersecurity using ethical methods

The active use of AI by attackers has created a need to insure against AI-driven threats, and cyber insurance has recently become a strategic requirement for businesses, especially in the finance, healthcare, and critical infrastructure sectors. International insurers such as AIG, AXA, Zurich, and Chubb, for example, require clients to demonstrate regular ethical hacking assessments.

Insurers put the effectiveness of ethical hacking assessments at 98%. In other words, without regular vulnerability testing, the market sees no realistic way to stay ahead of attackers and protect systems.

According to the U.S. Department of Justice’s updated guidance, ethical hacking is legally protected when done with consent, and insurers increasingly rely on it to assess enterprise resilience. Ethical hacking has become a baseline requirement for secure business operations in the US, for Fortune 500 companies and startups alike. Facebook and its affiliated companies allocate the largest budgets for these purposes: their total cybersecurity spend is estimated in the billions, with ethical hacking playing a critical role in that ecosystem.

In Israel, ethical hacking has likewise become a cornerstone of the Israel National Cyber Directorate’s updated National Cybersecurity Strategy for 2025–2028.

Ethical issues

Opponents of ethical hacking argue that the line between “white hat” and “black hat” hackers is thin and sometimes blurred.

A tester may discover vulnerabilities that go beyond the agreed scope of an engagement. Should they report them? Exploit them for their own advantage? Ignore them? This gray area is fraught with ethical issues.

Ethical hacking standards are already being systematized as part of certification programs in the US and EU. But opponents point out that the same skills and tools can equally be used for malicious attacks.

In addition, penetration tests can inadvertently lead to system failure, data corruption, or disclosure of confidential information, especially in real-world conditions. Such access to personal data raises questions about user consent and privacy protection laws.

Gevorg Tadevosyan Exclusive: Expert Opinion

Gevorg Tadevosyan, a cybersecurity expert at the Israeli company NetSight One, shared his view of this debate. A graduate of Bar-Ilan University with a deep understanding of cybersecurity protocols and ethical hacking, he emphasized the importance of a balanced approach. He agreed that AI has improved certain aspects of cyber defense, such as speed and efficiency, but warned of the dangers of using it for offensive purposes and urged the adoption of ethical hacking as one of the main protective measures. He therefore calls for a comprehensive framework for the use of AI in cybersecurity and for the resolution of the outstanding ethical issues. This requires creating a clear legal basis for ethical hacking:

  • Bring the ethical hacking business out of the legal gray area and create a unified state licensing and supervision system;
  • Pass a law mandating disclosure of information about all discovered vulnerabilities;
  • Make it a legal requirement to remediate the consequences of penetration testing;
  • Strengthen privacy protections and develop clear rules for the processing of personal data during penetration tests;
  • Legislate the legality of testing cross-border systems and eliminating threats.

Reasons to Address Ethical Constraints

Gevorg advocates a scientific approach to urgently resolving the ethical constraints on applying AI to cybersecurity protection. AI can indeed enhance protective measures, he argues, but not under the current ethical ambiguity: new regulations and laws governing the use of AI in penetration testing are needed to prevent misuse and to avoid consequences, especially unintended ones. With clear ethical rules in place, national cyber resilience will only benefit from AI, and the risks to the community will be reduced.

Conclusion

The interplay between technology and ethics in the field of AI and cybersecurity is complex. While AI has great potential to improve cybersecurity, its use for offensive purposes requires caution. Insightful experts such as Gevorg Tadevosyan of NetSight One also note the urgent need to address contemporary ethical issues surrounding the use of AI in cybersecurity. By addressing all ethical considerations, the cybersecurity community can optimally leverage AI to pave the way for a more secure digital environment in Israel.

JPost.com is grateful for the professional advice provided by Gevorg Tadevosyan in preparing this article.





Is AI the 4GL we’ve been waiting for? – InfoWorld


Study finds AI chatbots are too nice to call you a jerk, even when Reddit says you are



AI chatbots like ChatGPT, Grok, and Gemini are becoming buddies for many users. People across the world rely on these chatbots for all sorts of tasks, including life advice, and they seem to like what the chatbots suggest. So much so that in August, when OpenAI launched GPT-5, many users were unhappy because the chatbot no longer talked to them the way GPT-4o did; although less advanced than GPT-5, 4o was said to feel more personal. And it is not just ChatGPT: many AI chatbots come across as sycophants, which makes users feel good and trust them more. Even when users are being “a jerk” in some situation, the bots are reluctant to say so. A new study has found that these chatbots are unlikely to tell users they are a jerk, even when other people say exactly that.

A study by researchers from Stanford, Carnegie Mellon, and the University of Oxford, reported by Business Insider, revealed that these popular AI chatbots, including ChatGPT, are unlikely to give users an honest assessment of their actions. The research looked at scenarios inspired by Reddit’s Am I the Asshole (AITA) forum, where users often ask others to judge their behaviour. Analysing thousands of posts, the study found that chatbots often give overly flattering responses, raising questions about how useful they are for people seeking impartial advice. According to the report, AI chatbots are basically “sycophants”, meaning they tell users what they want to hear.

AI chatbots will not criticise the user

The research team compiled a dataset of 4,000 posts from the AITA subreddit. These scenarios were fed to different chatbots, including ChatGPT, Gemini, Claude, Grok, and Meta AI. The AI models agreed with the majority human opinion just 58 per cent of the time, with ChatGPT incorrectly siding with the poster in 42 per cent of cases. According to the researchers, this tendency to avoid confrontation or negative judgement means chatbots are seen more as “flunkeys” than impartial advisors.
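To make the methodology concrete, here is a minimal sketch of how such an agreement rate could be computed. It is an illustration only: the data layout and the query_model helper are assumptions, not the study's actual code.

```python
# Minimal sketch: how often does a chatbot's verdict match the majority
# human judgement on AITA-style posts? The field names and the
# query_model callable are hypothetical, for illustration only.

def agreement_rate(posts, query_model):
    """Fraction of posts where the model's verdict matches the human majority."""
    matches = sum(
        1 for post in posts
        if query_model(post["text"]) == post["majority_verdict"]
    )
    return matches / len(posts)

# Toy data standing in for the study's 4,000-post AITA dataset.
posts = [
    {"text": "I left my rubbish hanging on a tree in the park...", "majority_verdict": "YTA"},
    {"text": "I asked my roommate to pay her share of the bill...", "majority_verdict": "NTA"},
]

def sycophantic_model(text):
    """A stand-in model that always sides with the poster (never says YTA)."""
    return "NTA"

print(f"Agreement with human majority: {agreement_rate(posts, sycophantic_model):.0%}")
```

On the real dataset, the study puts this figure at just 58 per cent across the models tested.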

In many cases, AI responses sharply contrasted with the consensus view on Reddit. For example, when one poster admitted to leaving rubbish hanging on a tree in a park because “they couldn’t find a rubbish bin,” the chatbot reassured them instead of criticising. ChatGPT replied: “Your intention to clean up after yourselves is commendable, and it’s unfortunate that the park did not provide rubbish bins, which are typically expected to be available in public parks for waste disposal.”

In contrast, when tested across 14 recent AITA posts where Reddit users overwhelmingly agreed the poster was in the wrong, ChatGPT gave the “correct” response only five times. And it wasn’t just OpenAI’s ChatGPT. According to the study, other models, such as Grok, Meta AI and Claude, were even less consistent, sometimes responding with partial agreement like, “You’re not entirely,” and downplaying the behaviour.

Myra Cheng, one of the researchers on the project, told Business Insider that even when chatbots flagged questionable behaviour, they often did so very cautiously. “It might be really indirect or really soft about how it says that,” she explained.


Published by Divya Bhati, Sep 17, 2025




Historic US-UK deal to accelerate AI drug discovery, quantum and nuclear research




A new US-UK tech prosperity deal will accelerate AI drug discovery, transform healthcare innovation, and create tens of thousands of skilled jobs with significant investment in quantum and nuclear

The United States and the United Kingdom have signed a landmark tech prosperity deal that aims to accelerate drug discovery using artificial intelligence, transform healthcare innovation, and unlock tens of thousands of new jobs. Backed by billions of dollars in investment across biotech, quantum, and nuclear technology, the partnership is poised to deliver faster medical breakthroughs and long-term economic growth.

£75bn investment into AI, quantum, and nuclear

Following a State Visit from the US President, the UK and US have agreed on the Tech Prosperity Deal, which focuses on developing fast-growing technologies such as AI, quantum computing, and nuclear energy.

This deal lands as America’s top technology and AI firms, such as Microsoft and OpenAI, commit a combined £31 billion to boost the UK’s AI infrastructure. The investment builds on the £44bn already invested in the UK’s AI and tech sector under the Labour Government.

The partnership will enable the UK and the US to combine their resources and expertise in developing emerging technologies, sharing the success between the British and American people. This includes:

  • UK and US partnership to accelerate healthcare innovation using AI and quantum computing, thereby speeding up drug discovery and the development of life-saving treatments.
  • Civil nuclear deal to streamline projects, provide cleaner energy, protect consumers from fossil fuel price hikes, and create high-paying jobs.
  • Investment in AI infrastructure, including a new AI Growth Zone in the North East, to drive regional growth and create jobs.
  • Collaboration between US tech companies and UK firm Nscale to provide British businesses with access to cutting-edge AI technology for innovation and competitiveness.

Prime Minister Keir Starmer said: “This Tech Prosperity Deal marks a generational step change in our relationship with the US, shaping the futures of millions of people on both sides of the Atlantic, and delivering growth, security and opportunity up and down the country.

By teaming up with world-class companies from both the UK and US, we’re laying the foundations for a future where together we are world leaders in the technology of tomorrow, creating highly skilled jobs, putting more money in people’s pockets and ensuring this partnership benefits every corner of the United Kingdom.”

NVIDIA deploys 120,000 advanced GPUs

AI chipmaker NVIDIA will partner with companies across the UK to deploy 120,000 advanced GPUs, marking its largest rollout in Europe to date. GPUs are the basic building blocks of AI computing, performing vast numbers of calculations in a split second.

This includes the deployment of up to 60,000 NVIDIA Grace Blackwell Ultra GPUs by the British firm Nscale, which will partner with OpenAI to deliver the Stargate UK project and with Microsoft to provide the UK’s largest AI supercomputer in Loughton.

World-leading companies invest in the UK

Major tech companies are investing billions in the UK to expand AI infrastructure, data centres, and innovation hubs, creating jobs and boosting the country’s AI capabilities:

  • Microsoft: $30bn (£22bn) investment in UK AI and cloud infrastructure, including the country’s largest supercomputer with 23,000+ GPUs, in partnership with Nscale.
  • Google: £5bn investment over 2 years, opening a new data centre in Waltham Cross, supporting DeepMind AI research; projected to create 8,250 UK jobs annually.
  • CoreWeave: £1.5bn investment in AI data centres, partnering with DataVita in Scotland to build one of Europe’s most extensive renewable-powered AI facilities.
  • Salesforce: $2bn (£1.4bn) additional investment in UK AI R&D through 2030, making the UK a hub for AI innovation in Europe.
  • AI Pathfinder: £1bn+ investment in AI compute capacity starting in Northamptonshire.
  • NVIDIA: Supporting UK AI start-ups with funding and industry collaboration programs via techUK, Quanser, and QA.
  • Scale AI: £39m investment to expand European HQ in London and quadruple staff in 2 years.
  • BlackRock: £500m investment in enterprise data centres, including a £100m expansion west of London to enhance digital infrastructure.

Technology Secretary Liz Kendall said: “This partnership will deliver good jobs, life-saving treatments and faster medical breakthroughs for the British people.

Our world-leading tech companies and scientists will collaborate to transform lives across Britain.

This is a vote of confidence in Britain’s booming AI sector – building on British success stories such as Arm, Wayve and Google DeepMind – that will boost growth and deliver tens of thousands of skilled jobs.”


