
Leveraging explainable artificial intelligence for early detection and mitigation of cyber threat in large-scale network environments



This work introduces a novel LXAIDM-CTLSN method that aims to detect and classify cyberattacks to strengthen cybersecurity. The LXAIDM-CTLSN model encompasses data normalization, MOA-based feature selection, SDAE-based cyberthreat detection, HOA-based parameter selection, and the LIME process, as shown in Fig. 2. The novelty of the LXAIDM-CTLSN model lies in its customized framework, which combines min-max normalization, MOA-based feature selection, and SDAE-based cyberattack detection, further enhanced by the HOA model for hyperparameter tuning and LIME for explainable threat classification. This comprehensive approach outperforms conventional methods in both performance and interpretability.

Fig. 2

Overall process of LXAIDM-CTLSN method.

Data normalization

In the initial phase, the LXAIDM-CTLSN method performs data normalization using the min-max normalization technique34. Min-max normalization is a data preprocessing method that rescales features to a common range, usually zero to one. In the context of cyberattacks, min-max normalization standardizes several attack indicators, namely response times, frequency of attacks, and severity scores. This is vital for effective threat comparison and detection, allowing cybersecurity systems to detect anomalies and patterns accurately. Data normalization also makes it easier to analyse and integrate data from numerous sources, augmenting the performance of threat detection systems.
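As a minimal illustration, the following sketch applies min-max scaling column-wise to a feature matrix; the example indicators (response time, attack frequency, severity score) and their values are hypothetical stand-ins, not data from the paper.

```python
import numpy as np

def min_max_normalize(X, eps=1e-12):
    """Rescale each column of X to the [0, 1] range (min-max normalization)."""
    X = np.asarray(X, dtype=float)
    col_min = X.min(axis=0)
    col_max = X.max(axis=0)
    # eps guards against division by zero for constant columns
    return (X - col_min) / (col_max - col_min + eps)

# Hypothetical attack indicators: [response_time_ms, attacks_per_hour, severity_score]
X = np.array([[120.0, 4, 7.5],
              [ 35.0, 1, 2.0],
              [980.0, 9, 9.8]])
print(min_max_normalize(X))
```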

MOA-based feature selection

The MOA is used for feature selection, which in turn reduces computational complexity35. This model is chosen for its efficiency in mitigating computational complexity while maintaining high accuracy. Unlike conventional methods, MOA can effectually navigate large and intricate datasets by selecting the most relevant features, thus enhancing overall model performance. By replicating the mayfly’s swarming behaviour, it explores the feature space effectively and avoids the risk of local optima. MOA is also computationally less expensive than other optimization techniques, making it more appropriate for real-time applications. By reducing the number of features, MOA not only speeds up the training process but also mitigates the risk of overfitting, ensuring a more robust model. This makes it an ideal choice for handling large-scale datasets in cybersecurity applications. Figure 3 illustrates the steps involved in the MOA technique.

Fig. 3

Steps involved in the MOA model.

This study discusses the usage of MOA as an optimization algorithm. A comparative study was conducted using PSO and FA with a similar objective function to evaluate the effectiveness of the MOA model. The algorithm is inspired by the mayfly (MF) species, well known for its extraordinarily short twenty-four-hour lifespan. Researchers have observed differences between female and male MFs within swarms; owing to differences in natural drive, male MFs consistently show higher optimization levels than female MFs. These characteristics resemble PSO, where individuals, like MFs, update their position \(X\left(t\right)\) and velocity \(v\left(t\right)\) according to the present state.

(1) The actions of male mayflies

In the context of MOA, males change their location according to their individual speed. \({x}_{i}\) denotes the position of the \({i}^{th}\) male MF at the \({t}^{th}\) time step in the search space.

$${x}_{i}\left(t+1\right)={x}_{i}\left(t\right)+{v}_{i}\left(t+1\right)$$

(1)

The male MF actively engages in exploration and exploitation from the initial iteration. During velocity updating, the MF considers its existing fitness value \(f\left({x}_{i}\right)\), while \(f\left({x}_{hi}\right)\) is the best fitness value witnessed on its trajectory in the past. The male alters its velocity according to Eq. (2) if \(f\left({x}_{i}\right)\) is higher than \(f\left({x}_{hi}\right)\). The update is defined by three major factors: its existing speed, the separation between its existing and best locations, and the best trajectory in the past. This allows the males to refine their movement strategy once they observe fitness progress.

$${v}_{i}\left(t+1\right)=g\cdot{v}_{i}\left(t\right)+{a}_{1}{e}^{-\beta {r}_{p}^{2}}\left[{x}_{hi}-{x}_{i}\left(t\right)\right]+{a}_{2}{e}^{-\beta {r}_{g}^{2}}\left[{x}_{g}-{x}_{i}\left(t\right)\right]$$

(2)

The variable \(g\) descends linearly from a maximum to a minimum value, while \({a}_{1}\), \({a}_{2}\), and \(\beta\) are weight parameters. The variables \({r}_{p}\) and \({r}_{g}\) measure the Cartesian distance of an individual from its prior best placement and from the swarm's best location, respectively. The distance is calculated as the second norm of the distance array in Cartesian space, as follows:

$$\left|\left|{x}_{i}-{x}_{j}\right|\right|=\sqrt{{\sum}_{k=1}^{n}({x}_{ik}-{x}_{jk}{)}^{2}}$$

(3)

The male MF uses a random dance coefficient \(d\) to update its velocity from the present value once the fitness value \(f\left({x}_{i}\right)\) is less than \(f\left({x}_{hi}\right)\). \({r}_{i}\) is a uniformly distributed random value in \([-1,1]\).

$${v}_{i}\left(t+1\right)=g\cdot{v}_{i}\left(t\right)+d\cdot{r}_{i}$$

(4)
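A minimal sketch of the male-mayfly update in Eqs. (1)–(4), assuming a minimization problem; the parameter values (g, a1, a2, beta, d) and the toy sphere objective are illustrative choices rather than values prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_male(x, v, x_best, x_global, f, f_best,
                g=0.8, a1=1.0, a2=1.5, beta=2.0, d=0.1):
    """One male-mayfly step following Eqs. (1)-(4)."""
    if f(x) > f_best:                      # worse than personal best: attracted (Eq. 2)
        rp = np.linalg.norm(x - x_best)    # distance to personal best (Eq. 3)
        rg = np.linalg.norm(x - x_global)  # distance to swarm best
        v = (g * v
             + a1 * np.exp(-beta * rp**2) * (x_best - x)
             + a2 * np.exp(-beta * rg**2) * (x_global - x))
    else:                                  # better than personal best: random dance (Eq. 4)
        v = g * v + d * rng.uniform(-1, 1, size=x.shape)
    return x + v, v                        # position update (Eq. 1)

# toy usage on the sphere function
f = lambda z: float(np.sum(z**2))
x, v = rng.normal(size=3), np.zeros(3)
x_new, v_new = update_male(x, v, x_best=x.copy(), x_global=np.zeros(3), f=f, f_best=f(x))
```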

(2) The actions of female mayflies

Compared to male MFs, female MFs show different behaviours. They actively seek males for breeding rather than congregating in swarms. Let \({y}_{i}\left(t\right)\) be the existing location of the \({i}^{th}\) female in the search space at time \(t\); its location is updated by adding the speed \({v}_{i}\left(t+1\right)\) to the present location as follows:

$${y}_{i}\left(t+1\right)={y}_{i}\left(t\right)+{v}_{i}\left(t+1\right)$$

(5)

Different techniques are used to update females’ speed. Wingless females usually have a lifespan of 1–7 days. They exhibit a sense of urgency in locating males for breeding and mating. In response to the actions and characteristics of the chosen male MF, the females adapt their velocity.

When \(f\left({y}_{i}\right)>f\left({x}_{i}\right)\), the \({i}^{th}\) female MF uses Eq. (6) to update its velocity. Here, the speed is adjusted by an additional constant \({a}_{3}\), and \({r}_{mf}\) is the Cartesian distance between the male and the female.

$${v}_{i}\left(t+1\right)=g.{v}_{i}\left(t\right)+{a}_{3}{e}^{-\beta{r}_{mf}^{2}}\left[{x}_{i}\left(t\right)-{y}_{i}\left(t\right)\right]$$

(6)

If \(f\left({y}_{i}\right)\le f\left({x}_{i}\right)\), the female alters its velocity using a random dance coefficient \(fl\). \({r}_{2}\) is a random number within \([-1,1]\).

$${v}_{i}\left(t+1\right)=g\cdot{v}_{i}\left(t\right)+fl\cdot{r}_{2}$$

(7)
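A corresponding sketch of the female update in Eqs. (5)–(7), under the same illustrative assumptions as the male update above; the values of a3 and fl are example choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def update_female(y, v, x_male, f, g=0.8, a3=1.5, beta=2.0, fl=0.1):
    """One female-mayfly step following Eqs. (5)-(7)."""
    if f(y) > f(x_male):                    # paired male is fitter: move towards him (Eq. 6)
        rmf = np.linalg.norm(x_male - y)    # Cartesian distance to the paired male
        v = g * v + a3 * np.exp(-beta * rmf**2) * (x_male - y)
    else:                                   # otherwise perform a random dance (Eq. 7)
        v = g * v + fl * rng.uniform(-1, 1, size=y.shape)
    return y + v, v                         # position update (Eq. 5)
```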

(3) Mayflies mating

Most male and female MFs participate in mating, which leads to offspring production. The offspring inherit qualities from their parents and undergo random evolutionary changes, where \(L\) is a set of random numbers drawn from a Gaussian distribution.

$$offsprin{g}_{1}=L*male+\left(1-L\right)*female$$

(8)

$$offsprin{g}_{2}=L*female+\left(1-L\right)*male$$

(9)

(4) Mayflies variation

To overcome premature convergence, in which the obtained optimum may be a local rather than a global one, a normally distributed random perturbation is introduced into the mutation process for offspring MFs. This can be mathematically modelled as follows:

$$offsprin{g}_{n}=offsprin{g}_{n}+\sigma\cdot N\left(0,1\right)$$

(10)

In Eq. (10), \(\sigma\) indicates the standard deviation, and \(N(0,1)\) is the standard normal distribution with a mean of \(0\) and variance of \(1\). The number of mutated individuals is set to approximately 5% of the male MFs, rounded to the nearest whole number.
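The following sketch combines the crossover of Eqs. (8)–(9) with the mutation of Eq. (10); the 5% mutation fraction follows the text, while sigma and the population size are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(2)

def mate(male, female):
    """Crossover of Eqs. (8)-(9): L is drawn from a Gaussian distribution."""
    L = rng.normal(size=male.shape)
    offspring1 = L * male + (1 - L) * female
    offspring2 = L * female + (1 - L) * male
    return offspring1, offspring2

def mutate(offspring, sigma=0.1):
    """Mutation of Eq. (10): add a normally distributed perturbation."""
    return offspring + sigma * rng.normal(size=offspring.shape)

# roughly 5% of the male population is mutated, rounded to the nearest integer
n_males = 40
n_mutants = int(round(0.05 * n_males))
```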

The fitness function (FF) considers the classification outcome and the number of features chosen: it seeks to minimize the classification error rate together with the size of the selected feature subset. Therefore, the following FF is used to assess individual solutions.

$$Fitness=\alpha*ErrorRate+\left(1-\alpha\right)*\frac{\#SF}{\#All\_F}$$

(11)

In Eq. (11), \(ErrorRate\) denotes the classification error rate, evaluated as the ratio of misclassified samples to the total number of classifications made and lying within [0,1]. \(\#SF\) refers to the number of features chosen, and \(\#All\_F\) denotes the overall number of attributes in the dataset. \(\alpha\) controls the trade-off between classification quality and subset length.
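A minimal sketch of the fitness in Eq. (11); the wrapped classifier (a k-NN), the cross-validation scheme, and the alpha value are illustrative assumptions, not choices stated in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fs_fitness(mask, X, y, alpha=0.99):
    """Fitness of Eq. (11): weighted sum of error rate and selected-feature ratio."""
    selected = np.flatnonzero(mask)
    if selected.size == 0:               # an empty subset is the worst possible solution
        return 1.0
    clf = KNeighborsClassifier(n_neighbors=5)
    acc = cross_val_score(clf, X[:, selected], y, cv=5).mean()
    error_rate = 1.0 - acc
    return alpha * error_rate + (1 - alpha) * selected.size / X.shape[1]
```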

Cyberthreat detection using SDAE

Next, the SDAE technique recognizes and classifies cyber threats36. This model is chosen because it can learn robust feature representations while effectually denoising input data. Unlike conventional models, SDAE can handle noisy and incomplete data, making it ideal for real-world cybersecurity scenarios where data may be corrupted or sparse. Its sparse structure forces the network to concentrate on the most significant features, improving its generalization and ability to detect complex threats. Additionally, SDAE is highly effective in detecting subtle cyberattack patterns, which other models may overlook. The method’s capability to learn hierarchical features from raw data ensures it can detect known and novel threats. Moreover, its unsupervised learning capability reduces the requirement for large labelled datasets, making it more flexible and scalable for various cybersecurity tasks. Figure 4 represents the architecture of SDAE.

Fig. 4

Architecture of the SDAE model.

An AE consists of an encoder and a decoder. The encoder maps higher-dimensional input instances to a lower-dimensional abstract representation to achieve sample compression and reduce dimensionality, while the decoder transforms the lower-dimensional representation back into the expected output, thereby reconstructing the input. An AE exhibits an effective non-linear feature-extraction capability: the obtained feature vectors represent the structure of the input data and capture its non-linear features.

The encoder and decoder are defined as follows:

$$h=f\left({W}_{1}x+{\lambda}_{1}\right)$$

(12)

$$y=g\left({W}_{2}h+{\lambda}_{2}\right)$$

(13)

Whereas \(y\) and \(x\) signify the output and input data, respectively; \(h\) signifies the dimensionality-reduced representation; \({W}_{1}\) and \({W}_{2}\) represent the weights of the encoder and decoder networks; \({\lambda}_{1}\) and \({\lambda}_{2}\) signify the unit biases of the hidden layer (HL) and the output layer, respectively; and \(f\) and \(g\) represent the activation functions.

Compared with the input data, \(h\) has a reduced dimension but still retains the key information of the input, so analysing and processing \(h\) decreases the computational cost. AE feature extraction makes it possible to deal with non-linear data structures, such as the load curves encountered in real-world use.

To rebuild the input, the objective of the AE is to minimize the reconstruction error, which quantifies the closeness between output and input. The reconstruction error \({L}_{AE}\) is defined as follows:

$${L}_{AE}=\frac{1}{n}{\sum}_{i=1}^{n}({x}_{i}-{y}_{i}{)}^{2}$$

(14)
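A minimal numpy sketch of the encoder and decoder mappings (Eqs. 12–13) and the reconstruction error (Eq. 14); the sigmoid activation, layer sizes, and random weights are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden = 20, 8
W1, b1 = rng.normal(scale=0.1, size=(n_hidden, n_in)), np.zeros(n_hidden)   # encoder weights
W2, b2 = rng.normal(scale=0.1, size=(n_in, n_hidden)), np.zeros(n_in)       # decoder weights

def autoencode(x):
    h = sigmoid(W1 @ x + b1)   # Eq. (12): low-dimensional representation
    y = sigmoid(W2 @ h + b2)   # Eq. (13): reconstruction of the input
    return h, y

X = rng.random((100, n_in))
recon = np.array([autoencode(x)[1] for x in X])
L_AE = np.mean((X - recon) ** 2)   # Eq. (14): mean squared reconstruction error
```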

The enhancement of the AE is achieved by adding constraints beyond \({L}_{AE}\), namely noise coding and a sparse constraint, which yields the SDAE. The sparse constraint suppresses a portion of the neurons in the HL to improve computational efficiency and network speed; in this way, even when numerous neurons in the HL express high-dimensional data, the network can still extract the features and form of an instance. Noise coding adds noise to the input dataset to improve the robustness of the AE and force it to absorb the vital features of the input data.

Let \({a}_{h}^{\left(2\right)}\left({x}^{i}\right)\) denote the activation degree of neuron \(h\); the average activation \(\widehat{\rho}\) of the neuron over all training samples is expressed as follows:

$$\widehat{\rho}=\frac{1}{m}{\sum}_{i=1}^{m}{a}_{h}^{\left(2\right)}\left({x}^{i}\right)$$

(15)

The divergence of KL is applied to assess the sparsity of neurons. Its formulation is expressed below:

$$\sum KL(\rho\Vert\widehat{\rho})=\sum\left[\rho\log\frac{\rho}{\widehat{\rho}}+\left(1-\rho\right)\log\frac{1-\rho}{1-\widehat{\rho}}\right]$$

(16)

Here, \(\rho\) refers to the sparsity parameter, which is close to \(0\).

The noised input \(\widehat{x}\) is produced by randomly injecting noise into the original input \(x\), and the corresponding output is \(\widehat{y}\). If \(\widehat{y}\) reconstructs \(x\) to a high degree, the AE possesses effective robustness. The loss \({L}_{DAE}\) of the denoising AE is expressed below:

$${L}_{DAE}=\frac{1}{n}\sum\limits_{i=1}^{n}({x}_{i}-{\widehat{y}}_{i})^{2}+\frac{\lambda}{2}\left(\Vert{W}_{1}\Vert_{F}^{2}+\Vert{W}_{2}\Vert_{F}^{2}\right)$$

(17)

Here, \(\lambda\) denotes the constraint of noise weight.

Finally, \({L}_{SDAE}\) of SDAE is modified as

$${L}_{SDAE}=\frac{1}{n}{\sum}_{i=1}^{n}({x}_{i}-{\widehat{y}}_{i})^{2}+\beta{\sum}_{m=1}^{n}KL(\rho\Vert\widehat{\rho})+\frac{\lambda}{2}\left(\Vert{W}_{1}\Vert_{F}^{2}+\Vert{W}_{2}\Vert_{F}^{2}\right)$$

(18)

Here, \(\beta\) refers to the weight co-efficient of the sparsity penalty.

The SDAE is trained by minimizing the loss function of Eq. (18) via gradient descent.
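A sketch of the SDAE objective in Eq. (18), assuming an encode_decode function that maps a batch to its hidden activations and reconstructions (as in the earlier numpy autoencoder sketch); the noise level, sparsity target rho, and the beta and lambda coefficients are illustrative values.

```python
import numpy as np

def kl_div(rho, rho_hat, eps=1e-8):
    """KL divergence of Eq. (16) between the target and average activations."""
    rho_hat = np.clip(rho_hat, eps, 1 - eps)
    return rho * np.log(rho / rho_hat) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))

def sdae_loss(X, encode_decode, W1, W2, rho=0.05, beta=3.0, lam=1e-4, noise=0.1):
    """Loss of Eq. (18): reconstruction + sparsity penalty + weight regularization."""
    rng = np.random.default_rng(4)
    X_noisy = X + noise * rng.normal(size=X.shape)         # noise coding
    H, Y = encode_decode(X_noisy)                          # hidden codes and reconstructions
    recon = np.mean((X - Y) ** 2)                          # rebuild the *clean* input
    rho_hat = H.mean(axis=0)                               # Eq. (15): average activations
    sparsity = beta * np.sum(kl_div(rho, rho_hat))         # Eq. (16) summed over hidden units
    reg = 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))  # Frobenius-norm penalty
    return recon + sparsity + reg
```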

Parameter optimizer

This work employs the HOA method for fine-tuning the hyperparameter included in the SDAE approach37. This methodology is chosen because it can effectively explore and exploit the solution space. HOA replicates the behaviour of hikers seeking the highest peak, effectively fine-tuning hyperparameters to improve the model’s performance. Unlike other optimization techniques, HOA strikes a balance between exploration and exploitation, preventing premature convergence and improving the search for optimal solutions. Its simplicity and adaptability allow it to work well with complex models and massive datasets, making it appropriate for various applications, including cybersecurity. Additionally, HOA is computationally efficient, which assists in real-time optimization tasks, reducing the time required for hyperparameter tuning. This makes it a robust choice for improving the accuracy and robustness of ML models. Figure 5 specifies the architecture of the HOA model.

Fig. 5

Workflow of the HOA technique.

This section discusses the mathematical background and inspiration of HOA. In addition, it defines the HOA technique and its computational complexity. The HOA is inspired by the hiker’s experience of attempting to summit mountain rocks, hills, or peaks.

High, steep trails and terrains slow hikers down and ultimately lengthen the hike.

Hikers equipped with an awareness of the terrain’s geography can estimate the time taken to reach the peak. This is similar to an agent searching for the global or local optima of an optimization problem. Furthermore, in the search for the global optimum, the agent may become stuck in the search area because of the complexity of the optimization problem, which can extend the time it takes to locate the global optimum, much like what hikers experience during a hike.

The mathematical background of HOA is inspired by Tobler’s Hiking Function (THF), an exponential equation that defines a hiker’s speed, which considers the slope or steepness of the trail or terrain.

The THF is represented as follows:

$${\mathcal{W}}_{i,t}=6{e}^{-3.5\left|{S}_{i,t}+0.05\right|}$$

(19)

In Eq. (19), \({\mathcal{W}}_{i,t}\) refers to the velocity of hiker \(i\) (viz., in \(km/h\)) at the \({t}^{th}\) time or iteration, and \({S}_{i,t}\) denotes the slope of the trail or terrain.

$${S}_{i,t}=\frac{dh}{dx}=\tan{\theta}_{i,t},$$

(20)

In Eq. (20), \(dh\) and \(dx\) are the elevation difference and the hiker’s travelled distance, respectively. \({\theta}_{i,t}\) refers to the inclination angle of the terrain or trail, which lies in \(\left[0^{o},50^{o}\right]\).
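A small sketch of Tobler's Hiking Function (Eqs. 19–20); the sample inclination angles are arbitrary illustrations.

```python
import numpy as np

def tobler_speed(theta_deg):
    """Hiker velocity (km/h) from the terrain inclination angle via Eqs. (19)-(20)."""
    slope = np.tan(np.radians(theta_deg))              # Eq. (20): S = dh/dx = tan(theta)
    return 6.0 * np.exp(-3.5 * np.abs(slope + 0.05))   # Eq. (19): Tobler's Hiking Function

for angle in (0, 10, 30, 50):                          # illustrative inclination angles
    print(f"{angle:2d} deg -> {tobler_speed(angle):.2f} km/h")
```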

HOA exploits the advantages of the hikers’ social thinking and the cognitive capabilities of individual hikers. The THF, the lead hiker’s location, and the hiker’s actual location determine the hiker’s updated velocity from its initial velocity. Therefore, the present velocity of the \({i}^{th}\) hiker is as follows:

$${\mathcal{W}}_{i,t}={\mathcal{W}}_{i,t-1}+{\gamma}_{i,t}\left({\beta}_{best}-{\alpha}_{i,t}{\beta}_{i,t}\right),$$

(21)

In Eq. (21), \({\gamma}_{i,t}\) denotes a uniformly distributed number in [0,1]; \({\mathcal{W}}_{i,t}\) and \({\mathcal{W}}_{i,t-1}\) are the present and initial velocities of the \({i}^{th}\) hiker. \({\beta}_{best}\) indicates the location of the lead hiker, and \({\alpha}_{i,t}\) is the sweep factor (SF) of the \({i}^{th}\) hiker, which lies within [1,3]. The SF ensures that a hiker does not stray far from the lead hiker, so it can perceive where the lead hiker is headed and receive signals from them.

The updated position \({\beta}_{i,t+1}\) of hiker \(i\) is then obtained by adding the hiker velocity from Eq. (21):

$${\beta}_{i,t+1}={\beta}_{i,t}+{\mathcal{W}}_{i,t}$$

(22)

In metaheuristic algorithms such as the HOA, the agent’s initial position is an essential factor that considerably affects the possibility of a feasible solution and the speed at which convergence is obtained. The HOA performs the random initialization method to initialize the agent location, although alternate methods, such as problem-specific initialization or heuristic-based approaches, also exist.

The initialization of hiker locations \({\beta}_{i,t}\) is defined by the lower and upper bounds \({\varphi}_{j}^{1}\) and \({\varphi}_{j}^{2}\) of the solutions as follows:

$${\beta}_{i,t}={\varphi}_{j}^{1}+{\delta}_{j}\left({\varphi}_{j}^{2}-{\varphi}_{j}^{1}\right),$$

(23)

In Eq. (23), \({\delta}_{j}\) refers to a uniformly distributed random number in \(\left[0,1\right]\), and \({\varphi}_{j}^{1}\) and \({\varphi}_{j}^{2}\) are the lower and upper boundaries of the \({j}^{th}\) parameter. The initialization profoundly affects the distance between the lead hiker and the other hikers. Moreover, the trail’s slope influences the hikers’ velocity and the HOA’s exploitative and exploratory behaviours.

Increasing the SF range encourages an exploitation stage within the HOA. On the other hand, once the SF range is decreased, the HOA leans towards an exploratory stage. In addition, reducing the trail’s inclination angle tends to lead to the exploitation stage. These factors jointly shape the HOA’s performance and behaviour in resolving optimization problems.
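A compact sketch of the HOA update loop built from Eqs. (21)–(23); the population size, iteration count, bound handling, and the toy sphere objective are illustrative assumptions, and in the paper the objective would be the SDAE classification error rate of Eq. (24).

```python
import numpy as np

rng = np.random.default_rng(5)

def hoa(objective, dim, lb, ub, n_hikers=20, n_iter=100):
    """Minimal Hiking Optimization Algorithm loop following Eqs. (21)-(23)."""
    lb, ub = np.full(dim, lb, float), np.full(dim, ub, float)
    pos = lb + rng.random((n_hikers, dim)) * (ub - lb)     # Eq. (23): random initialization
    vel = np.zeros_like(pos)
    fit = np.apply_along_axis(objective, 1, pos)
    for _ in range(n_iter):
        best = pos[fit.argmin()]                           # lead hiker (beta_best)
        gamma = rng.random((n_hikers, 1))                  # uniform in [0,1]
        alpha = rng.uniform(1, 3, (n_hikers, 1))           # sweep factor in [1,3]
        vel = vel + gamma * (best - alpha * pos)           # Eq. (21): velocity update
        pos = np.clip(pos + vel, lb, ub)                   # Eq. (22): position update
        fit = np.apply_along_axis(objective, 1, pos)
    return pos[fit.argmin()], fit.min()

# toy usage on the sphere function
best_x, best_f = hoa(lambda z: float(np.sum(z**2)), dim=5, lb=-5, ub=5)
```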

The HOA derives an FF to attain enhanced classification outcomes. The FF assigns a positive value that characterizes how good a candidate solution is; here, reducing the classification error rate is regarded as the FF.

$$fitness\left({x}_{i}\right)=ClassifierErrorRate\left({x}_{i}\right)=\frac{No.\;of\;misclassified\;samples}{Total\;No.\;of\;samples}\times 100$$

(24)

Model explanation of XAI using LIME

Finally, the LXAIDM-CTLSN method incorporates the XAI method LIME for better understanding and explainability of the black-box technique, enabling superior classification of cyberattacks. Interpretability is introduced into cyber threat detection by combining LIME with the detection model38. LIME delivers explanations for individual predictions made by the method, presenting insights into the features contributing to every identification decision.

LIME aids in understanding the model, delivering insights into the predictions made by a given method. By interpreting the model’s reasoning, users can gain confidence in its predictions. LIME achieves this by explaining individual instances, which helps assess the model’s behaviour. The LIME formulation is shown in Eq. (25); its goal is to minimize the loss function \(L\) while ensuring that the explanation closely resembles the original behaviour of the model, where \(\phi\left(x\right)\) denotes the explanation for instance \(x\) produced by the method \(\theta\).

$$\phi\left(x\right)=\underset{t}{\text{argmin}}\left[L\left({\theta}_{t}\left(x\right),g\right)+\varOmega\left({\phi}_{t}\right)\right]$$

(25)

Here, \(\theta\) signifies the interpretable method in the class \(G\), and \(g\) denotes the family of methods. \(\psi\left(x\right)\) represents the proximity measure of the neighbourhood employed to build the explanation for instance \(x\). \(\varOmega\left({\phi}_{t}\right)\) denotes the complexity of the model, such as the number of features included. \(P\) signifies the probability that \(x\) belongs to a specific class.
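A hedged usage sketch with the open-source lime package; the synthetic data, hypothetical feature and class names, and the random-forest classifier (standing in here for the trained SDAE detector) are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(6)

# Stand-in data and detector: in the paper these would be the normalized,
# MOA-selected features and the HOA-tuned SDAE classifier.
X_train = rng.random((500, 10))
y_train = rng.integers(0, 2, 500)          # 0 = benign, 1 = attack (hypothetical labels)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=[f"feat_{i}" for i in range(10)],   # hypothetical feature names
    class_names=["benign", "attack"],
    mode="classification",
)
exp = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=5)
print(exp.as_list())    # per-feature contributions to this single prediction
```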




California Advances Bill Regulating AI Companions



A California bill aimed at regulating the use of artificial intelligence (AI) companion chatbots cleared a key legislative hurdle this week, as lawmakers sought to rein in these bots’ influence on the mental health of users.

Senate Bill 243, which advanced to the Assembly Committee on Privacy and Consumer Protection, marks one of the first major attempts in the U.S. to regulate AI companions, particularly regarding their impact on minors.

“Chatbots today exist in a federal vacuum. There has been no federal leadership — quite the opposite — on this issue, and has left the most vulnerable among us to fall prey to predatory practices,” said the bill’s lead author, Sen. Steve Padilla, D-San Diego, at a press conference.

“Technological innovation is crucial, but our children cannot be used as guinea pigs to test the safety of new products in real time,” Padilla continued. “The stakes are too high.”

Bill Provisions

The bill targets the rising popularity of AI chatbots marketed as emotional buddies, which have attracted millions of users, including teenagers. Padilla cited mounting alarm over incidents involving chatbot misuse.

In Florida, 14-year-old Sewell Setzer committed suicide after forming a romantic and emotional relationship with a chatbot. When Setzer said he was thinking about suicide, the chatbot did not provide resources to help him, his mother, Megan Garcia, said at the press conference.

Garcia has since filed a lawsuit against Character.ai, alleging that the company used “addictive” design features in its chatbot and encouraged her son to “come home” seconds before he killed himself. In May, a federal judge rejected Character.ai’s defense that its chatbots are protected by the First Amendment regarding free speech.

SB 243 would require chatbot companies to implement several safeguards:

  • Ban reward systems that encourage compulsive use.
  • Implement and publish a protocol for addressing thoughts of suicide and for directing users to suicide prevention hotlines.
  • Send reminders to the user at least every three hours that the chatbot is not human.
  • Annually report to the Office of Suicide Prevention how many times users have expressed suicidal thoughts, among other metrics, and publish the findings on the company’s website.
  • Regularly audit the chatbots using an independent third party and make the findings publicly available.

Opposition to the Proposal

The technology industry opposes the bill, arguing that the definition of a “companion chatbot” is overbroad and would include general purpose AI models, according to a July 1 letter sent to lawmakers by TechNet.

Under the bill, a “companion chatbot” is defined as an AI system with a natural language interface that “provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs.”

“There are several vague, undefined elements of the definition, which are difficult to determine whether certain models would be included in the bill’s scope,” wrote Robert Boykin, TechNet’s executive director for California and the Southwest.

“For example, what does it mean to ‘meet a user’s social needs,’ would a model that provides responses as part of a mock interview be meeting a user’s social needs?” Boykin asked.

Asked for his response to the industry’s objections, Padilla said tech companies themselves are being overly broad in their opposition.

The bottom line is that “we can capture the positive benefits of the deployment of this technology. At the same time, we can protect the most vulnerable among us,” Padilla said. “I reject the premise that it has to be one or the other.”

Read more: Senate Shoots Down 10-Year Ban on State AI Regulations

Read more: Amazon Executive Says Government Regulation of AI Could Limit Progress

Read more: What Amazon, Meta, Uber, Anthropic and Others Want in the US AI Action Plan




The End of the Internet As We Know It



The internet as we know it runs on clicks. Billions of them. They fuel ad revenue, shape search results, and dictate how knowledge is discovered, monetized, and, at times, manipulated. But a new wave of AI powered browsers is trying to kill the click. They’re coming for Google Chrome.

On Wednesday, the AI search startup Perplexity officially launched Comet, a web browser designed to feel more like a conversation than a scroll. Think of it as ChatGPT with a browser tab, but souped up to handle your tasks, answer complex questions, navigate context shifts, and satisfy your curiosity all at once.

Perplexity pitches Comet as your “second brain,” capable of actively researching, comparing options, making purchases, briefing you for your day, and analyzing information on your behalf. The promise is that it does all this without ever sending you off on a wild hyperlink chase across 30 tabs, aiming to collapse “complex workflows into fluid conversations.”

“Agentic AI”

The capabilities of browsers like Comet point to the rapid evolution of agentic AI. This is a cutting-edge field where AI systems are designed not just to answer questions or generate text, but to autonomously perform a series of actions and make decisions to achieve a user’s stated goal. Instead of you telling the browser every single step, an agentic browser aims to understand your intent and execute multi-step tasks, effectively acting as an intelligent assistant within the web environment. “Comet learns how you think, in order to think better with you,” Perplexity says.

Comet’s launch throws Perplexity into direct confrontation with the biggest gatekeeper of the internet: Google Chrome. For decades, Chrome has been the dominant gateway, shaping how billions navigate the web. Every query, every click, every ad. It’s all been filtered through a system built to maximize user interaction and, consequently, ad revenue. Comet is trying to blow that model up, fundamentally challenging the advertising-driven internet economy.

And it’s not alone in this ambitious assault. OpenAI, the maker of ChatGPT, is reportedly preparing to unveil its own AI powered web browser as early as next week, according to Reuters. This tool will likely integrate the power of ChatGPT with Operator, OpenAI’s proprietary web agent. Launched as a research preview in January 2025, OpenAI’s Operator is an AI agent capable of autonomously performing tasks through web browser interactions. It leverages OpenAI’s advanced models to navigate websites, fill out forms, place orders, and manage other repetitive browser-based tasks.

Operator is designed to “look” at web pages like a human, clicking, typing, and scrolling, aiming to eventually handle the “long tail” of digital use cases. If integrated fully into an OpenAI browser, it could create a full-stack alternative to Google Chrome and Google Search in one decisive move. In essence, OpenAI is coming for Google from both ends: the browser interface and the search functionality.

Goodbye clicks. Hello cognition

Perplexity’s pitch is simple and provocative: the web should respond to your thoughts, not interrupt them. “The internet has become humanity’s extended mind, while our tools for using it remain primitive,” the company stated in its announcement, advocating for an interface as fluid as human thought itself.

Instead of navigating through endless tabs and chasing hyperlinks, Comet promises to run on context. You can ask it to compare insurance plans. You can ask it to summarize a confusing sentence or instantly find that jacket you forgot to bookmark. Comet promises to “collapse entire workflows” into fluid conversations, turning what used to be a dozen clicks into a single, intuitive prompt.

If that sounds like the end of traditional Search Engine Optimization (SEO) and the death of the familiar “blue links” of search results, that’s because it very well could be. AI browsers like Comet don’t just threaten individual publishers and their traffic; they directly threaten the very foundation of Google Chrome’s ecosystem and Google Search’s dominance, which relies heavily on directing users to external websites.

Google’s Grip is Slipping

Google Search has already been under considerable pressure from AI native upstarts like Perplexity and You.com. Its own attempts at deeper AI integration, such as the Search Generative Experience (SGE), have drawn criticism for sometimes producing “hallucinations” (incorrect information) and awkward summaries. Simultaneously, Chrome, Google’s dominant browser, is facing its own identity crisis. It’s caught between trying to preserve its massive ad revenue pipeline and responding to a wave of AI powered alternatives that don’t rely on traditional links or clicks to deliver useful information.

Comet doesn’t just sidestep the old ad driven model, it fundamentally breaks it. There’s no need to sort through 10 blue links. No need to open 12 tabs to compare specifications, prices, or user reviews. With Comet, you just ask, and let the browser do the work.

OpenAI’s upcoming browser could deepen that transformative shift even further. If it is indeed designed to keep user interactions largely inside a ChatGPT-like interface instead of linking out, it could effectively create an entirely new, self-contained information ecosystem. In such a future, Google Chrome would no longer be the indispensable gateway for knowledge or commerce.

What’s at Stake: Redefining the Internet

If Comet or OpenAI’s browser succeed, the impact won’t be limited to just disrupting search. They will fundamentally redefine how the entire internet works. Publishers, advertisers, online retailers, and even traditional software companies may find themselves disintermediated—meaning their direct connection to users is bypassed—by AI agents. These intelligent agents could summarize their content, compare their prices, execute their tasks, and entirely bypass their existing websites and interfaces.

It’s a new, high-stakes front in the war for how humans interact with information and conduct their digital lives. The AI browser is no longer a hypothetical concept. It’s here.




Microsoft Lays Off Staff as Savings From AI Top $500 Million



Microsoft is ramping up internal use of artificial intelligence (AI) tools to cut costs and increase productivity, even as the company trims thousands of jobs across departments.

According to Bloomberg, Chief Commercial Officer Judson Althoff told employees in a recent presentation that AI is enhancing productivity across functions, including sales, customer service and software development, according to a person familiar with his remarks.

AI helped Microsoft save more than $500 million last year in its call centers alone and improved both employee and customer satisfaction, the person said.

The company is also using AI to manage interactions with smaller clients — an initiative that is still early-stage but already generating tens of millions of dollars in revenue, according to the same person.

Read more: Microsoft’s Nadella: AI Agents Serve as ‘Chiefs of Staff’

At the same time, Microsoft has announced job cuts of about 15,000 so far this year, with the latest round affecting customer-facing roles such as sales. The layoffs have raised concerns about AI displacing workers, a trend echoed across the technology sector.

Salesforce has relegated 30% of its internal work to AI, enabling it to reduce hiring for some positions.

Tech isn’t the only industry facing the potential impact of AI in the workplace. Ford, JPMorgan and other companies have warned of the possibility of deep job cuts as AI continues to advance.

Read more: Microsoft to Cut 3% of Workforce While Reducing Management Layers

Althoff said Microsoft’s AI tools, including its Copilot assistant, could make its sellers more effective. He said each seller is finding more leads, closing deals more quickly and generating 9% more revenue with Copilot’s help.

Microsoft said in April that its GitHub Copilot has 15 million users and noted that AI now generates 35% of the code for new products, helping speed up development.

Other technology companies are making similar moves: Executives at Alphabet and Meta have noted that AI is now responsible for writing substantial amounts of code.

Microsoft declined to comment.

Read more: Microsoft Cuts Nearly 9K Jobs in 2025’s 4th Round of Layoffs

 


