
AI Research

AI devours your information: It knows what you search for, do and upload — and uses that data


Artificial intelligence is a data devourer. To be effective, it has to be, and scarcity of the data it feeds on can be a serious problem, particularly for AI agents: conversational bots able to act on a user's behalf to make purchases, answer emails, and manage invoices and schedules, among dozens of other tasks. To do so, they need to know the person they are talking to, learn about their life, and intrude on their privacy, something they sometimes have permission to do. Big tech companies are already investigating how to tackle this problem on several fronts. But in the meantime, according to Hervé Lambert, global consumer operations manager at Panda Security, AI access to data poses risks of “commercial manipulation, exclusion, or even extortion.”

AI’s problematic relationship with private information has been proven by researchers at University College London and the Mediterranea University of Reggio Calabria in a study presented at the USENIX security symposium in Seattle. According to the report, AI web browser assistants execute widespread tracking, profiling, and personalization practices that raise serious privacy concerns.

During tests with a user profile invented by the researchers, AI web browser assistants shared search information with their servers, including banking and health data as well as the user’s IP address. All demonstrated the ability to infer attributes such as users’ age, sex, salary, and interests, and used that information to personalize responses, even across different browsing sessions. Only one assistant, Perplexity, showed no evidence of profiling or personalization.

“Although many people are aware that search engines and social media platforms compile information about them for targeted advertising, AI web browser assistants operate with unprecedented access to user online behavior in areas of their online life that should remain private. Even if they offer convenience, our findings show that sometimes, they do so at the cost of user privacy, without any transparency or consent and sometimes, in violation of privacy legislation and their company’s own terms of service. This collection and exchange of information is not trivial: in addition to the sale and exchange of data with third parties, in a world where mass hackings are frequent, there is no way of knowing what is happening with search history once it has been collected,” explains Anna Maria Mandalari, primary author of the study that was conducted by the UCL’s electronic and electrical engineering department.

Lambert agrees with the study’s conclusions. “Technology is collecting users’ data, even that which is personal, to train and improve intelligent and automatic learning models. This helps companies to offer — to put it diplomatically — more personalized services. But developing these new technologies, obviously, raises a host of questions and concerns about privacy and user consent. Ultimately, we don’t know how companies and their smart systems are using our personal data.”

Among the potential risks cited by Lambert are commercial and geopolitical manipulation, exclusion, extortion, and identity theft. These dangers exist even when users have given their consent, consciously or otherwise. “Platforms,” adds Lambert, “are updating their privacy policies and that’s a little suspicious. In fact, such updates — and this is important — include clauses that allow for the use of data.” But consumers, in the vast majority of cases, accept conditions without reading or thinking about them, to ensure continuity in the service or out of pure haste.

Google is one of the companies that recently changed its privacy terms to, according to an email sent to its users, “improve our services.” In that statement, it acknowledges using interactions with its Gemini AI applications, and it has launched a new function for those who wish to opt out: the so-called “temporary chat” feature, which deletes recent queries and prevents the company from using them “to personalize” future queries or “to train models.”

The user has to be proactive to protect themselves from these practices, deactivating the “keep activity” setting and managing and deleting Gemini app activity. If they fail to do so, their lives will be shared with the company. “A subset of uploads submitted starting September 2 — like files, videos, screens you ask about, and photos shared with Gemini — will also be used to help improve Google services for everyone,” states the corporation. It will also use audio recorded by the AI tools and data from Gemini Live recordings.

“As before, when Google uses your activity to improve its services (including training generative AI models), it gets help from human reviewers. To protect your privacy, we disconnect chats from your account before sending them to service providers,” explains the company in its statement, in which it admits that, even though it is disconnected from the user’s account, it uses and has used personal data (“As before”) and that it sells or shares it (“sending them to service providers”).

Marc Rivero, lead security researcher at Kaspersky, agrees on the risks involved with the dissemination of information, pointing to the use of WhatsApp data for AI: “It raises serious privacy concerns. Private messaging apps are one of the most sensitive digital environments for users, as they contain intimate conversations, personal data, and even confidential information. Allowing an AI tool to automatically access these messages without clear and explicit consent undermines user trust.”

He adds: “From the cybersecurity perspective, this is also troubling. Cyber criminals are taking advantage more and more of AI to widen their attacks on social engineering and collection of personal data. If those attackers find a way to exploit this kind of interaction, we could be facing a new path to fraud, identity theft, and other criminal activities.”

WhatsApp insists that “your personal messages with friends and family are off limits.” Its AI is trained through direct interaction with the artificial intelligence application and according to the company, “you have to take action to start the conversation by opening a chat or sending a message to the AI. Only you or a group participant can initiate this, not Meta or WhatsApp. Talking to an AI provided by Meta doesn’t link your personal WhatsApp account information on Facebook, Instagram, or any other apps provided by Meta.” Nonetheless, it does offer a warning: “What you send to Meta may be used to provide you with accurate responses or to improve Meta’s AI models, so don’t send messages to Meta with information you don’t want it to know.”

Storage and file transfer services have also come under scrutiny. The latest example came after the popular site WeTransfer modified its terms of service, a change widely read as a request for unlimited access to user data to improve future artificial intelligence systems. In response to consumer concerns about the possible free use of their documents and creations, the company was forced to reformulate the clause, offering the clarification: “To be extra clear: YES — your content is always your content. In fact, section 6.2 of our Terms of Service clearly states that you ‘own and retain all right, title, and interest, including all intellectual property rights, in and to the Content.’ YES — you’re granting us permission to ensure we can run and improve the WeTransfer service properly. YES — our terms are compliant with applicable privacy laws, including the GDPR [the European Union’s General Data Protection Regulation]. NO — we are not using your content to train AI models. NO — we do not sell your content to third parties.”

Given the proliferation of intelligent devices, which go far beyond conversational AI chats, Eusebio Nieva, technical director of Check Point Software for Spain and Portugal, advocates for regulations that guarantee transparency and explicit consent, security regulations for devices, and prohibition and restrictions on high-risk providers, as seen in the European regulation. “Incidents of violations of privacy underline the need for consumers, regulators, and companies to work together to guarantee security,” he says.

Lambert agrees and calls for users and companies to take responsibility in this new panorama. He rejects the idea that preventative regulation represents a step backward in development. “Protecting our users does not mean that we are going to slow down; it means that, from the outset of a project, we include privacy and digital footprint protection, thereby becoming more effective and efficient in protecting our most important assets, which are our users.”

Alternatives being researched by companies

Tech companies are aware of the problem generated by the use of personal data, not just because of the ethical and legal privacy conflicts, but also because, they say, limitations on access to such data are slowing the development of their systems.

Meta founder Mark Zuckerberg has directed the work of the company’s Superintelligence Lab toward “self-improving AI”: systems capable of increasing the performance of artificial intelligence through advances in hardware (particularly processors), in programming (including self-programming), and through the AI itself training the large language models on which it is based.

And it’s not just experiments based on synthetic data: tools and guidelines are also employed to adapt behavior to user needs. The startup Sakana AI has created a system called the Darwin Gödel Machine, in which an AI agent modifies its own code to improve its performance on the tasks it is assigned.

All these advances toward AI that surpasses human intelligence by overcoming obstacles such as data limitations also carry risks. Chris Painter, policy director at the non-profit AI research organization METR, warns that if AI accelerates the development of its own capabilities, it could also be used for hacking, weapons design, and human manipulation.

“The rise in geopolitical tensions, economic volatility and operational environments that are becoming more complex, alongside attacks that are carried out using AI, have left organizations more vulnerable to cyber threats,” says Agustín Muñoz-Grandes, director of Accenture Security in Spain and Portugal. “Cyber security can no longer be a last-minute fix. It should be integrated beginning with the design of every initiative using AI.”

Sign up for our weekly newsletter to get more English-language news coverage from EL PAÍS USA Edition




Has artificial intelligence finally passed the Will Smith spaghetti test? – Sky News

AI as a Researcher: First Peer-Reviewed Research Paper Written Without Humans

Artificial intelligence has crossed another significant milestone that challenges our understanding of what machines can achieve independently. For the first time in scientific history, an AI system has written a complete research paper that passed peer review at an academic conference without any human assistance in the writing process. This breakthrough could be a fundamental shift in how scientific research might be conducted in the future.

Historic Achievement

A paper produced by The AI Scientist-v2 passed the peer-review process at a workshop held at ICLR 2025, one of the most prestigious venues in machine learning. The paper was generated by an improved version of the original AI Scientist, called The AI Scientist-v2.

The accepted paper, titled “Compositional Regularization: Unexpected Obstacles in Enhancing Neural Network Generalization,” received impressive scores from human reviewers. Of the three papers submitted for review, one received ratings that placed it above the acceptance threshold. This breakthrough is a significant advancement as AI can now participate in the fundamental process of scientific discovery that has been exclusively human for centuries.

The research team from Sakana AI, working with collaborators from the University of British Columbia and the University of Oxford, conducted this experiment. They received institutional review board approval and worked directly with ICLR conference organizers to ensure the experiment followed proper scientific protocols.

How The AI Scientist-v2 Works

The AI Scientist-v2 owes this success to several major advances over the original system. Unlike its predecessor, it eliminates the need for human-authored code templates, can work across diverse machine learning domains, and employs a tree-search methodology to explore multiple research paths simultaneously.

The system operates through an end-to-end process that mirrors how human researchers work. It begins by formulating scientific hypotheses based on the research domain it is assigned to explore. The AI then designs experiments to test these hypotheses, writes the necessary code to conduct the experiments, and executes them automatically.

What makes this system particularly advanced is its use of agentic tree search methodology. This approach allows the AI to explore multiple research directions simultaneously, much like how human researchers might consider various approaches to solving a problem. This involves running experiments via agentic tree search, analyzing results, and generating a paper draft. A dedicated experiment manager agent coordinates this entire process to ensure that the research remains focused and productive.

The system also includes an enhanced AI reviewer component that uses vision-language models to provide feedback on both the content and visual presentation of research findings. This creates an iterative refinement process where the AI can improve its own work based on feedback, similar to how human researchers refine their manuscripts based on colleague input.

What Made This Research Paper Special

The accepted paper focused on a challenging problem in machine learning called compositional generalization. This refers to the ability of neural networks to understand and apply learned concepts in new combinations they have never seen before. The AI Scientist-v2 investigated novel regularization methods that might improve this capability.

Interestingly, the paper also reported negative results. The AI discovered that certain approaches it hypothesized would improve neural network performance actually created unexpected obstacles. In science, negative results are valuable because they prevent other researchers from pursuing unproductive paths and contribute to our understanding of what does not work.

The research followed rigorous scientific standards throughout the process. The AI Scientist-v2 conducted multiple experimental runs to ensure statistical validity, created clear visualizations of its findings, and properly cited relevant previous work. It formatted the entire manuscript according to academic standards and wrote comprehensive discussions of its methodology and findings.

The human researchers who supervised the project conducted their own thorough review of all three generated papers. They found that while the accepted paper was of workshop quality, it contained some technical issues that would prevent acceptance at the main conference track. This honest assessment demonstrates the current limitations while acknowledging the significant progress achieved.

Technical Capabilities and Improvements

The AI Scientist-v2 demonstrates several remarkable technical capabilities that distinguish it from previous automated research systems. The system can work across diverse machine learning domains without requiring pre-written code templates. This flexibility means it can adapt to new research areas and generate original experimental approaches rather than following predetermined patterns.

The tree search methodology is a significant innovation in AI research automation. Rather than pursuing a single research direction, the system can maintain multiple hypotheses simultaneously and allocate computational resources based on the promise each direction shows. This approach mirrors how experienced human researchers often maintain several research threads while focusing most effort on the most promising avenues.
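The allocation strategy described above can be illustrated with a toy best-first search, where each candidate research direction gets a promise score and the highest-scoring node in the frontier is expanded first. This is a minimal sketch for intuition only: the scoring function, branching scheme, and direction names are all illustrative assumptions, not Sakana AI's actual implementation.

```python
import heapq
import random

def evaluate(direction: str) -> float:
    """Stand-in for running an experiment and scoring a direction's promise."""
    random.seed(direction)  # deterministic toy score per direction name
    return random.random()

def tree_search(root: str, budget: int) -> list[str]:
    """Always expand the most promising direction first, within a fixed budget."""
    frontier = [(-evaluate(root), root)]  # max-heap via negated scores
    explored = []
    while frontier and len(explored) < budget:
        _, node = heapq.heappop(frontier)
        explored.append(node)
        # Branch the current direction into two hypothetical refinements.
        for suffix in ("/variant-a", "/variant-b"):
            child = node + suffix
            heapq.heappush(frontier, (-evaluate(child), child))
    return explored

paths = tree_search("compositional-regularization", budget=5)
print(paths)
```

The key design point mirrored here is that unpromising branches are never expanded: compute flows to whichever frontier node currently scores best, rather than being spread evenly across all directions.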

Another crucial improvement is the integration of vision-language models for reviewing and refining the visual elements of research papers. Scientific figures and visualizations are critical for communicating research findings effectively. The AI can now evaluate and improve its own data visualizations iteratively.

The system also demonstrates understanding of scientific writing conventions. It properly structures papers with appropriate sections, maintains consistent terminology throughout manuscripts, and creates logical flow between different parts of the research narrative. The AI shows awareness of how to present methodology, discuss limitations, and contextualize findings within existing literature.

Current Limitations and Challenges

Despite this historic achievement, several important limitations restrict the current capabilities of AI-generated research. The company said that none of its AI-generated studies passed its internal bar for ICLR conference track publication standards. This indicates that while the AI can produce workshop-quality research, reaching the highest tiers of scientific publication remains challenging.

The acceptance rates provide important context for evaluating this achievement. The paper was accepted at a workshop track, which typically has less strict standards than the main conference (workshop acceptance rates of around 60-70%, versus the 20-30% typical of main conference tracks). While this does not diminish the significance of the achievement, it suggests that producing truly groundbreaking research remains beyond current AI capabilities.

The AI Scientist-v2 also demonstrated some weaknesses that human researchers identified during their review process. The system occasionally made citation errors, attributing research findings to incorrect authors or publications. It also struggled with some aspects of experimental design that human experts would have approached differently.

Perhaps most importantly, the AI-generated research focused on incremental improvements rather than paradigm-shifting discoveries. The system appears more capable of conducting thorough investigations within established research frameworks than of proposing entirely new ways of thinking about scientific problems.

The Road Ahead

The successful peer review of AI-generated research is the beginning of a new era in scientific research. As foundation models continue improving, we can expect The AI Scientist and similar systems to produce increasingly sophisticated research that approaches and potentially exceeds human capabilities in many domains.

The research team anticipates that future versions will be capable of producing papers worthy of acceptance at top-tier conferences and journals. The logical progression suggests that AI systems may eventually contribute to breakthrough discoveries in fields ranging from medicine to physics to chemistry.

This development also raises important questions about research ethics and publication standards. The scientific community must develop new norms for handling AI-generated research, including when and how to disclose AI involvement and how to evaluate such work alongside human-generated research.

The transparency demonstrated by the research team in this experiment provides a valuable model for future AI research evaluation. By working openly with conference organizers and subjecting their AI-generated work to the same standards as human research, they have established important precedents for the responsible development of automated research capabilities.

The Bottom Line

The acceptance of an AI-written paper at a leading machine learning workshop is a significant advancement in AI capabilities. While the work is not yet at the level of a top-tier conference, it demonstrates a clear trajectory toward AI systems becoming serious contributors to scientific discovery. The challenge now lies not only in advancing technology but also in shaping the ethical and academic frameworks that will govern this new frontier of research.




Building human skills key to surviving AI-driven job disruption, say experts



The rise of artificial intelligence is already reshaping the global workforce, with experts warning that the ability to build skills such as judgment, empathy, adaptability and digital literacy will be essential to avoid being left behind.

As the technology evolves in waves, from automation to generative AI, agentic systems and eventually artificial general intelligence, millions risk losing not only their income but also their sense of purpose and identity.

Maha Hosain Aziz, professor at New York University and a member of the World Economic Forum’s Global Foresight Network, warned that the world rarely considers the broader social consequences of this disruption.

“We rarely connect the dots to what happens next – when millions lose not just income, but the anchor that work provides,” she wrote on the World Economic Forum’s platform.

“What happens when our education or years of work experience don’t matter as much any more? Many may face a grim choice: scramble to ‘learn AI’ to stay relevant – or drift into a new class, uncertain where they can fit in the AI economy.”

Ms Aziz outlined four waves of disruption, including traditional automation replacing routine jobs and generative AI transforming content creation and knowledge work.

Agentic AI is taking on multi-step tasks in areas such as HR, market research and IT, with the potential to replace mid-level managers.

By 2030, the world could see the rise of artificial general intelligence capable of most cognitive tasks.

“Each wave will displace another segment of the global working population,” Ms Aziz said.

“The challenge isn’t just how to re-employ people, but how to help them adapt to a future where their previous skills or identities may no longer be relevant. In a way, we’ve seen this before.”