
AI Insights

xAI sues an ex-employee for allegedly stealing trade secrets about Grok

xAI doesn’t want its secret recipe for Grok to get out, and it’s filing a lawsuit to make sure of that. In a lawsuit filed earlier this week, xAI claimed that former employee Xuechen Li stole the company’s confidential info and trade secrets before joining the team at OpenAI.

Elon Musk’s artificial intelligence company also alleged that Li copied documents from an xAI company laptop to at least one of his personal devices. According to the suit, Li stole “cutting-edge AI technologies with features superior to those offered by ChatGPT and other competing products.” This confidential information could give rival companies an edge in the AI market and “could save OpenAI and other competitors billions in R&D dollars and years of engineering effort,” xAI said in the lawsuit. The company behind Grok accused Li of taking “extensive measures to conceal his misconduct,” including renaming files, compressing them before uploading them to his personal devices, and deleting his browser history.

The lawsuit added that Li asked xAI to buy back company shares that were given as part of his compensation package, totaling approximately $7 million, before leaving the company to join OpenAI. xAI is asking the court to issue a temporary restraining order that would force its former employee to surrender access to any personal devices or online storage services and return any confidential material to the company. On top of that, xAI wants to temporarily block Li from working at OpenAI or any other competitor until the company has recovered all of its trade secrets.

xAI’s lawsuit comes amid a major talent war between leading AI companies vying for top researchers. These researchers are highly sought after, with competitors offering pay packages of up to $250 million in attempts to poach them from their current employers. Beyond the AI talent war, Musk and xAI recently sued OpenAI and Apple, claiming the two companies are working together to maintain a monopoly on the AI market.




AI Insights

LifeLong Learning and TXST expand series on Artificial Intelligence



Dr. Marianne Reese, Founder and Director of LifeLong Learning, conceived of the AI series due to AI’s exponential growth and the need for the public to understand its uses and limitations.

“AI is a relatively new tool that is being used in ways the public is often unaware of,” Reese noted. “We all need to know more about this powerful technology, understand AI’s positive and concerning applications, and learn the skills necessary to scrutinize the information it generates.

“AI will become increasingly prevalent, so we need to be informed consumers as AI impacts politics, medicine, business, finance and other areas of our lives,” Reese said.

The AI Learning Series is led by Dr. Kimberly Conner, Digital Strategy Lead for Information Technology at Texas State. Conner’s role is to help demystify innovation and make technology approachable for students, staff and faculty. With a rare combination of expertise in law, education and IT, Dr. Conner bridges the gap between complex digital tools and the people who use them.

Almost 80 lifelong learners attended the AI Series Kickoff Event on Tuesday, Aug. 19.

The Sept. 3 class covers AI use of our personal data and AI-generated misinformation and scams.

The Sept. 17 class features a comparison of different AI services (e.g., ChatGPT, Gemini).

The Oct. 1 class covers practical AI tools for daily life, with an exploration of AI applications for communication and creative projects.

The Oct. 15 class covers AI reliability and accuracy, AI limitations and best practices for verification.

The Oct. 29 class covers AI for personal enrichment, such as enhancing hobbies and expanding personal interests.

The final class on Nov. 3 covers hands-on activities and features a closing presentation.

For more information, visit lllsanmarcos.org.





AI Insights

China Calls for Regulation of Investment in Artificial Intelligence



In a move reflecting a cautious strategic direction, China has called for curbing “excessive investment” and “random competition” in the artificial intelligence sector, despite its classification as a key driver of national economic growth and a critical competitive field with the United States.

Chang Kailin, a senior official at the National Development and Reform Commission, the country’s top economic planning body, said Beijing will take a coordinated, integrated approach to developing artificial intelligence across its provinces, leveraging each region’s advantages and local industrial resources to avoid duplicated effort. He warned against a “herd mentality” of investing without careful planning.

These statements come amid a contraction in China’s manufacturing sector for the fifth consecutive month, reflecting the pressures facing the world’s second-largest economy. Policymakers are trying to avoid repeating past mistakes like those in the electric vehicle sector, where overinvestment led to excess production capacity and subsequent deflationary pressure.

Chinese President Xi Jinping also warned last month against the rush of local governments towards artificial intelligence without proper planning, a clear indication of the Chinese leadership’s desire to regulate the pace of growth in this vital sector.

Despite these warnings, China continues to accelerate the development, application, and governance of artificial intelligence. The government revealed a new action plan last week aimed at boosting the sector, including significant support for private companies and encouragement for strong startups capable of competing globally, an effort the commission described as a pursuit of “dark horses” in the innovation race, an implicit reference to notable success stories like the Chinese company DeepSeek.

DeepSeek gained international fame earlier this year after launching a powerful, low-cost artificial intelligence model that competes with those of major American companies, igniting a wave of local and international interest in Chinese technologies.

Separately, a Bloomberg analysis showed that Chinese technology companies plan to install more than 115,000 artificial intelligence chips produced by the American company Nvidia in massive data centers being built in the desert regions of western China, indicating a continued effort to build out AI infrastructure despite regulatory constraints.

These steps come as Beijing seeks to balance support for technological innovation with reining in investment chaos, in an attempt to shape a more sustainable path for the growth of artificial intelligence within China’s broader economic vision.





AI Insights

A new research project is the first comprehensive effort to categorize all the ways AI can go wrong, and many of those behaviors resemble human psychiatric disorders



Scientists have suggested that when artificial intelligence (AI) goes rogue and starts to act in ways counter to its intended purpose, it exhibits behaviors that resemble psychopathologies in humans. That’s why they have created a new taxonomy of 32 AI dysfunctions so people in a wide variety of fields can understand the risks of building and deploying AI.

In new research, the scientists set out to categorize the risks of AI straying from its intended path, drawing analogies with human psychology. The result is “Psychopathia Machinalis” — a framework designed to illuminate the pathologies of AI, as well as how we can counter them. These dysfunctions range from hallucinating answers to a complete misalignment with human values and aims.



