
Tools & Platforms

A.I. as normal technology (derogatory)



I 100% agree that new knowledge can't be created directly by LLMs; it's a fair criticism and an important clarification.

I should have used clearer terms. I was referring to searching existing knowledge and information: finding relevant existing knowledge easily and associating it with the structure of what you are working on. Not to having an LLM somehow synthesize new original discoveries completely independently.

There is good reason to hope that LLMs will be useful to all of us. I didn't mean to imply that LLMs, on their own, will create useful things for us; they should be thought of as tools. LLM technology is already providing useful capabilities to many people in real-world scenarios.

In my field of engineering, I use LLMs in many very helpful ways. I don't use chatbots directly. LLMs help me automate building out existing patterns, and combine patterns across engineering designs, in ways that save me hours of work every day and let me create better designs. I can easily find existing solutions to problems I have, then have those solutions quickly incorporated into the structure of my own designs. This is what LLMs are great at, and it is where the statistical matching comes in.

The point I was making was that LLMs, incorporated and integrated along with traditional software and ways of organizing data, can be used by people in ways that are going to be super useful. Used by people being the key: a person is coming up with the ideas.

A chatbot is one type of software product. Other software implementations use LLMs under the covers, similar to how many apps use databases, and software that is useful will incorporate what LLMs are good at and avoid using them in ways that don't work.
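The "under the covers" comparison to databases can be sketched in a few lines: the application codes against a narrow interface, and the LLM sits behind it as a swappable backend, just as a database driver would. Everything here (the `Completer` interface, `summarize_ticket`, the canned test backend) is a hypothetical illustration, not any real API.

```python
from typing import Protocol


class Completer(Protocol):
    """Narrow interface the app codes against, the way it would a database driver."""

    def complete(self, prompt: str) -> str: ...


class CannedCompleter:
    """Stand-in backend for tests; a real backend would call an LLM service."""

    def __init__(self, responses: dict):
        self.responses = responses

    def complete(self, prompt: str) -> str:
        return self.responses.get(prompt, "")


def summarize_ticket(ticket_text: str, llm: Completer) -> str:
    # The app owns the prompt and the surrounding logic; the LLM is one component.
    return llm.complete(f"Summarize in one line: {ticket_text}")


llm = CannedCompleter({"Summarize in one line: Server down since 09:00": "Outage report"})
print(summarize_ticket("Server down since 09:00", llm))  # -> Outage report
```

The design choice this illustrates is that the LLM never drives the application: the person's code decides what to ask and what to do with the answer, and the backend can be swapped or stubbed without touching the rest of the program.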

This is why I was mentioning that we need to separate “AI” from the specific example of OpenAI/ChatGPT. Some apps and use cases will be useful, others maybe not.

And so criticisms of ChatGPT are criticisms of ChatGPT and how it's used or what it specifically does. Not of AI in general, necessarily, and not of the potential for great things. But 100% agreed that any discussion of the potential for good has to square with what the underlying technology actually does and doesn't do, which is very mathematical and technical.




Nvidia says GAIN AI Act would restrict competition, likens it to AI Diffusion Rule


Nvidia said on Friday the GAIN AI Act would restrict global competition for advanced chips, with effects on U.S. leadership and the economy similar to those of the AI Diffusion Rule, which put limits on the computing power countries could have.

Short for Guaranteeing Access and Innovation for National Artificial Intelligence Act, the GAIN AI Act was introduced as part of the National Defense Authorization Act and stipulates that AI chipmakers prioritize domestic orders for advanced processors before supplying them to foreign customers.

“We never deprive American customers in order to serve the rest of the world. In trying to solve a problem that does not exist, the proposed bill would restrict competition worldwide in any industry that uses mainstream computing chips,” an Nvidia spokesperson said.

If passed into law, the bill would enact new trade restrictions mandating that exporters obtain licenses and approval for shipments of silicon exceeding certain performance caps.

“It should be the policy of the United States and the Department of Commerce to deny licenses for the export of the most powerful AI chips, including such chips with total processing power of 4,800 or above and to restrict the export of advanced artificial intelligence chips to foreign entities so long as United States entities are waiting and unable to acquire those same chips,” the legislation reads.

The rules mirror some conditions under former U.S. President Joe Biden's AI Diffusion Rule, which allocated certain levels of computing power to allies and other countries.

The AI Diffusion Rule and the GAIN AI Act are attempts by Washington to prioritize American needs, ensuring domestic firms gain access to advanced chips while limiting China's ability to obtain high-end technology, amid fears that the country would use AI capabilities to supercharge its military.

Last month, President Donald Trump made an unprecedented deal with Nvidia to give the government a cut of its sales in exchange for resuming exports of banned AI chips to China.




Apple sued by authors over use of books in AI training



Technology giant Apple was accused by authors in a lawsuit on Friday of illegally using their copyrighted books to help train its artificial intelligence systems, part of an expanding legal fight over protections for intellectual property in the AI era.

The proposed class action, filed in the federal court in Northern California, said Apple copied protected works without consent and without credit or compensation.

“Apple has not attempted to pay these authors for their contributions to this potentially lucrative venture,” according to the lawsuit, filed by authors Grady Hendrix and Jennifer Roberson.

Apple and lawyers for the plaintiffs did not immediately respond to requests for comment on Friday.

The lawsuit is the latest in a wave of cases from authors, news outlets and others accusing major technology companies of violating legal protections for their works.

Artificial intelligence startup Anthropic on Friday disclosed in a court filing in California that it agreed to pay $1.5 billion to settle a class action from a group of authors who accused the company of using their books to train its AI chatbot Claude without permission.

Anthropic did not admit any liability in the accord, which lawyers for the plaintiffs called the largest publicly reported copyright recovery in history.

In June, Microsoft was hit with a lawsuit by a group of authors who claimed the company used their books without permission to train its Megatron artificial intelligence model. Meta Platforms and Microsoft-backed OpenAI also have faced claims over the alleged misuse of copyrighted material in AI training.

The lawsuit against Apple accused the company of using a known body of pirated books to train its “OpenELM” large language models.

Hendrix, who lives in New York, and Roberson, who lives in Arizona, said their works were part of the pirated dataset, according to the lawsuit.




Anthropic settles, Apple sued: Tech giant faces lawsuit over AI copyright dispute



Apple has been drawn into the growing legal battle over the use of copyrighted works in artificial intelligence training, after two authors filed a lawsuit in the United States accusing the technology giant of misusing their books.

Claims of pirated dataset use

The proposed class action, lodged on Friday in federal court in Northern California, alleges that Apple copied protected works without permission, acknowledgement or compensation. Authors Grady Hendrix and Jennifer Roberson claim their books were included in a dataset of pirated material that Apple allegedly used to train its “OpenELM” large language models.

The filing argues that Apple has failed to seek consent or provide remuneration, despite the commercial potential of its AI systems. Both the company and the authors’ legal representatives declined to comment when approached.

Part of a wider copyright battle?

The case adds to a mounting wave of litigation targeting technology firms over intellectual property in the AI age. Earlier this week, AI start-up Anthropic disclosed it had reached a $1.5 billion settlement with a group of authors who accused the company of using their books to develop its Claude chatbot without authorisation. The payout, which Anthropic agreed to without admitting liability, has been described by lawyers as the largest publicly reported copyright settlement to date.

Lawyers who represented the authors against Anthropic described the accord as unprecedented. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from pirate websites is wrong,” said Justin Nelson of Susman Godfrey.

The settlement is among the first to be reached in a wave of copyright lawsuits filed against AI firms, including Microsoft, OpenAI, Meta and Midjourney, over their use of proprietary online content. Some competitors have pre-emptively struck licensing deals with publishers to avoid litigation; Anthropic has not disclosed any such agreements.

(With inputs from Reuters)


