
Nvidia and AMD to pay U.S. government 15% of China chip sale revenue in exchange for export licenses


“Export-revenue ‘exchange’ is unprecedented”

The flags of the United States and China displayed on printed circuit boards equipped with semiconductor chips. (Reuters-Yonhap News)

U.S. artificial intelligence (AI) semiconductor companies Nvidia and AMD have agreed to pay the U.S. government 15% of the revenue from their chip sales in China in exchange for permission to export those chips. In effect, the two companies struck a deal with U.S. President Donald Trump to obtain export licenses.

Citing sources, the Financial Times (FT) reported on the 10th (local time) that Nvidia has agreed to pay the government 15% of the revenue from its H20 chip sales in China, and that AMD will pay 15% of the revenue from its MI308 chips. The U.S. government reportedly has not yet decided how it will use the funds.

Earlier, on the 8th, the FT reported that the U.S. Commerce Department's Bureau of Industry and Security had begun issuing export licenses to Nvidia, two days after Nvidia CEO Jensen Huang met with President Trump on the 6th to discuss export permits. Export licenses for AMD's shipments to China have also begun to be issued.

The FT said, “It is unprecedented for a U.S. company to pay a portion of its sales to the government to obtain export licenses,” adding that the arrangement “fits the approach of the Trump administration, which has pressed for investment in the U.S. to avoid tariffs.”

The resumption of H20 sales has drawn criticism in the U.S. Security experts warn that the H20 could be used to strengthen China’s military and AI capabilities. Liza Tobin, a China expert who served on the National Security Council (NSC) during the first Trump administration, remarked sarcastically that Beijing would relish watching the U.S. government turn export licenses into a revenue source, asking, “Will Lockheed Martin be allowed to sell F-35 fighter jets to China next for a 15% commission?”

AMD did not respond to the FT’s request for comment. Nvidia, without denying that it had reached such an agreement, said, “We abide by the rules the U.S. government has set for participation in markets around the world.”





DVIDS – News – ARMY INTELLIGENCE



By Holly Comanse

Artificial intelligence (AI) may be trending, but it’s nothing new. Alan Turing, the English mathematician often called the “father of theoretical computer science,” formalized the concept of the algorithm for solving mathematical problems and proposed a test of machine intelligence. Not long after, the very first AI chatbot, ELIZA, was released in 1966. However, it was the generative pre-trained transformer (GPT) model, first introduced in 2018, that provided the foundation for modern AI chatbots.

Both AI and chatbots continue to evolve with many unknown variables related to accuracy and ethical concerns, such as privacy and bias. Job security and the environmental impact of AI are other points of contention. While the unknown may be distressing, it’s also an opportunity to adjust and adapt to an evolving era.

The 1962 television cartoon “The Jetsons” presented the concept of a futuristic family in the year 2062, with high-tech characters that included Rosie the Robot, the family’s maid, and an intelligent robotic supervisor named Uniblab. Such a prediction isn’t much of a stretch anymore. Robotic vacuums, which can autonomously clean floors, and other personal AI assistant devices are now commonplace in the American household. A recent report valued the global smart home market at $84.5 billion in 2024 and predicted it would reach $116.4 billion by 2029.

ChatGPT, released in 2022, is a widely used AI chat application with some 400 million active users. It can browse the internet on its own, allowing for more up-to-date results, but it is geared toward general conversation rather than industry-specific information. The popularity and limitations of ChatGPT led some organizations to develop their own AI chatbots that can reflect current events, protect sensitive information and deliver search results on company-specific topics.

One of those organizations is the U.S. Army, which now has an Army-specific chatbot known as CamoGPT. Currently boasting 75,000 users, CamoGPT began development in the fall of 2023 and was first deployed in the spring of 2024. Live data matters for the Army, as it does for other organizations that build their own AI chat applications; CamoGPT does not yet have internet access because it is still a prototype, but connecting it to the internet is a goal, as is responding accurately to questions involving current statistics and high-stakes information. What’s more, CamoGPT can process classified data on SIPRNet and unclassified information on NIPRNet.

THE MORE THE MERRIER
Large language models (LLMs) are the AI models behind chatbots that can understand and generate human language based on their inputs. LLMs undergo extensive training, require copious amounts of data and can be tedious to create, but they can process and respond to information much as a human would. Initially, the information fed to the model must be entered manually by humans until a pattern is established, at which point the computer can take over. Keeping facts up to date can be a daunting task given the breadth of data from around the world that AI is expected to process.

Aidan Doyle, a data engineer at the Army Artificial Intelligence Integration Center (AI2C), works on a team of three active-duty service members, including another data engineer and a data analyst, as well as four contracted software developers and one contracted technical team lead. “It’s a small team, roles are fluid, [and] everyone contributes code to Camo[GPT],” Doyle said.

Doyle’s team is working to transition CamoGPT into a program of record and put more focus into developing an Army-specific LLM. “An Army-specific LLM would perform much better at recognizing Army acronyms and providing recommendations founded in Army doctrine,” Doyle said. “Our team does not train LLMs; we simply host published, open-source models that have been trained by companies like Meta, Google and Mistral.”

The process of training LLMs involves pre-training the model by showing it as many examples of natural language as possible from across the internet. Everything from greetings to colloquialisms must be included so it can mimic human conversation. Sometimes supervised learning on specific information is needed during the fine-tuning step. Then the model generates different answers to questions, and humans evaluate and annotate the responses, flagging problems that arise. Once preferred responses are identified, developers adjust the model accordingly; this post-training step is called reinforcement learning from human feedback, or alignment. Finally, in the self-play step, the model generates both the questions and the answers itself. When the model is ready, it is deployed.
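
To make that sequence concrete, the outline below is a minimal, illustrative Python sketch of the stages just described (pre-training, supervised fine-tuning, alignment via human feedback, self-play, deployment). It is not CamoGPT’s or any vendor’s actual pipeline; the Model class, function names and toy data are stand-ins invented purely for illustration.

from dataclasses import dataclass, field

@dataclass
class Model:
    # Records which stage the model went through and how many examples it saw.
    history: list = field(default_factory=list)

    def expose(self, stage: str, examples) -> None:
        self.history.append((stage, len(list(examples))))

def pretrain(model, web_text):
    # Broad exposure to natural language gathered from across the internet.
    model.expose("pre-training", web_text)
    return model

def fine_tune(model, labeled_examples):
    # Supervised learning on specific, human-labeled question/answer pairs.
    model.expose("supervised fine-tuning", labeled_examples)
    return model

def align(model, annotated_responses):
    # Humans evaluate and annotate model answers; developers adjust accordingly
    # (reinforcement learning from human feedback, or alignment).
    model.expose("alignment (RLHF)", annotated_responses)
    return model

def self_play(model, rounds: int):
    # The model generates both questions and answers itself before deployment.
    model.expose("self-play", range(rounds))
    return model

if __name__ == "__main__":
    m = Model()
    m = pretrain(m, ["hello there", "how's it going?"])
    m = fine_tune(m, [("What does AI2C stand for?", "Army Artificial Intelligence Integration Center")])
    m = align(m, [("candidate answer A", "preferred"), ("candidate answer B", "flagged")])
    m = self_play(m, rounds=3)
    print(m.history)  # order of training stages applied before the model is deployed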

A LITTLE TOO CREATIVE
The use of AI in creative fields faces significant challenges, including the potential for plagiarism and inaccuracy. Artists spend a great deal of time creating work that AI can easily duplicate without crediting them, and AI can also repackage copyrighted material. It can be difficult to track down the original source of something that was generated with AI.

AI can, and sometimes does, introduce inaccuracies. When AI fabricates information unintentionally, it is called a hallucination: a model can take patterns it recognizes from one context and present them as fact in another. Facts presented by AI should always be verified, and tools such as CamoGPT often carry a disclaimer that hallucinations are possible. Journalists and content creators should be cautious with AI, as they run the risk of spreading misinformation.

Images and videos can also be intentionally manipulated. For example, AI-generated content on social media can be posted for shock value. Sometimes it’s easy to spot when something has been created by AI, and you can take it with a grain of salt. In other cases, social media trends can go viral before they are vetted.

For these reasons, the Army decided to implement CamoGPT. Not only can it already process classified information securely, but continued development is aimed at minimizing errors in its responses.

CONCLUSION
It’s becoming clear that analog is on the way out. Even long-established search engines such as Google have started to prioritize AI-generated summaries in their results. Technology like AI, LLMs and other chatbots can save time and automate tedious tasks, increasing productivity and efficiency. CamoGPT is still evolving, and the team at AI2C is working hard to improve its accuracy and abilities. Other AI systems within the Army are still being developed, but the potential is limitless. While we may not be living in the future that the creators of “The Jetsons” predicted, we’re getting closer. In another 37 years, when 2062 rolls around, we may all be using flying cars, and those vehicles just might drive themselves with the help of AI.

For more information, go to https://www.camogpt.army.mil/camogpt.

HOLLY COMANSE provides contract support to the U.S. Army Acquisition Support Center from Honolulu, Hawaii, as a writer and editor for Army AL&T magazine and TMGL, LLC. She previously served as a content moderator and data specialist training artificial intelligence for a news app. She holds a B.A. in journalism from the University of Nevada, Reno.







Big tech is offering AI tools to California students. Will it save jobs?


By Adam Echelman, CalMatters

""
Students work in the library at San Bernardino Valley College on May 30, 2023. California education leaders are striking deals with tech companies to provide students with opportunities to learn AI. Photo by Lauren Justice for CalMatters

This story was originally published by CalMatters. Sign up for their newsletters.

As artificial intelligence replaces entry-level jobs, California’s universities and community colleges are offering a glimmer of hope for students: free AI training that will teach them to master the new technology. 

“You’re seeing in certain coding spaces significant declines in hiring for obvious reasons,” Gov. Gavin Newsom said Thursday during a press conference from the seventh floor of Google’s San Francisco office.

Flanked by leaders of California’s higher education systems, he called attention to recent layoffs at Microsoft, at Google’s parent company, Alphabet, and at Salesforce, whose tower just a few blocks away houses what is still the city’s largest private employer.

Now, some of those companies — including Google and Microsoft — will offer a suite of AI resources for free to California schools and universities. In return, the companies could gain access to millions of new users.

The state’s community colleges and its California State University campuses are “the backbone of our workforce and economic development,” Newsom said, just before education leaders and tech executives signed agreements on AI.

The new deals are the latest developments in a frenzy that began in November 2022, when OpenAI publicly released the free artificial intelligence tool ChatGPT, forcing schools to adapt.

The Los Angeles Unified School District implemented an AI chatbot last year, only to cancel it three months later without disclosing why. San Diego Unified teachers started using AI software that suggested what grades to give students, CalMatters reported. Some of the district’s board members were unaware that the district had purchased the software. 

Last month, the company that oversees Canvas, a learning management system popular in California schools and universities, said it would add “interactive conversations in a ChatGPT-like environment” to its software.

To combat potential AI-related cheating, many K-12 and college districts are using a feature from the software company Turnitin to detect plagiarism, but a CalMatters investigation found that the software sometimes accused students whose work was their own.

Mixed signals?

These deals are sending mixed signals, said Stephanie Goldman, the president of the Faculty Association of California Community Colleges. “Districts were already spending lots of money on AI detection software. What do you do when it’s built into the software they’re using?”

Don Daves-Rougeaux, a senior adviser for the community college system, acknowledged the potential contradiction but said it’s part of a broader effort to keep up with the rapid pace of changes in AI. He said the community college system will frequently reevaluate the use of Turnitin along with all other AI tools. 

California’s community college system is responsible for the bulk of job training in the state, even though it receives the least state funding per student.

“Oftentimes when we are having these conversations, we are looked at as a smaller system,” said Daves-Rougeaux. The state’s 116 community colleges collectively educate roughly 2.1 million students.

In the deals announced Thursday, the community college system will partner with Google, Microsoft, Adobe and IBM to roll out additional AI training for teachers. Daves-Rougeaux said the system has also signed deals that will allow students to use exclusive versions of Google’s counterpart to ChatGPT, Gemini, and Google’s AI research tool, NotebookLM. Daves-Rougeaux said these tools will save community colleges “hundreds of millions of dollars,” though he could not provide an exact figure.

“It’s a tough situation for faculty,” said Goldman. “AI is super important but it has come up time and time again: How do you use AI in the classroom while still ensuring that students, who are still developing critical thinking skills, aren’t just using it as a crutch?”

One concern is that faculty could lose control over how AI is used in their classrooms, she added.

The K-12 system and the California State University system are forming their own tech deals. Amy Bentley-Smith, a spokesperson for the Cal State system, said it is working on its own AI programs with Google, Microsoft, Adobe and IBM, as well as Amazon Web Services, Intel, LinkedIn, OpenAI and others.

Angela Musallam, a spokesperson for the state government operations agency, said California high schools are part of the deal with Adobe, which aims to promote “AI literacy,” the idea that students and teachers should have basic skills to detect and use artificial intelligence.

Musallam said that, much like the community college system, which is governed by local districts, individual K-12 districts would need to approve any deal.

Will deals make a difference to students, teachers?

Experts say it’s too early to tell how effective AI training will actually be.

Justin Reich, an associate professor at MIT, said a similar frenzy took place 20 years ago when teachers tried to teach computer literacy. “We do not know what AI literacy is, how to use it, and how to teach with it. And we probably won’t for many years,” Reich said. 

The state’s new deals with Google, Microsoft, Adobe and IBM allow these tech companies to recruit new users — a benefit for the companies — but the actual lessons aren’t time-tested, he said. 

“Tech companies say: ‘These tools can save teachers time,’ but the track record is really bad,” said Reich. “You cannot ask schools to do more right now. They are maxed out.”

Erin Mote, the CEO of an education nonprofit called InnovateEDU, said she agrees that state and education leaders need to ask critical questions about the efficacy of the tools that tech companies offer but that schools still have an imperative to act. 

“There are a lot of rungs on the career ladder that are disappearing,” she said. “The biggest mistake we could make as educators is to wait and pause.”

Last year, the California Community Colleges Chancellor’s Office signed an agreement with NVIDIA, a technology infrastructure company, to offer AI training similar to the kinds of lessons that Google, Microsoft, Adobe and IBM will deliver. 

Melissa Villarin, a spokesperson for the chancellor’s office, said the state won’t share data about how the NVIDIA program is going because the cohort of teachers involved is still too small. 

This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.





Warren County Schools hosts AI Workshop


KENTUCKY — On this week’s program, we’re keeping you up to date on happenings within Kentucky’s government, which includes ongoing work this summer with legislative committees and special task forces in Frankfort.

During this “In Focus Kentucky” segment, reporter Aaron Dickens shared how leaders in Warren County Public Schools are helping educators bring their new computer science knowledge to the front of classrooms.

Also in this segment, we shared details about the U.S. Department of Energy’s selection of the Paducah Gaseous Diffusion Plant site in Paducah, Kentucky, as one of four sites for the development of artificial intelligence data centers and associated energy infrastructure. The initiative is reportedly part of the Trump administration’s plan to accelerate AI development by leveraging federal land to establish high-performance computing facilities and reliable energy sources for the burgeoning AI industry.

You can watch the full “In Focus Kentucky” segment in the player above.



