AI Research

Mark Cuban says the US has got to keep investing in research if it wants to have a chance of beating China at AI

“We need our Ph.D.s, our scientists, our experts to stay here and contribute to society, and their IP to make American AI models the global leaders,” Mark Cuban told Business Insider on Sunday. (Jeff Schear via Getty Images)
  • Mark Cuban says the US should not be paring down its research spending.

  • Cuban said the work produced can be licensed to AI companies to enhance their models.

  • He said this would offset research costs and keep the US ahead of China in AI.

“Shark Tank” star Mark Cuban says the US can beat China at AI if it continues “investing in research of all kinds as a country.”

“The IP we create domestically is what the frontier models can buy or invest in to define their differentiation and advance forward,” Cuban wrote on X on Saturday in response to a post by David Sacks, the White House’s AI and crypto czar, on the state of the AI race.

When asked about his X post on Sunday, Cuban told Business Insider that American research is “important, not just because of the outcome of the research itself, but its value to American frontier AI models” like ChatGPT and Gemini.

Cuban said that any unique intellectual property produced can be “licensed to the models, for a fee, to be included in their training.” This would not only offset research costs but also make the models more valuable, he added.

“The quality and depth of the research we do in this country can help us stay ahead of China and other countries in the AI race,” Cuban told Business Insider.

“We need our Ph.D.s, our scientists, our experts, to stay here and contribute to society, and their IP to make American AI models the global leaders,” he added.

Since taking office in January, President Donald Trump’s administration has been culling research grants for universities and research institutions like the National Institutes of Health (NIH).

Researchers and scientists told Business Insider’s Ayelet Sheffey in April that the cuts could stifle innovation and result in brain drain.

“It absolutely endangers the United States’ position as the global leader in medical research. And for that, we will pay,” Peter Lurie, a recipient of an NIH grant terminated in March, told Sheffey.

Staying ahead in the AI race has been a primary focus for the Trump administration, which unveiled its “AI Action Plan” last month. The 28-page plan calls for a lighter-touch approach to AI regulation than that taken by Trump’s predecessor, President Joe Biden.

In January, Chinese AI startup DeepSeek shocked the world with its high-performing but relatively cheap AI models. Trump said he viewed DeepSeek’s accomplishment “as a positive, as an asset” for America.

“The release of DeepSeek, AI from a Chinese company, should be a wake-up call for our industries that we need to be laser-focused on competing to win,” Trump told GOP lawmakers in January.


AI Research

Open-source AI trimmed for efficiency produced detailed bomb-making instructions and other harmful responses before retraining

  • UCR researchers retrain AI models so safety stays intact when they are trimmed for smaller devices
  • Changing a model’s exit layer strips its built-in protections; retraining restores its refusals of unsafe requests
  • A study using LLaVA 1.5 showed that trimmed models refused dangerous prompts after retraining

Researchers at the University of California, Riverside are tackling the problem of weakened safety in open-source artificial intelligence models that have been adapted for smaller devices.

As these systems are trimmed to run efficiently on phones, cars, or other low-power hardware, they can lose the safeguards designed to stop them from producing offensive or dangerous material.
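To make the failure mode concrete, the sketch below shows one way a model can be trimmed for low-power hardware: keeping only the lower decoder blocks of a LLaMA-style transformer so that generation effectively exits at an earlier layer. This is a minimal illustration using the Hugging Face transformers library, not the UCR team's code; the checkpoint name and layer counts are placeholder assumptions.

```python
# Minimal sketch (not the UCR study's code): shrink a LLaMA-style model by
# keeping only its lower decoder blocks, so inference "exits" earlier.
# The checkpoint name and layer counts below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder LLaMA-style checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

keep = 24  # assumption: retain the first 24 of the checkpoint's 32 decoder blocks
model.model.layers = model.model.layers[:keep]  # drop the top blocks
model.config.num_hidden_layers = keep

# The trimmed model is cheaper to run, but the safety behavior learned by
# the full network no longer necessarily holds. The study's remedy is to
# retrain the reduced model (e.g., safety fine-tuning) so it again refuses
# dangerous prompts.
```

A truncation like this is precisely the kind of change that, per the study, can silently strip safeguards, which is why the researchers retrain the reduced network before deployment.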




AI Research

Ivory Tower: Dr Kamra’s AI research gains UN spotlight

Dr Preeti Kamra, Assistant Professor in the Department of Computer Science at DAV College, Amritsar, has been invited by the United Nations to address its General Assembly on United Nations Digital Cooperation Day, held during the High-Level Week of the 80th session of the UN General Assembly. An educator and researcher, Dr Kamra has worked extensively in the fields of emerging digital technologies and internet governance.

Dr Kamra, who holds a PhD in artificial intelligence-based technology, developed AI software to detect anxiety among students and is in the process of documenting and patenting the technology under her name. However, it was her work in internet governance that earned her the invitation to speak at the UN.

“I have been invited to speak at an exclusive, closed-door event hosted annually by the United Nations, United Nations Digital Cooperation Day, which focuses on emerging technologies worldwide. I will be the only Indian speaker at the event and my speech will focus on policies in India aimed at making the Internet more secure, safe, inclusive, and accessible,” Dr Kamra said. “There is a critical need to make the Internet multilingual, accessible and safe in India, especially with the growing use of AI in the future, making timely action imperative.”

Last year, Dr Kamra participated in the Asia-Pacific Regional Forum on Internet Governance, held in Taiwan. Her research on AI in education secured her a seat at the prestigious UN event. She believes AI in education should be promoted, despite the reservations many educators worldwide hold.

“Despite NEP 2020 and the Government of India promoting Artificial Intelligence in higher education, few state-level universities, schools, or colleges have adopted it fully. The key is to use AI productively, which requires laws and policies that regulate its usage, while controlling and monitoring potential abuse,” she explained.

The event is scheduled to take place from September 22 to 26 at the United Nations headquarters in the USA.






AI Research

Artificial Intelligence in Healthcare: Efficiency and HIPAA Risks

Healthcare professionals are finding AI to be a genuine asset for efficient communication and data organization on the job. Clinicians use AI to manage medical records and patient medications and to handle a range of medical writing and data-organization tasks. AI can provide clinical-grade language processing and time-saving strategies that simplify ICD-10 coding and help clinicians complete clinical notes more quickly.

While AI’s advancements have served as game-changers in increasing workday efficiency, clinicians must be cognizant of the perils of using AI chatbots as a means to communicate with patients. As background, AI chatbots are computer programs designed to simulate conversations with humans. In principle, these tools facilitate communication between patients and healthcare providers by offering continuous access to medical information, automating processes such as appointment scheduling and medication reminders, assessing symptoms, and recommending care and treatment.

When patient medical records and sensitive information are involved, however, how do clinicians balance the benefits of AI chatbots against the discretion that sensitive patient data demands, so that HIPAA violations are avoided? Given AI’s numerous data-collection mechanisms, including tracking of browsing activity and access to individual device information, what can be done to ensure that patient information is never exposed by even a short-lived bug or breach? And can AI companies help clinicians preserve patient confidentiality?

First, opt-out features and encryption protocols are two ways AI products already protect user data, but tech companies collaborating with healthcare providers to build HIPAA-compliant AI software would benefit the medical field even more. Second, it is imperative that healthcare professionals obtain patient consent and anonymize any patient data before enlisting the help of an AI chatbot. Legal safeguards, such as requiring patients to sign releases consenting to the use of their medical records for research, combined with proper anonymization of that data, can mitigate the legal risks associated with HIPAA compliance.
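As a hedged illustration of that anonymization step, the sketch below runs a naive, regex-based de-identification pass over a clinical note before it would be handed to any third-party chatbot. The patterns, the sample note, and the redact_phi helper are all hypothetical, and a pass like this falls well short of HIPAA’s Safe Harbor standard on its own, as the surviving free-text name shows.

```python
# Illustrative only: a naive de-identification pass over a clinical note.
# Real HIPAA de-identification (Safe Harbor's 18 identifier categories, or
# Expert Determination) requires far more than a handful of regexes.
import re

# Hypothetical patterns for a few common identifiers.
PATTERNS = {
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_phi(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Pt John Doe, MRN: 00482913, seen 03/14/2025. Callback 555-867-5309."
print(redact_phi(note))
# -> Pt John Doe, [MRN REDACTED], seen [DATE REDACTED]. Callback [PHONE REDACTED].
# Note that the free-text name survives: regexes alone are not anonymization.
```

In practice, covered entities typically pair formal de-identification with vendor business associate agreements rather than relying on ad hoc scrubbing, which is why the consent and anonymization steps described above matter.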

For further assistance in managing the risks associated with AI, healthcare providers can turn to the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) to evaluate risks related to AI systems. NIST, a non-regulatory Federal agency within the U.S. Department of Commerce, published this voluntary guidance to help entities manage the risks of AI systems and promote responsible AI development.

Leveraging the vast capabilities of artificial intelligence, alongside robust data encryption and strict adherence to HIPAA compliance protocols, will enhance the future of healthcare for patients and healthcare providers alike.



