AI Research

Reuters hires Seetharaman to cover artificial intelligence

Reuters global tech editor Kenneth Li sent out the following on Monday morning:

All,

I’m very pleased to announce that Deepa Seetharaman is returning to Reuters as a Tech Correspondent, based in San Francisco.

For Deepa, this marks a homecoming. She began her career at Reuters in New York and covered the U.S. auto industry in Detroit before moving to San Francisco to report on Amazon, building a reputation for breaking news and delivering ambitious stories from inside America’s biggest companies. She went on to spend a decade at the Wall Street Journal, where she covered some of the most consequential developments in technology, politics, and society.

At the Journal, Deepa was the lead reporter on Facebook (now Meta), where her coverage explored the company’s business, culture, and influence. Her reporting included coverage of Instagram’s impact on teenage girls and investigations into how AI systems falter in moderating racist and hateful content. More recently, she turned her focus to artificial intelligence, chronicling how advances in the technology are reshaping business models, political discourse, and cultural norms.

At Reuters, Deepa will focus on AI and OpenAI at a time when the technology is at an inflection point. With breakthroughs harder to achieve and investors pressing for returns, her work will span cutting-edge research, the strategies of the most powerful tech companies, and the global implications of AI’s rise. She will report to me and work closely with our global technology team as well as Steve Stecklow and the enterprise team. Her return also reunites her with Jeff Horwitz, who joined our San Francisco bureau in June. She starts today.

Deepa’s work has earned some of journalism’s most prestigious awards. She was part of a team that won the George Polk Award for Business Reporting and the Gerald Loeb Award in Beat Reporting.

Please join me in welcoming Deepa back to Reuters.

Ken







Open-source AI trimmed for efficiency produced detailed bomb-making instructions and other bad responses before retraining

  • UCR researchers retrain AI models to keep safety intact when they are trimmed for smaller devices
  • Exiting at earlier layers strips safety protections; retraining restores the model’s ability to block unsafe responses
  • A study using LLaVA 1.5 showed the reduced models refused dangerous prompts after retraining

Researchers at the University of California, Riverside are addressing the problem of weakened safety in open-source artificial intelligence models when they are adapted for smaller devices.

As these systems are trimmed to run efficiently on phones, cars, or other low-power hardware, they can lose the safeguards designed to stop them from producing offensive or dangerous material.
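The dynamic described above can be illustrated with a toy sketch. Everything here is hypothetical and greatly simplified, not the UCR method or the LLaVA architecture: the "model" is just a stack of functions, and "trimming for a smaller device" means exiting before the later layers, including a stand-in safety filter, ever run.

```python
# Toy illustration: a "model" as a stack of layer functions, where trimming
# means exiting early. All names and structure are hypothetical.

def run_model(layers, x, exit_at=None):
    """Apply layers in order, optionally exiting early after `exit_at` layers."""
    for i, layer in enumerate(layers):
        x = layer(x)
        if exit_at is not None and i + 1 == exit_at:
            break
    return x

def body(x):
    return x  # stand-in for the model's ordinary computation

def safety_layer(x):
    # Stand-in for safeguards that block unsafe outputs.
    return "[BLOCKED]" if "unsafe" in x else x

layers = [body, body, safety_layer]

full = run_model(layers, "unsafe request")              # safety layer runs
trimmed = run_model(layers, "unsafe request", exit_at=2)  # exits before it

print(full)     # → [BLOCKED]
print(trimmed)  # → unsafe request
```

The trimmed run skips the final layer entirely, so the filtering it provided simply never happens; the retraining approach reported here would, by analogy, teach the remaining layers to refuse on their own.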






Artificial Intelligence in Healthcare: Efficiency and HIPAA Risks

Healthcare professionals are finding AI to be nothing short of an asset in producing efficient communication and data organization on the job. Clinicians utilize AI for managing medical records, patient medications, and various medical writing and data organization tasks. AI can provide clinical-grade language processing and time-saving strategies that simplify ICD-10 coding and help clinicians complete clinical notes faster.

While AI’s advancements have served as game-changers in increasing workday efficiency, clinicians must be cognizant of the perils of using AI chatbots as a means to communicate with patients. As background, AI chatbots are computer programs designed to simulate conversations with humans. In principle, these tools facilitate communication between patients and healthcare providers by offering continuous access to medical information, automating processes such as appointment scheduling and medication reminders, assessing symptoms, and recommending care and treatment.

When patient medical records and sensitive information are involved, however, how do clinicians find the balance between utilizing AI chatbots to their benefit and exercising discretion with sensitive patient data to avoid HIPAA violations? Given AI’s numerous data collection mechanisms, including its tracking of browsing activity and its ability to access individual device information, what can be done to ensure that patient information is never subjected to even the shortest-lived bugs or breaches? Can AI companies assist clinicians in ensuring that patient confidentiality is preserved?

First, opt-out features and encryption protocols are two ways AI tools protect user data, but tech companies collaborating with healthcare providers to create HIPAA-compliant AI software would be even more beneficial to the medical field. Second, it is imperative for healthcare professionals to obtain patient consent and anonymize any patient data before enlisting the help of an AI chatbot. Legal safeguards, such as requiring patients to sign releases consenting to the use of their medical records for research, combined with proper anonymization of that data, can mitigate the legal risks associated with HIPAA compliance.
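The anonymization step can be sketched in code. This is an illustrative toy, not a compliance tool: genuine HIPAA Safe Harbor de-identification covers 18 identifier categories, while this hypothetical helper redacts only a few obvious patterns before a note might be shared with an external tool.

```python
import re

# Hypothetical redaction sketch: strip a few direct identifiers from a
# clinical note before it leaves the clinician's system. NOT sufficient
# for real HIPAA de-identification, which covers 18 identifier categories.
PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d+\b"),            # medical record numbers
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # social security numbers
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),   # calendar dates
}

def redact(note: str) -> str:
    """Replace each matched identifier with a bracketed category label."""
    for label, pattern in PATTERNS.items():
        note = pattern.sub(f"[{label}]", note)
    return note

note = "Pt seen 03/14/2024, MRN: 889123, SSN 123-45-6789, c/o chest pain."
print(redact(note))
# → Pt seen [DATE], [MRN], SSN [SSN], c/o chest pain.
```

In practice, pattern-based redaction is only a first pass; names, addresses, and free-text identifiers require more robust de-identification tooling and review.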

For further assistance in managing the risks associated with AI, healthcare providers can turn to the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) to evaluate risks related to AI systems. NIST, a non-regulatory Federal agency within the U.S. Department of Commerce, published this voluntary guidance to help entities manage the risks of AI systems and promote responsible AI development.

Leveraging the vast capabilities of artificial intelligence, alongside robust data encryption and strict adherence to HIPAA compliance protocols, will enhance the future of healthcare for patients and healthcare providers alike.






Ivory Tower: Dr Kamra’s AI research gains UN spotlight

Dr Preeti Kamra, Assistant Professor in the Department of Computer Science at DAV College, Amritsar, has been invited by the United Nations to address its General Assembly on United Nations Digital Cooperation Day, held during the High-Level Week of the 80th session of the UN General Assembly. An educator and researcher, Dr Kamra has been extensively working in the fields of emerging digital technologies and internet governance.

Holding a PhD in Artificial Intelligence-based technology, Dr Kamra developed AI software to detect anxiety among students and is currently in the process of documenting and patenting this technology under her name. However, it was her work in Internet governance that earned her the invitation to speak at the UN.

“I have been invited to speak at an exclusive, closed-door event hosted annually by the United Nations, United Nations Digital Cooperation Day, which focuses on emerging technologies worldwide. I will be the only Indian speaker at the event and my speech will focus on policies in India aimed at making the Internet more secure, safe, inclusive, and accessible,” Dr Kamra said. “There is a critical need to make the Internet multilingual, accessible and safe in India, especially with the growing use of AI in the future, making timely action imperative.”

Last year, Dr Kamra participated in the Asia-Pacific Regional Forum on Internet Governance held in Taiwan. Her research on AI in education secured her a seat at this prestigious UN event. According to her, AI in education should be promoted, contrary to the reservations many educators globally hold.

“Despite NEP 2020 and the Government of India promoting Artificial Intelligence in higher education, few state-level universities, schools, or colleges have adopted it fully. The key is to use AI productively, which requires laws and policies that regulate its usage, while controlling and monitoring potential abuse,” she explained.

The event is scheduled to take place from September 22 to 26 at the United Nations headquarters in the USA.






