AI Research
Gilead Sciences sets $32bn plan for AI-geared pharma factories in the US

Gilead Sciences is building an AI-enabled lab in Foster City, California, as the first part of a $32bn investment scheme that includes a number of new high-tech Industry 4.0 constructions. Due to open in 2026, the site will drive oncology and inflammation research while boosting US biopharma growth.
In sum – what to know:
AI-enabled lab – pharma firm Gilead Sciences is building a five-storey, 182,000 sq ft AI research hub in California.
$32bn investment – the facility is part of a five-year investment plan spanning new and refurbished facilities.
Economic impact – value creation is expected to top $43bn through capital investment and jobs.
US pharmaceuticals firm Gilead Sciences is building an AI factory at its headquarters in Foster City, California. The diggers are in, as of last week, and the facility will open in 2026 as a new ‘technical development centre’ for research into oncology and inflammation drugs. “Everything from autonomous robots to augmented reality and digital twins will be part of the centre,” it said in a statement. The project is part of a $32 billion investment plan in its home market.
The new five-storey facility, covering 182,000 square feet (17,000 square metres), is billed as “one of the most sustainable and digitally enabled labs in the country”. It will take 18 months to complete, the company said – so it will not be fully operational until the end of 2026 – and will operate as “AI-enabled infrastructure” for 300 chemists, engineers, and scientists. Gilead has promised to deliver more than 10 new “transformational” medicines by the end of the decade.
The firm’s $32 billion plan, which runs across five years to the end of the decade, also covers two more new high-tech research and production facilities and three refurbished manufacturing sites. The plan is to invest in new technology and advance its engineering initiatives, it said. The site at Foster City is a “cornerstone” of the whole scheme. Gilead said the plan will create around $43 billion in value for the US economy over the period through direct capital investment and job creation.
Stacey Ma, executive vice president of pharmaceutical development and manufacturing (PDM), said: “This new facility… connects research to clinical development to commercialization. This is where our PDM colleagues will turn concepts into reality, molecules into products that benefit patients worldwide… Because we’re in the Bay Area, we can tap into some of the most advanced technology ecosystems in the world to create a hub for collaboration and cutting-edge science.”
She added: “It’s also a statement of commitment: we’re continuing to invest in US-based manufacturing and R&D to fuel future growth… It shows Gilead is committed to innovation and to preserving what makes us unique – our collaborative culture… and reflects our ambition to be industry leaders in biologics, building on our proven strength in small molecules and extending that same excellence into new areas. It’s an exciting moment of possibility.”
Jamie Moore, senior vice president and global head of technical development, said: “Gilead has thrived because of close collaboration between research and process development. That co-location model helped us move quickly and deliver cutting-edge innovation in virology. With the new centre, we will have the opportunity to replicate that success in biologics and build on it in an even more ambitious way. A blank slate means we can hand pick the technology we want, which allows us to leapfrog, creating truly world-class capabilities.”
AI Research
AI & Elections – McCourt School – Public Policy

How can artificial intelligence transform election administration while preserving public trust? This spring, the McCourt School of Public Policy partnered with The Elections Group and Discourse Labs to tackle this critical question at a groundbreaking workshop.
The day-long convening brought together election officials, researchers, technology experts and civil society leaders to chart a responsible path forward for AI in election administration. Participants focused particularly on how AI could revolutionize voter-centric communications—from streamlining information delivery to enhancing accessibility.
The discussions revealed both promising opportunities for public service innovation and legitimate concerns about maintaining institutional trust in our democratic processes. Workshop participants developed a comprehensive set of findings and actionable recommendations that could shape the future of election technology.
Expert Insights from Georgetown’s Leading Researchers
To unpack the workshop’s key insights, we spoke with two McCourt School experts who are at the forefront of this intersection between technology and democracy:
Ioannis Ziogas is an Assistant Teaching Professor at the McCourt School of Public Policy, an Assistant Research Professor at the Massive Data Institute, and Associate Director of the Data Science for Public Policy program. His work bridges the gap between cutting-edge data science and real-world policy challenges.

Lia Merivaki is an Associate Teaching Professor at the McCourt School of Public Policy and Associate Research Professor at the Massive Data Institute, where she focuses on the practical applications of technology in democratic governance, particularly election integrity and voter confidence.
Together, they address five essential questions about AI’s role in election administration—and what it means for voters, officials and democracy itself.
Q1
How is AI currently being used in election administration, and are there particular jurisdictions that are leading in adoption?
Ioannis: When we talk about AI in elections, we need to clarify that it is not a single technology but a family of approaches, from predictive analytics to natural language processing to generative AI. In practice, election officials are already routinely using generative AI for communication purposes such as drafting social media posts and shaping public-facing messages. These efforts aim to increase trust in the election process and make information more accessible. Some offices have even experimented with using generative AI to design infographics, though this can be tricky due to hallucinations or inaccuracies. More recently, local election officials have been exploring AI to streamline staff training and operations, or to summarize complex legal documents.
Our work focuses on alerting election officials to the limitations of generative AI, such as model drift and bias propagation. A key distinction we emphasize in our research is between AI as a backend administrative tool (which voters may never see) and AI as a direct interface with the public (where voter trust and transparency become central). We believe that generative AI tools can be used in both contexts, provided that there is awareness of the challenges and limitations.
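For illustration only, the short sketch below shows one way an election office might use a generative model as a public-facing drafting aid while keeping it anchored to facts verified by staff, which is the kind of safeguard Ziogas describes. The OpenAI client call is a standard pattern, but the model name, the prompt wording and the example facts are assumptions made for this sketch, not tools or text drawn from the workshop.

```python
# Illustrative sketch: drafting a voter-information post from facts that
# election staff have already verified, so the model rephrases rather than
# invents details. Model name, prompt and facts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

verified_facts = [
    "Early voting runs October 21 to November 1, 9am to 5pm.",        # hypothetical
    "Mail ballots must be received by 7pm on Election Day.",          # hypothetical
    "The full list of accepted ID is on the county elections website.",
]

prompt = (
    "Draft a short, plain-language social media post for voters. "
    "Use ONLY the facts below; do not add dates, locations or rules "
    "that are not listed.\n- " + "\n- ".join(verified_facts)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",                 # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0.3,                     # keep the phrasing conservative
)

draft = response.choices[0].message.content
print(draft)  # a human official still reviews and approves before posting
```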
Lia: Election officials have been familiar with AI for quite some time, primarily in the context of understanding how to mitigate AI-generated misinformation. A leader in this space has been Arizona Secretary of State Adrian Fontes, who conducted a first-of-its-kind deepfake detection tabletop exercise in preparation for the 2024 election cycle.
We’ve had conversations with election officials in California, New Mexico, North Carolina, Florida, Maryland and others whom we call early adopters, with many more being ‘AI curious.’
Q2
Security is always a big concern when it comes to the use of AI. Talk about what risks are introduced by bringing AI into election administration, and conversely, how AI can help detect and prevent any type of election interference and voter fraud.
Ioannis: From my perspective, the core security challenge is not only technical but also about privacy and trust. AI systems, by design, rely on large volumes of data. In election contexts, this often includes sensitive voter information. Even when anonymized, the use of personal data raises concerns about surveillance, profiling, or accidental disclosure. Another risk relates to delegating sensitive tasks to AI, which can render election systems vulnerable to adversarial attacks or hidden biases baked into the models.
At the same time, AI can support security: machine learning can detect coordinated online influence campaigns, identify anomalous traffic to election websites, or flag irregularities that warrant further human review. In short, I view AI as both a potential shield and a potential vulnerability, which is why careful governance and transparency are essential. That is why I believe it is critical to pair AI adoption with clear safeguards, training and guidance, so that officials can use these tools confidently and responsibly.
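As a rough illustration of the kind of traffic anomaly flagging Ziogas mentions, the sketch below applies a rolling z-score to hourly request counts for an election website and refers unusual hours to a human analyst. The window, threshold and data are invented for the example and are not drawn from any deployed system.

```python
# Minimal sketch: flag hours whose traffic deviates sharply from the
# preceding window, and queue them for human review rather than acting
# automatically. All numbers are synthetic and illustrative.
from statistics import mean, stdev

def flag_anomalies(hourly_requests, window=24, threshold=4.0):
    """Return (hour, count, z-score) tuples for hours that deviate
    sharply from the preceding `window` hours."""
    flagged = []
    for i in range(window, len(hourly_requests)):
        history = hourly_requests[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue
        z = (hourly_requests[i] - mu) / sigma
        if abs(z) > threshold:
            flagged.append((i, hourly_requests[i], round(z, 1)))
    return flagged

# A day and a half of synthetic traffic with one suspicious spike.
traffic = [900 + (i % 24) * 10 for i in range(36)]
traffic[30] = 9500  # e.g. a burst consistent with scraping or a probe

for hour, count, z in flag_anomalies(traffic):
    print(f"hour {hour}: {count} requests (z={z}) -> refer to analyst")
```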
Lia: A potential risk we are trying to mitigate is the impact on voter trust of relying on AI for important administrative tasks. For instance, voters who call or email their election official expecting to speak with a person, but instead interact with a chatbot, may feel disappointed and, in turn, trust neither the information nor the election official. There is also some evidence that voters do not trust information generated with AI, particularly when its use is disclosed.
As for detecting and preventing irregularities, over-reliance on AI can be problematic and can lead to disenfranchisement. To illustrate, AI can help identify individuals in voter records whose information is missing, which would seemingly make the process of maintaining accurate lists more efficient. The election office can send a letter to these individuals to verify they are citizens and ask for their information to be updated. This seems like a sound practice; however, it violates federal law, and it risks making eligible voters feel intimidated or having their eligibility challenged by bad actors. The reality is that maintaining voter records is a highly complex process, and data entry errors are very common. Deploying AI models to substitute for existing practices in election administration such as voter list maintenance – with the goal of detecting whether non-citizens register or whether deceased voters remain on voter rolls – can harm voters and undermine trust.
Q3
What are the biggest barriers to AI adoption in election administration – technical, financial, or political?
Lia: There are significant skills and knowledge gaps among election officials when it comes to utilizing technology generally, and we see such gaps with AI adoption, which is not surprising. Aside from technical barriers, election offices are under-resourced, especially at the local jurisdiction level. We observe that policies around AI adoption in public administration generally, and election administration specifically, are sparse at the moment.
While the election community has invested significant resources in safeguarding election infrastructure against the threats of AI, we are not yet seeing a proportional effort to educate and prepare election officials on how to use AI to improve elections. To better understand the landscape of AI adoption and how best to support the election community, we hosted an exploratory workshop at McCourt in April 2025, in collaboration with The Elections Group and Discourse Labs. In this workshop, we brought together election officials, industry, civil society leaders and other practitioners to discuss how AI tools are used by election officials, what technical barriers exist and how to move forward with designing policies on ethical and responsible use of AI in election administration. Through this workshop, we identified a list of priorities that require close collaboration among the election community, academia, civil society and industry to ensure that the adoption of AI is done responsibly, ethically and efficiently, without negatively affecting the voter experience.
Ioannis: I would highlight that barriers are not just about resources but also about institutional design. Election officials often work in environments of high political scrutiny but low budgets and limited technical staff. Introducing AI tools into that context requires financial investment and clear guidance on how to evaluate these systems: what counts as success, how to measure error rates and how to align tools with federal and state regulations. Beyond that, there is a cultural barrier. Many election officials are understandably cautious; they’ve spent the past decade defending democracy against disinformation and cyber threats, so embracing new technologies requires trust and confidence that AI will not introduce new risks. That is why partnerships with universities and nonpartisan civil-society groups are critical: they provide a space to pilot ideas, build capacity, and translate research into practice.
Our two priorities are to help narrow the skills gap and to build frameworks for ethical and responsible AI use. At McCourt, we’re collaborating with Arizona State University’s Mechanics of Democracy Lab, which is developing training materials and custom AI products for election officials. Drawing on our background in AI and elections, we aim to provide election officials with a practical resource that maps out both the risks and the potential of these tools, and that helps them identify ideal use cases where AI can enhance efficiency without compromising trust or voter experience.
Q4
Looking ahead, what emerging AI technologies could transform election administration in the next 5-10 years?
Lia: It’s hard to predict, really. At the moment we are seeing high interest from vendors and election officials in integrating AI into elections. Concerns about security and privacy will undoubtedly shape the discussion about what AI can do for the election infrastructure. We may well see a liberal approach to using AI technologies to communicate with voters, produce training materials and translate election materials into languages other than English, among other uses. That said, elections are run by humans, and maintaining public trust relies on having “humans in the – elections – loop.” This, coupled with ongoing debates about how AI should or should not be regulated, may result in more guardrails and restrictions over time.
Ioannis: One promising direction is multimodal AI: systems that process text, audio and images together. For election officials, this could mean automatically generating plain-language guides, sign-language translations, or sample audio ballots to improve accessibility. But these same tools can amplify risks if their limitations are not understood. For that reason, any adoption will need to be coupled with auditing, transparency and education for election staff, so they view AI as a supportive tool rather than a replacement platform or a black box.
Q5
What guidelines or regulatory frameworks are needed to govern AI use in elections?
Ioannis: We urgently need a baseline framework that establishes what is permissible, what requires disclosure, and what is off-limits. Today, election officials are experimenting with AI in a largely unregulated space, and they are eager for guidance. A responsible framework should include at least three elements: a) transparency: voters should know when AI-generated materials are used in communications; b) accountability: human oversight should retain the final authority, with AI serving only as a support; and c) auditing: independent experts must be able to test and evaluate these tools for accuracy, bias and security.
AI Research
AI Transformation in NHS Faces Key Challenges: Study

Implementing artificial intelligence (AI) in NHS hospitals is far harder than initially anticipated, with complications around governance, harmonisation with old IT systems, finding the right AI tools and staff training, finds a major new UK study led by UCL researchers.
The authors of the study, published in The Lancet eClinicalMedicine, say the findings should provide timely and useful learning for the UK Government, whose recent 10-year NHS plan identifies digital transformation, including AI, as a key platform for improving the service and patient experience.
In 2023, NHS England launched a programme to introduce AI to help diagnose chest conditions, including lung cancer, across 66 NHS hospital trusts in England, backed by £21 million in funding. The trusts are grouped into 12 imaging diagnostic networks: these hospital networks mean more patients have access to specialist opinions. Key functions of these AI tools included prioritising critical cases for specialist review and supporting specialists’ decisions by highlighting abnormalities on scans.
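To picture how the triage function works in practice, the sketch below reorders a chest-imaging worklist so that studies a model scores as likely abnormal reach a radiologist sooner, while every study still receives a human read. The field names, scores and threshold are invented for illustration and do not describe the specific NHS tools.

```python
# Illustrative sketch of AI-assisted triage for a chest-imaging worklist:
# the suspicion score only changes the order of human review, it never
# removes a study from the queue. Fields and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    modality: str          # e.g. "CXR" or "CT"
    ai_suspicion: float    # 0.0-1.0 score from the vendor's model
    waited_hours: float    # time already spent on the worklist

URGENT_THRESHOLD = 0.8     # assumed cut-off for "prioritise for review"

def triage_order(worklist):
    """Urgent-flagged studies first; within each group, higher scores
    first, with waiting time as the tie-break so nothing is starved."""
    return sorted(
        worklist,
        key=lambda s: (s.ai_suspicion < URGENT_THRESHOLD,
                       -s.ai_suspicion,
                       -s.waited_hours),
    )

worklist = [
    Study("S-001", "CXR", ai_suspicion=0.12, waited_hours=5.0),
    Study("S-002", "CT",  ai_suspicion=0.91, waited_hours=0.5),
    Study("S-003", "CXR", ai_suspicion=0.86, waited_hours=2.0),
]

for s in triage_order(worklist):
    print(s.study_id, f"score={s.ai_suspicion:.2f}", f"waited={s.waited_hours}h")
```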
Funded by the National Institute for Health and Care Research (NIHR), the research was conducted by a team from UCL, the Nuffield Trust and the University of Cambridge, which analysed how procurement and early deployment of the AI tools went. It is one of the first studies to analyse real-world implementation of AI in healthcare.
Evidence from previous studies, mostly laboratory-based, suggested that AI might benefit diagnostic services by supporting decisions, improving detection accuracy, reducing errors and easing workforce burdens.
In this UCL-led study, the researchers reviewed how the new diagnostic tools were procured and set up through interviews with hospital staff and AI suppliers, identifying any pitfalls but also any factors that helped smooth the process.
They found that setting up the AI tools took longer than the programme’s leadership had anticipated. Contracting took between four and 10 months longer than planned, and by June 2025, 18 months after contracting was meant to be completed, just over a third of the hospital trusts (23 out of 66) were not yet using the tools in clinical practice.
Key challenges included engaging clinical staff who already had high workloads, embedding the new technology in ageing and varied NHS IT systems across dozens of hospitals, and a general lack of understanding of, and scepticism about, using AI in healthcare among staff.
The study also identified important factors that helped embed AI, including national programme leadership, local imaging networks sharing resources and expertise, high levels of commitment from the hospital staff leading implementation, and dedicated project management.
The researchers concluded that while “AI tools may offer valuable support for diagnostic services, they may not address current healthcare service pressures as straightforwardly as policymakers may hope”. They recommend that NHS staff be trained in how AI can be used effectively and safely, and that dedicated project management be used to implement schemes like this in the future.
First author Dr Angus Ramsay (UCL Department of Behavioural Science and Health) said: “In July ministers unveiled the Government’s 10-year plan for the NHS, of which a digital transformation is a key platform.
“Our study provides important lessons that should help strengthen future approaches to implementing AI in the NHS.
“We found it took longer to introduce the new AI tools in this programme than those leading the programme had expected.
“A key problem was that clinical staff were already very busy – finding time to go through the selection process was a challenge, as was supporting integration of AI with local IT systems and obtaining local governance approvals. Services that used dedicated project managers found their support very helpful in implementing changes, but only some services were able to do this.
“Also, a common issue was the novelty of AI, suggesting a need for more guidance and education on AI and its implementation.
“AI tools can offer valuable support for diagnostic services, but they may not address current healthcare service pressures as simply as policymakers may hope.”
The researchers conducted their evaluation between March and September last year, studying 10 of the participating networks and focusing in depth on six NHS trusts. They interviewed network teams, trust staff and AI suppliers, observed planning, governance and training and analysed relevant documents.
Some of the imaging networks and many of the hospital trusts within them were new to procuring and working with AI.
The problems involved in setting up the new tools varied – for example, in some cases those procuring the tools were overwhelmed by a huge amount of very technical information, increasing the likelihood of key details being missed. Consideration should be given to creating a national approved shortlist of potential suppliers to facilitate procurement at local level, the researchers said.
Another problem was an initial lack of enthusiasm among some NHS staff for the new technology in this early phase, with some more senior clinical staff raising concerns about AI making decisions without clinical input and about where accountability would lie in the event a condition was missed. The researchers found the training offered to staff did not address these issues sufficiently across the wider workforce – hence their call for early and ongoing training on future projects.
In contrast, the study team found the process of procurement was supported by advice from the national team and by imaging networks learning from each other. The researchers also observed high levels of commitment and collaboration between local hospital teams (including clinicians and IT) and AI supplier teams working to progress implementation within hospitals.
Senior author Professor Naomi Fulop (UCL Department of Behavioural Science and Health) said: “In this project, each hospital selected AI tools for different reasons, such as focusing on X-ray or CT scanning, and purposes, such as to prioritise urgent cases for review or to identify potential symptoms.
“The NHS is made up of hundreds of organisations with different clinical requirements and different IT systems and introducing any diagnostic tools that suit multiple hospitals is highly complex. These findings indicate AI might not be the silver bullet some have hoped for but the lessons from this study will help the NHS implement AI tools more effectively.”
Limitations
While the study has added to the very limited body of evidence on the implementation and use of AI in real-world settings, it focused on procurement and early deployment. The researchers are now studying the use of AI tools following early deployment when they have had a chance to become more embedded. Further, the researchers did not interview patients and carers and are therefore now conducting such interviews to address important gaps in knowledge about patient experiences and perspectives, as well as considerations of equity.
AI Research
Volkswagen: one billion euros to reduce costs

RHC Editorial Team: 11 September 2025, 13:59
On the first day of the IAA Mobility international trade fair in Munich, Volkswagen announced its intention to integrate artificial intelligence into all areas of its business, with the aim of generating significant cost savings. The investment will focus on the development of AI-based vehicles, industrial applications, and the expansion of high-performance IT infrastructure. According to estimates, the large-scale adoption of artificial intelligence could lead to savings of €4 billion by 2035.
The company expects that the use of AI will significantly accelerate the development of new models and bring advanced technologies to market more quickly. “For us, artificial intelligence is the key to greater speed, quality, and competitiveness along the entire value chain, from vehicle development to production,” said CIO Hauke Stars.
The focus on AI comes at a delicate time for Volkswagen, which is undergoing major transformations in two key markets: China and Germany. In Germany, the group is implementing a large-scale cost-cutting program, while in China it is focusing on innovation and the launch of new models to face growing local and international competition.
Confirming its renewal strategy, the automaker announced the launch of a new line of compact electric vehicles scheduled for next year, with the goal of selling several hundred thousand units in this segment in the medium term. Meanwhile, Volkswagen shares rose 1.3% on Tuesday, up 14.3% since the beginning of the year.
One of the reasons driving Volkswagen to invest in AI is the possibility of optimizing complex processes such as supply chain management and large-scale production. With a global network of suppliers and plants, the company could leverage artificial intelligence to predict logistical disruptions, reduce waste, and improve production planning, thus gaining a competitive advantage in an industry where efficiency and speed are crucial.
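As one hedged illustration of what predicting logistical disruptions could look like, the sketch below fits a simple classifier to a handful of made-up shipment features and flags high-risk deliveries for a planner. The features, data and model choice are assumptions made for this example and do not describe Volkswagen’s actual systems.

```python
# Minimal, illustrative sketch of predicting shipment delays from a few
# synthetic features. The feature set and data are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

lead_time_var = rng.uniform(0, 5, n)    # days of historical lead-time variability
route_km = rng.uniform(50, 2000, n)     # route length in kilometres
weather_alert = rng.integers(0, 2, n)   # 1 if a weather warning is active

# Synthetic ground truth: risk grows with variability, distance and weather.
risk = 0.8 * lead_time_var + 0.001 * route_km + 1.5 * weather_alert
delayed = (risk + rng.normal(0, 1, n) > 3.0).astype(int)

X = np.column_stack([lead_time_var, route_km, weather_alert])
model = LogisticRegression(max_iter=1000).fit(X, delayed)

# Score a hypothetical inbound shipment and flag it if the risk is high.
shipment = np.array([[4.2, 1500.0, 1]])
p_delay = model.predict_proba(shipment)[0, 1]
print(f"estimated delay probability: {p_delay:.2f}")
if p_delay > 0.7:
    print("flag for planner review: consider rerouting or buffer stock")
```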
Furthermore, the integration of AI represents a strategic step towards addressing future mobility challenges. AI technologies underpin autonomous driving, the personalization of services, and the onboard and predictive analysis of vehicle data. By focusing on these innovations, Volkswagen aims not only to contain costs but also to strengthen its position as a leader in the transition to a smarter, safer, and more sustainable mobility ecosystem.

The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.