
AI Research

Perplexity AI makes its play for government use


Another artificial intelligence company is staking its claim in the government marketplace. 

Perplexity AI, an AI-powered search engine, is currently piloting its tools in various federal agencies. Thousands of government workers are accessing Perplexity’s public platforms with a .gov or .mil email domain, too, and the company is already in “active discussions” with the General Services Administration on how to formalize its product offerings in the federal market. 

And, later Monday, the AI company will become the latest to offer a government-focused product suite. 

“Perplexity for Government” is designed to provide enhanced security features for federal workers who access Perplexity from whitelisted government locations, the company told FedScoop. 

“A universal truth about these public platforms is one, feds are using them,” said Jerry Ma, Perplexity’s vice president of policy and global affairs. “Two, feds are getting value out of them, but not nearly as much as they could if they were using the frontier technologies.”

The federal workforce is increasingly turning to generative AI to perform tasks and streamline workflows, as the Trump administration encourages its use in the workplace. Ma’s comments highlight how federal workers may not be waiting for formally procured platforms, even as some of the largest technology companies compete to do business with the government. 

Perplexity’s platform utilizes large language models from other companies — such as OpenAI’s GPT models or Anthropic’s Claude — to perform real-time internet searches and present summaries to users. 
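The pipeline described above — retrieve live web results, then have a model summarize them — can be sketched in miniature. This is purely illustrative: `web_search` and `llm_complete` are hypothetical placeholder functions, not Perplexity’s actual API or any real service.

```python
# Minimal sketch of a search-augmented answer pipeline, assuming
# hypothetical `web_search` and `llm_complete` backends.

def web_search(query: str) -> list[str]:
    # Placeholder: a real system would query a live search index here.
    return [f"Result snippet about {query}"]

def llm_complete(prompt: str) -> str:
    # Placeholder: a real deployment would call a model API here.
    return "Summary based on retrieved sources."

def answer(query: str) -> str:
    # Retrieve live results, then ask the model to summarize only those.
    snippets = web_search(query)
    context = "\n".join(snippets)
    prompt = f"Using only these sources, answer: {query}\n\nSources:\n{context}"
    return llm_complete(prompt)
```

The design point is that the model is grounded in retrieved text rather than answering from its training data alone.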

Under the Perplexity for Government launch, federal workers who are logged into a government network or using a federal email domain will have access to “enterprise-level security” and the top Perplexity models, which are not otherwise available on public platforms, according to Ma.

The additional security aims to protect federal and sensitive information, and Ma said Perplexity intends for the protections to extend to all government users, including those in the military and related domains. Users do not need a Perplexity account if they are on a government network. 

Beginning Monday, Perplexity will automatically identify federal networks.

“Most agencies have subnets that are either a public record or in one of these widely used sort of commercial databases out there. And we use network telemetry based on those public records and other information in order to be able to detect the requests that need to be placed under these benefits and enhanced protections,” said Ma, who most recently served as the chief information officer for the U.S. Patent and Trademark Office. 

Should the auto-identification process miss certain networks, Ma noted agency chief information officers, chief information security officers or other authorized officials can also submit any network ranges they would like explicit coverage for. 

“For whatever holes are remaining, we are giving agency CIOs the option without signing an agreement, without signing a contract or paying us a dime. Tell us what network ranges you’re worried about and we will make sure those are absolutely, positively covered as well,” Ma said. 
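As a rough illustration of the matching approach Ma describes — checking whether a request originates from a publicly recorded agency subnet or a CIO-submitted range — the standard-library `ipaddress` module suffices. The CIDR blocks below are RFC 5737 documentation addresses standing in for real government allocations; the function name is hypothetical.

```python
import ipaddress

# Hypothetical illustration only: RFC 5737 documentation ranges stand in
# for real .gov/.mil subnet allocations from public records or
# CIO-submitted ranges.
KNOWN_GOV_SUBNETS = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_government_network(request_ip: str) -> bool:
    """Return True if the request IP falls inside any registered subnet."""
    ip = ipaddress.ip_address(request_ip)
    return any(ip in subnet for subnet in KNOWN_GOV_SUBNETS)
```

A production system would presumably refresh the subnet list from its data sources rather than hard-code it, but the membership test itself is this simple.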

Perplexity says it is the only AI company that does not require an “opt-in” for security, meaning its protections are enabled by default for government workers, whereas other companies offer such security only through specific products. 

Separately, Perplexity is also announcing a government-tailored version of its Perplexity Enterprise Pro, which will cost agencies $0.25, according to a company fact sheet shared with FedScoop. Agencies can have a 15-month runway with the product, which can begin at any time within the current administration. 

Ma told FedScoop that Perplexity is “accelerating” discussions with federal officials to potentially get the product on the General Services Administration’s Multiple Award Schedule. He said he hopes agencies will reach out to Perplexity to determine the “fastest path forward.” 

“But agencies can also look forward to these offerings being officially made available under the usual governmentwide acquisition channels in due course, and we’re working very hard and we’ve had a lot of very fruitful discussions with GSA officials to make that happen,” Ma said.

The product will be FedRAMP-compliant once that approval is received, according to Ma. 

The GSA has announced a series of “OneGov” deals with other AI companies that are offering their products to the government for a steep discount. OpenAI, Anthropic and Google are selling their AI models to government agencies for $1 or less for one year, while Box and Microsoft struck similar discount deals. 

Perplexity’s interest in working with the government comes as the company considers buying the social media platform TikTok, which is currently owned by China-based ByteDance. 

When asked how a bid to buy TikTok could interact with Perplexity’s public sector work, Ma said “the short answer is we’re committed to serving feds no matter what happens on a [mergers and acquisitions] front.” 

Written by Miranda Nazzaro and Rebecca Heilweil





Spotlab.ai hiring AI research scientist for multimodal diagnostics and global health



In a LinkedIn post, Miguel Luengo-Oroz, co-founder and CEO of Spotlab.ai, confirmed the company is hiring an Artificial Intelligence Research Scientist. The role is aimed at early career researchers, postdoctoral candidates, and recent PhD graduates in AI.

Luengo-Oroz writes: “Are you a young independent researcher, postdoc, just finished your PhD (or on the way there) in AI and wondering what’s next? If you’re curious, ready to tackle tough scientific and technical challenges, and want to build AI for something that matters, this might be for you.”

Spotlab.ai targets diagnostics role with new hire

The position will focus on building and deploying multimodal AI solutions for diagnostics and biopharma research. Applications include blood cancers and neglected tropical diseases.

The scientist will be expected to organize and prepare biomedical datasets, train and test AI models, and deploy algorithms in real-world conditions. The job description highlights interaction with medical specialists and product managers, as well as drafting technical documentation. Scientific publications are a priority, with the candidate expected to contribute across the research cycle from experiment planning to peer review.

Spotlab.ai is looking for candidates with experience in areas such as biomedical image processing, computer vision, NLP, video processing, and large language models. Proficiency in Python and deep learning frameworks including TensorFlow, Keras, and PyTorch is required, with GPU programming experience considered an advantage.

Company positions itself in global health AI

Spotlab.ai develops multimodal AI for diagnostics and biopharma research, with projects addressing gaps in hematology, infectious diseases, and neglected tropical diseases. The Madrid-based startup team combines developers, engineers, doctors, and business managers, with an emphasis on gender parity and collaboration across disciplines.

CEO highlights global mission

Alongside the job listing, Luengo-Oroz underscored the company’s broader mission. A former Chief Data Scientist at the United Nations, he has worked on technology strategies in areas ranging from food security to epidemics and conflict prevention. He is also the inventor of MalariaSpot.org, a collective intelligence videogame for malaria diagnosis.

Luengo-Oroz writes: “Take the driver’s seat of our train (not just a minion) at key stages of the journey, designing AI systems and doing science at Champions League level from Madrid.”





YARBROUGH: A semi-intelligent look at artificial intelligence – Rockdale Citizen









Rice University creative writing course introduced Artificial Intelligence, AI



Ian Schimmel teaches the new AI fiction course, which invites writers to incorporate or resist the influence of AI in creative writing. (Courtesy Brandi Smith)

By Abigail Chiu | 9/9/25 10:29pm

Rice is bringing generative artificial intelligence into the creative writing world with this fall’s new course, “ENGL 306: AI Fictions.” Ian Schimmel, an associate teaching professor in the English and creative writing department, said he teaches the course to help students think critically about technology and consider the ways that AI models could be used in the creative processes of fiction writing.

The course is structured for any level of writer and also includes space to both incorporate and resist the influence of AI, according to its description. 

“In this class, we never sit down with ChatGPT and tell it to write us a story and that’s that,” Schimmel wrote in an email to the Thresher. “We don’t use it to speed up the artistic process, either. Instead, we think about how to incorporate it in ways that might expand our thinking.”



Schimmel said he was stunned by the capabilities of ChatGPT when it was initially released in 2022, wondering if it truly possessed the ability to write. He said he found that the topic generated more questions than answers. 

The next logical step, for Schimmel, was to create a course centered on exploring the complexities of AI and fiction writing, with assigned readings ranging from New York Times opinion pieces critical of its usage to an AI-generated poetry collection.  

Schimmel said both students and faculty share concerns about how AI can help or harm academic progress and potentially cripple human creativity.

“Classes that engage students with AI might be some of the best ways to learn about what these systems can and cannot do,” Schimmel wrote. “There are so many things that AI is terrible at and incapable of. Seeing that firsthand is empowering. Whenever it hallucinates, glitches or makes you frustrated, you suddenly remember: ‘Oh right — this is a machine. This is nothing like me.’”

“Fear is intrinsic to anything that shakes industry like AI is doing,” Robert Gray, a Brown College senior, wrote in an email to the Thresher. “I am taking this class so that I can immerse myself in that fear and learn how to navigate these new industrial landscapes.”

The course approaches AI from a fluid perspective that evolves as the class reads and writes more with the technology, Schimmel said, and the class’s answers to the complex ethical questions surrounding AI usage evolve along with it.

“At its core, the technology is fundamentally unethical,” Schimmel wrote. “It was developed and enhanced, without permission, on copyrighted text and personal data and without regard for the environment. So in that failed historical context, the question becomes: what do we do now? Paradoxically, the best way for us to formulate and evidence arguments against this technology might be to get to know it on a deep and personal level.”

Generative AI is often criticized on ethical grounds, such as the energy and water its data centers demand and the training of its models on datasets of existing copyrighted works.

Amazon- and Google-backed Anthropic recently settled a class-action lawsuit with a group of U.S. authors who accused the company of using millions of pirated books to train its Claude chatbot to respond to human prompts.

With the assistance of AI, students will be able to attempt large-scale projects that typically would not be possible within a single semester, according to the course overview. AI will accelerate the writing process for drafting a book outline, and students can “collaborate” with AI to write the opening chapters of a novel for NaNoWriMo, a worldwide writing event held every November in which participants aimed to produce a 50,000-word first draft of a novel.

NaNoWriMo, short for National Novel Writing Month, announced its closing after more than 20 years in spring 2025. It received widespread press coverage for a statement released in 2024 that said condemnation of AI in writing “has classist and ableist undertones.” Many authors spoke out against the perceived endorsement of using generative AI for writing and the implication that disabled writers would require AI to produce work.

Each weekly class involves experimentation in dialogues and writing sessions with ChatGPT, with Schimmel and his students acknowledging how much remains unknown and unexplored in AI, especially in the visual and literary arts. Aspects of AI, from creative copyrights to excessive water usage to its accuracy as an editor, were discussed in one Friday session in the Wiess College classroom.

“We’re always better off when we pay attention to our attention. If there’s a topic (or tech) that creates worry, or upset, or raises difficult questions, then that’s a subject that we should pursue,” Schimmel wrote. “It’s in those undefined, sometimes uncomfortable places where we humans do our best, most important learning.”





