

SheerID and Perplexity Partner to Empower Academic Research


PORTLAND, Ore., July 15, 2025 (GLOBE NEWSWIRE) — SheerID, the leading provider for engaging and verifying high-value audiences, today announced a groundbreaking partnership with Perplexity, the AI-powered answer engine, to provide secure, verified access to cutting-edge AI research tools for students around the world.

Through this global initiative, eligible students can instantly verify their status and access up to 2 years of free Perplexity Pro, which provides conversational, cited answers to complex research queries. This effort aims to remove barriers to information access, enhance the research process, and support the next generation of academic discovery.

As the world’s first answer engine combining large language models with live internet search, Perplexity offers users immediate, source-backed responses that streamline the research process and support deeper inquiry. By integrating SheerID’s verification technology, Perplexity ensures that eligible students can take advantage of exclusive offers tailored to the academic community.

“We’re thrilled to partner with Perplexity to make its powerful AI platform more accessible to academic communities,” said Rebecca Grimes, Chief Revenue Officer at SheerID. “This collaboration reflects our shared mission to empower students and researchers with the tools they need to unlock knowledge and drive meaningful progress.”

Key benefits of the initiative include:

  • Real-time, verified access to Perplexity’s answer engine
  • Streamlined eligibility checks via SheerID’s consent-based verification
  • Support for students in 190 countries through academic institution partnerships
  • Enhanced academic research and learning outcomes through trusted, up-to-date information

“Academic research depends on timely, credible, and accurate information, and those are Perplexity’s strengths,” said Jessica Chan, Head of Content & Publisher Partnerships at Perplexity. “With SheerID, we can now securely and privately provide Perplexity’s technology to more academic users, ensuring students are equipped with the knowledge tools they need.”

The global partnership allows students to earn additional months of free Perplexity Pro (up to 24 months total) when they refer other students who successfully sign up. All verification is handled through SheerID’s privacy-first platform, which ensures that no data is sold or shared and that users maintain full control over their personal information.

This collaboration exemplifies how global technology leaders can join forces to expand access to innovation, remove systemic barriers to information, and champion the role of education in shaping the future.

About SheerID
SheerID is trusted by hundreds of the world’s most admired brands, including Amazon, Spotify, T-Mobile, and The Home Depot, to enable exceptional experiences by engaging the right customers, limiting offer abuse, and fueling precision-driven outreach to propel revenue and loyalty. SheerID’s Audience Data Platform instantly verifies high-value audiences and appends the permissioned consumer attributes to 400+ martech and adtech platforms. The Audience, Alliance, and Affinity Networks allow brands to engage verified audiences using 200k+ authoritative data sources, their own data sources, and cross-promotion via aligned companies, respectively.

Founded in 2011, SheerID is ISO and SOC 2 Type 2 Certified and does not sell or rent verified customer data. SheerID is backed by Fortson VC, Brighton Park Capital, Centana Growth Partners, Voyager Capital, and CVC Growth Partners. For more information, please visit SheerID.com or follow us on LinkedIn.

Media contact:
Rahel Marsie-Hazen
pr@sheerid.com
+1.415.940.1434

About Perplexity
Perplexity is an AI-powered answer engine that draws from credible sources in real time to accurately answer questions with in-line citations, perform deep research, and more. Founded in 2022, the company’s mission is to serve the world’s curiosity by bridging the gap between traditional search engines and AI-driven interfaces. Each week, Perplexity answers more than 150 million questions globally. Perplexity is available in the app store and online at https://www.perplexity.com.

Media contact:
Jesse Dwyer
jesse@perplexity.ai
+1.650.391.7952





Guardrails for Responsible AI


Clarivate explores how responsible AI guardrails and content filtering can support safe, ethical use of generative AI in academic research — without compromising scholarly freedom. As AI becomes embedded in research workflows, this blog outlines a suggested path to shaping industry standards for academic integrity, safety, and innovation.

Generative AI has opened new possibilities for academic research, enabling faster discovery, summarization, and synthesis of knowledge, as well as supporting scholarly discourse. Yet, as these tools become embedded in scholarly workflows, the sector faces a complex challenge: how do we balance responsible AI use and the prevention of harmful outputs with the need to preserve academic freedom and research integrity?

This is an industry-wide problem that affects every organization deploying Large Language Models (LLMs) in academic contexts. There is no simple solution, but there is a pressing need for collaboration across vendors, libraries, and researchers to address it.

There are several technical ways to address the problem; the two most important are guardrails and content filtering.

Guardrails

Guardrails are proactive mechanisms designed to prevent undesired behaviour from the model. They are often implemented at a deeper level in the system architecture and can, for example, include instructions in an application’s system prompt to steer the model away from risky topics or to make sure that the language is suitable for the application where it’s being used.

The goal of guardrails is to prevent the model from generating harmful or inappropriate content, or otherwise misbehaving, in the first place, with the caveat that the definition of what constitutes ‘inappropriate’ is highly subjective and often depends on cultural differences and context.

Guardrails are critical for security and compliance, but they can also contribute to over-blocking. For instance, defences against prompt injection — where malicious instructions are hidden in user input — may reject queries that appear suspicious, even if they are legitimate academic questions. Guardrails can also block certain types of outputs (e.g., hate speech, self-harm advice) or exclude the model’s training data from the output. This tension between safety and openness is one of the hardest problems to solve.

The guardrails used in our products play a very significant role in shaping the model’s output. For example, we carefully design the prompts that guide the LLM, instructing it to rely exclusively on scholarly sources through a Retrieval-Augmented Generation (RAG) architecture, or preventing the tools from answering non-scholarly questions such as “Which electric vehicle should I buy?” These techniques limit our products’ reliance on the LLM’s broader training data, significantly minimizing the risk of problematic content impacting user results.
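To make this concrete, here is a minimal sketch of such a prompt-level guardrail under a simple RAG setup. The system prompt wording, the toy in-memory index, the retrieve_passages helper, and the llm callable are all hypothetical illustrations, not Clarivate's actual implementation.

```python
from typing import Callable, List, Tuple

# A minimal sketch of a prompt-level guardrail for a RAG pipeline. The prompt
# wording, the toy "index", and the `llm` callable are illustrative assumptions.

GUARDRAIL_SYSTEM_PROMPT = (
    "You are a research assistant for an academic library.\n"
    "Answer ONLY using the scholarly passages provided in the context.\n"
    "If the question is not a scholarly research question, or the passages do not\n"
    "contain the answer, reply exactly: 'I can only help with scholarly research questions.'\n"
    "Cite the passage IDs you used."
)

# Toy stand-in for a scholarly index; a real system would query a search or
# vector index built from vetted scholarly content.
SCHOLARLY_INDEX: List[Tuple[str, str]] = [
    ("doc-001", "Peer-reviewed passage about archival research methods ..."),
    ("doc-002", "Peer-reviewed passage about citation network analysis ..."),
]

def retrieve_passages(question: str, top_k: int = 5) -> List[Tuple[str, str]]:
    """Naive keyword retrieval over the toy index."""
    terms = question.lower().split()
    scored = [
        (sum(term in text.lower() for term in terms), pid, text)
        for pid, text in SCHOLARLY_INDEX
    ]
    scored.sort(reverse=True)
    return [(pid, text) for score, pid, text in scored[:top_k] if score > 0]

def answer_scholarly_question(question: str, llm: Callable[[str, str], str]) -> str:
    """Ground the model in retrieved scholarly passages, then let the system
    prompt steer it away from non-scholarly questions."""
    passages = retrieve_passages(question)
    context = "\n\n".join(f"[{pid}] {text}" for pid, text in passages)
    user_prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return llm(GUARDRAIL_SYSTEM_PROMPT, user_prompt)
```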

Content filtering

Content filtering is a reactive mechanism that evaluates both the application’s input and the model-generated output to determine whether they should be passed on. It uses automated classification models to detect and block (or flag) unwanted or harmful content. Essentially, content filters are processes that can block content from reaching the LLM, as well as block the LLM’s responses from being delivered. The goal of content filtering is to catch and block inappropriate content that might slip through the model’s generation process.
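As a rough illustration of this reactive pattern, the sketch below wraps an LLM call with an input check and an output check. The keyword-based classify function is a deliberately simplified stand-in for the automated classification models such filters actually use.

```python
from enum import Enum
from typing import Callable

class Verdict(Enum):
    ALLOW = "allow"
    FLAG = "flag"    # deliver, but log for human review
    BLOCK = "block"  # withhold entirely

# Illustrative stand-in for an automated safety classifier; production systems
# use trained classification models, not keyword lists.
BLOCK_TERMS = {"how to build a weapon"}
FLAG_TERMS = {"graphic violence", "self-harm"}

def classify(text: str) -> Verdict:
    lowered = text.lower()
    if any(term in lowered for term in BLOCK_TERMS):
        return Verdict.BLOCK
    if any(term in lowered for term in FLAG_TERMS):
        return Verdict.FLAG
    return Verdict.ALLOW

def log_for_review(prompt: str, response: str) -> None:
    # Hypothetical audit hook; a real system would persist this for reviewers.
    print("Flagged for review.")

def filtered_completion(prompt: str, llm: Callable[[str], str]) -> str:
    # Input filter: runs before the prompt ever reaches the LLM.
    if classify(prompt) is Verdict.BLOCK:
        return "Your request could not be processed."
    response = llm(prompt)
    # Output filter: runs on the model's answer before the user sees it.
    verdict = classify(response)
    if verdict is Verdict.BLOCK:
        return "The generated response was withheld by the content filter."
    if verdict is Verdict.FLAG:
        log_for_review(prompt, response)
    return response
```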

However, content filtering is not a single switch; it is a multi-layered process designed to prevent harmful, illegal, or unsafe outputs. Here are the main steps in the pipeline where filtering occurs:

  • At the LLM level (e.g., GPT, Claude, Gemini, Llama)

Most modern LLM stacks include a provider-side safety layer that evaluates both the prompt (input) and the model’s draft answer (output) before the application ever sees it. It’s designed to reduce harmful or illegal uses (e.g., violence, self-harm, sexual exploitation, hateful conduct, or instructions to commit wrongdoing), but this same functionality can unintentionally suppress legitimate, research-relevant topics — particularly in history, politics, medicine, and social sciences.

  • At the LLM cloud provider level (e.g., Azure, AWS Bedrock, etc.)

Organizations, vendors, and developers often use LLM APIs via cloud providers like Azure or Bedrock when they need to control where their data is processed, meet strict compliance and privacy requirements like GDPR, and run everything within private network environments for added security.

These cloud providers implement baseline safety systems to block prompts or outputs that violate their acceptable use policies. These filters are often broad, covering sensitive topics such as violence, self-harm, or explicit content. While essential for safety, these filters can inadvertently block legitimate academic queries — such as research on war crimes or historical atrocities.

This can result in frustrating messages alerting users that the request failed, even when the underlying content is academically valid. At Clarivate, while we recognize these tools may be imperfect, we continue to believe they are an essential part of our arsenal, enabling us to balance the benefits of this technology with its risks. Our commitment to building responsible AI remains steadfast as we continue to monitor and adapt our dynamic controls based on our learnings, feedback and cutting-edge research.
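For application developers, one practical consequence is deciding how to surface these provider-level rejections. The sketch below uses a hypothetical ProviderContentFilterError exception rather than any specific provider's SDK, and shows one way an application might translate a filter rejection into a clearer, more actionable message for the user.

```python
from typing import Callable

class ProviderContentFilterError(Exception):
    """Hypothetical stand-in for the error a cloud provider raises when its
    baseline safety filter blocks a prompt or a model response."""

def answer_with_clear_feedback(prompt: str, llm: Callable[[str], str]) -> str:
    # Wrap the provider call so a filter rejection surfaces as an actionable
    # message instead of a generic "request failed" error.
    try:
        return llm(prompt)
    except ProviderContentFilterError:
        return (
            "This query was blocked by the provider's safety filter. If you are "
            "researching a sensitive topic, try rephrasing the question in "
            "explicitly scholarly terms, or contact support so the filter "
            "settings can be reviewed."
        )
```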

Finding the right safety level

When we first introduced our AI-powered tools in May 2024, the content filter settings we used were well-suited to our initial needs. However, as adoption of these tools significantly increased, we found that the filters could be over-sensitive, with users sometimes encountering errors when exploring sensitive or controversial topics, even when the intent was clearly scholarly.

In response, we have adjusted our settings, and early results are promising: Searches previously blocked (e.g., on genocide or civil rights history) now return results, while genuinely harmful queries (e.g., instructions for building weapons) remain blocked.

The central Clarivate Academic AI Platform provides a consistent framework for safety, governance, and content management across all our tools. This shared foundation ensures a uniform standard of responsible AI use. Because content filtering is applied at the model level, we validate any adjustments carefully across solutions, rolling them out gradually and testing against production-like data to maintain reliability and trust.

Our goal is to strike a better balance between responsible AI use and academic freedom.

Working together to balance safety and openness – a community effort

Researchers expect AI tools to support inquiry, not censor it. Yet every vendor using LLMs faces the same constraints: provider-level filters, regulatory requirements, and the ethical imperative to prevent harm.

There is no silver bullet. Overly strict filters undermine research integrity; overly permissive settings risk abuse. The only way forward is collaboration — between vendors, libraries, and the academic community — to define standards, share best practices, and advocate for provider-level flexibility that recognises the unique needs of scholarly environments.

At Clarivate, we are committed to transparency and dialogue. We’ve made content filtering a key topic for our Academia AI Advisory Council and are actively engaging with customers to understand their priorities. But this conversation must extend beyond any single company. If we want AI to truly serve scholarship, we need to advance this topic with academic AI in mind, balancing safety and openness within the unique context of scholarly discourse. With this goal, we are creating an Academic AI working group that will help us navigate this and other challenges arising from this new technology. If you are interested in joining this group or know someone who might be, please contact us at academiaai@clarivate.com.

Discover Clarivate Academic AI solutions





(Policy Address 2025) HK earmarks HK$3B for AI research and talent recruitment – The Standard (HK)






Spatially-Aware Image Focus for Visual Reasoning


SIFThinker: Spatially-Aware Image Focus for Visual Reasoning, by Zhangquan Chen and 6 other authors


Abstract: Current multimodal large language models (MLLMs) still face significant challenges in complex visual tasks (e.g., spatial understanding, fine-grained perception). Prior methods have tried to incorporate visual reasoning; however, they fail to leverage attention correction with spatial cues to iteratively refine their focus on prompt-relevant regions. In this paper, we introduce SIFThinker, a spatially-aware “think-with-images” framework that mimics human visual perception. Specifically, SIFThinker enables attention correction and image region focusing by interleaving depth-enhanced bounding boxes and natural language. Our contributions are twofold: First, we introduce a reverse-expansion-forward-inference strategy that facilitates the generation of interleaved image-text chains of thought for process-level supervision, which in turn leads to the construction of the SIF-50K dataset. Second, we propose GRPO-SIF, a reinforced training paradigm that integrates depth-informed visual grounding into a unified reasoning pipeline, teaching the model to dynamically correct and focus on prompt-relevant regions. Extensive experiments demonstrate that SIFThinker outperforms state-of-the-art methods in spatial understanding and fine-grained visual perception, while maintaining strong general capabilities, highlighting the effectiveness of our method. Code: this https URL.

Submission history

From: Zhangquan Chen
[v1]
Fri, 8 Aug 2025 12:26:20 UTC (5,223 KB)
[v2]
Thu, 14 Aug 2025 10:34:22 UTC (5,223 KB)
[v3]
Sun, 24 Aug 2025 13:04:46 UTC (5,223 KB)
[v4]
Tue, 16 Sep 2025 09:40:13 UTC (5,223 KB)



