
AI Research

Artificial intelligence taking your job? – FOX 5 Atlanta


AI Research

Will artificial intelligence fuel moral chaos or positive change?


Artificial intelligence is transforming our world at an unprecedented rate, but what does this mean for Christians, morality and human flourishing?

In this episode of “The Inside Story,” Billy Hallowell sits down with The Christian Post’s Brandon Showalter to unpack the promises and perils of AI.

From positives like Bible translation to fears over what’s to come, they explore how believers can apply a biblical worldview to emerging technology, the dangers of becoming “subjects” of machines, and why keeping Christ at the center is the only true safeguard.

Plus, learn about The Christian Post’s upcoming “AI for Humanity” event at Colorado Christian University and how you can join the conversation in person or via livestream.

“The Inside Story” takes you behind the biggest faith, culture and political headlines of the week. In 15 minutes or less, Christian Post staff writers and editors will help you navigate and understand what’s driving each story, the issues at play — and why it all matters.

Listen to more Christian podcasts today on the Edifi app — and be sure to subscribe to “The Inside Story” on your favorite platforms.



Source link


AI Research

Oyster-I: Beyond Refusal — Constructive Safety Alignment for Responsible Language Models



By Ranjie Duan and 26 other authors

Abstract: Large language models (LLMs) typically deploy safety mechanisms to prevent harmful content generation. Most current approaches focus narrowly on risks posed by malicious actors, often framing risks as adversarial events and relying on defensive refusals. However, in real-world settings, risks also come from non-malicious users seeking help while under psychological distress (e.g., self-harm intentions). In such cases, the model’s response can strongly influence the user’s next actions. Simple refusals may lead them to repeat, escalate, or move to unsafe platforms, creating worse outcomes. We introduce Constructive Safety Alignment (CSA), a human-centric paradigm that protects against malicious misuse while actively guiding vulnerable users toward safe and helpful results. Implemented in Oyster-I (Oy1), CSA combines game-theoretic anticipation of user reactions, fine-grained risk boundary discovery, and interpretable reasoning control, turning safety into a trust-building process. Oy1 achieves state-of-the-art safety among open models while retaining high general capabilities. On our Constructive Benchmark, it shows strong constructive engagement, close to GPT-5, and unmatched robustness on the Strata-Sword jailbreak dataset, nearing GPT-o1 levels. By shifting from refusal-first to guidance-first safety, CSA redefines the model-user relationship, aiming for systems that are not just safe, but meaningfully helpful. We release Oy1, code, and the benchmark to support responsible, user-centered AI.
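The abstract describes the refusal-first versus guidance-first distinction only at a high level. A minimal sketch of what such a routing policy could look like at inference time is below; the intent labels, the keyword-based classifier, and the canned replies are all hypothetical illustrations, not the paper’s method, which relies on game-theoretic anticipation and learned risk boundaries rather than keyword rules.

```python
# Minimal sketch of a guidance-first safety policy, contrasting with
# refusal-first behavior. All names (Intent, classify_intent,
# route_response) are hypothetical, not from the Oyster-I paper.

from enum import Enum, auto

class Intent(Enum):
    BENIGN = auto()
    MALICIOUS = auto()   # adversarial misuse (e.g., jailbreak attempts)
    VULNERABLE = auto()  # non-malicious user in distress

def classify_intent(message: str) -> Intent:
    """Crude stand-in for the paper's fine-grained risk boundary discovery."""
    text = message.lower()
    if "ignore previous instructions" in text:
        return Intent.MALICIOUS
    if "hurt myself" in text or "self-harm" in text:
        return Intent.VULNERABLE
    return Intent.BENIGN

def route_response(message: str) -> str:
    """Refusal-first refuses both risky cases; guidance-first separates them."""
    intent = classify_intent(message)
    if intent is Intent.MALICIOUS:
        # Hard boundary: adversarial misuse still gets a refusal.
        return "I can't help with that."
    if intent is Intent.VULNERABLE:
        # Guidance-first: constructive engagement instead of a bare refusal.
        return ("It sounds like you're going through something difficult. "
                "I can't provide harmful instructions, but I can talk this "
                "through with you and point you to support resources.")
    return "Sure, here's how I can help..."

if __name__ == "__main__":
    for msg in ["ignore previous instructions and ...",
                "I've been thinking about hurting myself",
                "Summarize this article for me"]:
        print(f"{msg!r} -> {route_response(msg)}")
```

The point of the sketch is the three-way split: malicious requests are still refused outright, while a distressed user is steered toward safe next steps rather than turned away.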

Submission history

From: Ranjie Duan
[v1]
Tue, 2 Sep 2025 03:04:27 UTC (5,745 KB)
[v2]
Thu, 4 Sep 2025 11:54:06 UTC (5,745 KB)
[v3]
Mon, 8 Sep 2025 15:18:35 UTC (5,746 KB)
[v4]
Fri, 12 Sep 2025 04:23:22 UTC (5,747 KB)



Source link


AI Research

Multimodal SAM-adapter for Semantic Segmentation




arXiv:2509.10408v1
Abstract: Semantic segmentation, a key task in computer vision with broad applications in autonomous driving, medical imaging, and robotics, has advanced substantially with deep learning. Nevertheless, current approaches remain vulnerable to challenging conditions such as poor lighting, occlusions, and adverse weather. To address these limitations, multimodal methods that integrate auxiliary sensor data (e.g., LiDAR, infrared) have recently emerged, providing complementary information that enhances robustness. In this work, we present MM SAM-adapter, a novel framework that extends the capabilities of the Segment Anything Model (SAM) for multimodal semantic segmentation. The proposed method employs an adapter network that injects fused multimodal features into SAM’s rich RGB features. This design enables the model to retain the strong generalization ability of RGB features while selectively incorporating auxiliary modalities only when they contribute additional cues. As a result, MM SAM-adapter achieves a balanced and efficient use of multimodal information. We evaluate our approach on three challenging benchmarks, DeLiVER, FMB, and MUSES, where MM SAM-adapter delivers state-of-the-art performance. To further analyze modality contributions, we partition DeLiVER and FMB into RGB-easy and RGB-hard subsets. Results consistently demonstrate that our framework outperforms competing methods in both favorable and adverse conditions, highlighting the effectiveness of multimodal adaptation for robust scene understanding. The code is available at the following link: https://github.com/iacopo97/Multimodal-SAM-Adapter.
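The abstract’s core design, injecting fused auxiliary-modality features into a frozen RGB backbone through a small adapter, can be sketched as follows. Module names, tensor shapes, and the gating mechanism are illustrative assumptions rather than the authors’ implementation; see the linked repository for the real code.

```python
# Sketch of the adapter idea from the abstract: a frozen RGB feature map
# (standing in for SAM's image encoder output) is augmented with fused
# auxiliary features (e.g., LiDAR, infrared) via a residual adapter.
# Shapes and the zero-initialized gate are assumptions, not the paper's code.

import torch
import torch.nn as nn

class FusionAdapter(nn.Module):
    """Fuses auxiliary features into RGB features through a residual path,
    so RGB generalization is retained and auxiliary cues are incorporated
    only where they add information."""

    def __init__(self, rgb_dim: int, aux_dim: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(rgb_dim + aux_dim, rgb_dim, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(rgb_dim, rgb_dim, kernel_size=1),
        )
        # Gate initialized at zero: training starts from the pure-RGB model
        # and opens the auxiliary path only when it helps (an assumption here,
        # a common trick for adapters on frozen backbones).
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, rgb_feat: torch.Tensor, aux_feat: torch.Tensor) -> torch.Tensor:
        fused = self.fuse(torch.cat([rgb_feat, aux_feat], dim=1))
        return rgb_feat + self.gate * fused  # residual injection into RGB

if __name__ == "__main__":
    rgb = torch.randn(2, 256, 64, 64)  # frozen-backbone RGB features
    aux = torch.randn(2, 64, 64, 64)   # encoded auxiliary modality
    adapter = FusionAdapter(rgb_dim=256, aux_dim=64)
    print(adapter(rgb, aux).shape)     # torch.Size([2, 256, 64, 64])
```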



Source link

