AI Research

Deep Research And More From New AI Models


Did you hate these annoying songs? Check out what new deep research AI tools can do.

It’s all well and good to tell people to get on ChatGPT, or Claude, or Gemini, and get their hands dirty using an AI model – but some people want more detail. They want to know how to use the best new tools in a more granular way.

Deep research is one of the newest and most impressive functions of today’s large language models. But most people haven’t heard of it yet, let alone used it.

Ethan Mollick is someone I associate with the vanguard of AI research. A Wharton professor with a PhD from MIT, Mollick is a leading voice in this field. So I was reading his latest blog post about the different models and how to use them.

Mollick breaks down the most useful models from Claude, ChatGPT and Google into the following categories: “chat,” “work” and “hard problems.”

Claude, he notes, is generally better for privacy; in ChatGPT you have to opt into privacy settings, and with Google you may have to give up some functionality to get them.

Then Mollick turns to the deep research capabilities baked into some of the newest models. The feature is so new that it’s still paywalled, and even paying users get a rationed number of queries.

The Promise of Deep Research Functionality

Deep research is available from OpenAI, and from Google, and from Anthropic, too.

If you look back at the unveiling of the research function, you’ll see that most of the coverage clusters around February 2025.

So how do you access this kind of result?

I found a recent post about this in the OpenAI Cookbook from June, where the author walks through installing the Python SDK and calling the feature through the API.
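For readers who would rather script this than use the chat interface, here is a minimal sketch of what that API route can look like with the OpenAI Python SDK. It assumes the Responses API, the “o3-deep-research” model name, and the web-search tool mentioned in the Cookbook; these are assumptions on my part, so check the current OpenAI documentation before relying on any of them.

```python
# Minimal sketch: requesting a deep research report through the OpenAI API.
# Assumptions (verify against current docs): the "o3-deep-research" model name,
# the Responses API, and the "web_search_preview" tool type.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o3-deep-research",  # assumed deep-research model identifier
    input=(
        "Please create a report on the most annoying music "
        "from 1990 until 2025, segmented annually."
    ),
    tools=[{"type": "web_search_preview"}],  # let the model browse the web
)

# Deep research runs can take many minutes; a production script would likely
# submit the request in background mode and poll, but this call blocks until done.
print(response.output_text)  # the compiled report as plain text
```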

It turns out, though, that you can select “deep research” in the prompt section of ChatGPT 4.5, if you are a paying Plus or Pro user, and get your deep research report that way.

Asking for Deep Research from ChatGPT 4.5

I wanted to try this for myself, so, based on what I was experiencing in the moment, I settled on compiling a report of the most annoying songs from 1990 through 2025.

My query: “please create a report on the most annoying music from 1990 until 2025, segmented annually.”

Simple, right?

First, ChatGPT asked me to clarify a few basic questions:

“To create a detailed report on the ‘most annoying music’ from 1990 to 2025, could you please clarify a few things:

  1. What do you mean by ‘annoying’? Are you referring to songs widely criticized, overplayed, polarizing, or based on listener polls or memes?
  2. Should the list be global or focus on a specific region (e.g., U.S., UK, etc.)?
  3. Do you want one song per year, or multiple per year if applicable?
  4. Would you like the report to include explanations or cultural context for each selection?”

And I answered:

“1. all of the above 2. US-based 3. multiple songs per year 4. Yes.”

“Thanks!” GPT responded. “I’ll compile a U.S.-focused report covering the most widely criticized, overplayed, or culturally mocked songs from 1990 to 2025. This will include multiple songs per year, along with context explaining why they were considered annoying—drawing from music critics, listener polls, memes, radio overplay complaints, and public opinion.”

Then the report started. Indeed, it took more than 10 minutes to compile, with chain-of-thought (CoT) displays like “searching ‘Rico Suave’ worst song” and “looking for Macarena.”

Check Out GPT’s Chain of Thought

Here are some of the more interesting CoT notes the system provided while doing research on dozens of sites (it spent a lot of time reading the Guardian!):

I’m mapping out recent headlines and thought pieces, like those from Global News and the Guardian, all noting “Baby Shark” as the most annoying song. Hm, that’s interesting.

I’m preparing the list of most annoying songs from 1993-1995, considering “What’s Up,” “Cotton Eye Joe,” “Macarena,” and “Wonderwall.” However, I’m thinking through the inclusion of other notable tracks like “Rico Suave” for earlier years.

(There’s that ‘Rico Suave’ song again.)

I’m mapping out the hits and misses of 2001-2006, with 2005’s viral sensations like “Crazy Frog” and “Photograph” by Nickelback taking center stage. Hm, that’s interesting—these tracks seem to evoke strong reactions.

(Note how the model is actually ‘taking notes’ while researching, giving us little bits of personality along with the actual meat of the report.)

Here’s a funny comparison:

I’m comparing the impact of Nickelback’s “Rockstar” in 2006 to Soulja Boy’s “Crank That” in 2007. The former was widely criticized, while the latter was loved by many, raising questions about musical taste over time.

A research-based note:

I’m digging through songs by genre and year, considering widespread opinion and specific examples to select the most frequently labeled annoying tracks.

I’m evaluating hits like Meghan Trainor’s “All About That Bass” (2014) and Tyga’s “Stimulated” (2015) for their ubiquity and frequency of being labeled annoying. Progressing through last five years gradually.

And then there was this:

“Noticing the challenge of time limitations, I’m figuring out how to balance inclusion of pivotal examples against the need for conciseness.”

So the model is actually trying to triage its work. This is next-level stuff.

Finally, GPT appeared to wrap up, dotting its i’s and crossing its t’s:

“I’m checking to see if the ‘Achy Breaky’ song contains any references to anthropomorphic sharks.”

Eventually, I got the full report. It runs dozens of pages and provides great coverage of three and a half decades of horrible, terrible music. A few standouts, with years:

1990

  • “Ice Ice Baby” – Vanilla Ice (1990): This hip-hop crossover hit became a pop culture punchline due to its cheesy lyrics and ubiquitous play. While it was initially a chart-topping success, critics later lambasted it; Spinner magazine ranked it the second-worst song in history, and Blender ranked it fifth (en.wikipedia.org). A Houston Press critic even dubbed it the worst song ever by a Texas artist, claiming it “set back the cause of white people in hip-hop by a decade” (en.wikipedia.org). Its overexposure turned the once-popular anthem into an annoyance for many listeners (craftyourhappyplace.com).
  • “Rico Suave” – Gerardo (1990): A bilingual novelty rap track that peaked in 1991, “Rico Suave” was a one-hit wonder often cited on “worst song” lists. Its goofy, exaggerated Latin lover persona and repetitive chorus wore thin quickly. Blender magazine’s infamous list of the “50 Worst Songs Ever” placed “Rico Suave” among its ranks (in the high 30s), and retrospective pop culture forums frequently hold it up as an example of early ’90s pop cringe (culturecrossfire.com, forums.thefashionspot.com). The song’s catchphrase chorus (and its shirtless music video) made it memorable, but also widely mocked in the years since.

1994

  • “Cotton Eye Joe” – Rednex (1994): This techno-hoedown remix of an old folk tune became a staple at sports games and parties – much to many people’s annoyance. With its frenetic fiddle loops and the grating “If it hadn’t been for Cotton Eye Joe” chorus, the track wears thin fast. CBS News put it at #1 on a “Worst Songs from Jock Jams” list, calling it an “unholy hybrid of country-fried fiddle kitsch and relentless hi-NRG drums” (en.wikipedia.org). A Toronto Star critic dubbed it the worst song of the ’90s, describing it as “straight drivel” (en.wikipedia.org). Even wrestler-turned-rocker Chris Jericho said it’s one of his least favorite songs ever – naming it alongside “Mambo No. 5” on his personal worst list (en.wikipedia.org). Despite its party ubiquity, few ever actually wanted an “EDM-hillbilly” mashup like this (en.wikipedia.org).

1997

  • “Barbie Girl” – Aqua (1997): This novelty dance-pop hit was simultaneously a global chart-topper and a critical punching bag. Its bouncy, plastic sound and exaggerated vocals (“Come on Barbie, let’s go party!”) made it a staple of late-’90s radio and kids’ parties, but also an irritant to many. Rolling Stone readers voted it the worst song of the ’90s (en.wikipedia.org), and NME put it atop their worst-single list as well (en.wikipedia.org). While Aqua intended it as satire, the song was widely mocked; it even sparked a lawsuit from Mattel for tarnishing the Barbie brand. Decades later, it remains a guilty pleasure earworm that routinely tops “most annoying” song lists (craftyourhappyplace.com).

Fast-forward a bit:

2013

  • “The Fox (What Does the Fox Say?)” – Ylvis (2013): An absurdist novelty song by a Norwegian comedy duo, “The Fox” unexpectedly became a global viral hit in late 2013. Its EDM-lite beat and litany of animal sounds (ring-ding-ding, wa-pa-pa-pow!) made it the “Macarena” of the YouTube era – funny at first, then torturous on repeat. Kids loved the silliness, while parents and many others quickly tired of it. The song was essentially one long joke about animal noises, and once the joke wore off, what remained was an insanely repetitive chorus of nonsense. Rolling Stone readers later voted it one of the most annoying songs of the 2010s, and USA Today called it “the year’s most aggravating meme song.” Like other meme hits, it got played at school events and even on radio as a curiosity, amplifying the annoyance for those who never wanted to hear “Hatee-hatee-hatee-ho!” again. Ylvis intended it as a parody of EDM pop – ironically achieving exactly the kind of ubiquitous earworm they were spoofing, to the annoyance of millions.
  • “Baby Shark” (global viral hit – charted 2018/2019): {See 2016 above.} In 2018, “Baby Shark” finally chomped its way onto international music charts (hitting the US Top 40 by early 2019) (theguardian.com). This was the year the song truly went pandemic, crossing from kids’ YouTube into mainstream consciousness. By the fall of 2018, articles were calling it “the year’s most annoying song” outright (theguardian.com). Parents everywhere pleaded for mercy from the doo-doo-doos. The Guardian wryly noted how even people who hadn’t heard it knew of it through cultural osmosis – often via tales of toddlers playing it on endless repeat, driving their families mad (theguardian.com). Once it hit the radio and dance remixes appeared, there was no escape. “Baby Shark” in 2018 exemplified how a simple children’s ditty could unite millions in shared, head-throbbing frustration. (Fun fact: that year a viral social post joked that the Geneva Convention should classify repeated “Baby Shark” play as a form of torture – a sentiment many a worn-out babysitter would second.) (globalnews.ca, theguardian.com)

2023

  • “Peaches” – Jack Black (Bowser) (2023): Featured in The Super Mario Bros. Movie, this over-the-top piano ballad sung by Jack Black (in character as Bowser) became a TikTok smash and even entered the Billboard Hot 100. It’s intentionally comedic – Bowser professing love for Princess Peach with a dramatic “Peaches, Peaches, Peaches” chorus. Kids and gamers found it hilarious and catchy; others quickly found it grating (especially if they hadn’t seen the movie context). The chorus is essentially one word (peaches) repeated incessantly with operatic bombast. Within weeks, clips of people annoyed by their friends or children endlessly singing “Peaches” popped up online. Though short, the song’s earworm power is strong. One AV Club article quipped that it’s the kind of novelty song “destined to annoy parents everywhere while their offspring replay it 1,000 times.” It’s a fresh example of how a song can go viral and charm its way into your brain – and then, thanks to sheer overuse, make you never want to hear the word “peach” again (sorry, Georgia).

Some Thoughts on Voice Mode

Let’s get back to the blog post: beyond deep research, Mollick also covers the voice capabilities of the new systems, which really change the interface. As he notes, you can talk to these models while you’re in the car, on a walk, or anywhere you have the space and time.

I think verbal communication is going to finally make LLM responses “real” to us. And that’s all I have to say about that – for now.

I think all of this is deeply informative for people who want to be practical about LLM use. The deep research function in particular is something people will keep learning about as time goes on. How many of us will figure it out before something new comes along and makes it obsolete?

In any case, this was an excellent guide to how to make the most of AI in today’s environment.




AI Research

Will artificial intelligence fuel moral chaos or positive change?


Artificial intelligence is transforming our world at an unprecedented rate, but what does this mean for Christians, morality and human flourishing?

In this episode of “The Inside Story,” Billy Hallowell sits down with The Christian Post’s Brandon Showalter to unpack the promises and perils of AI.

From positives like Bible translation to fears over what’s to come, they explore how believers can apply a biblical worldview to emerging technology, the dangers of becoming “subjects” of machines, and why keeping Christ at the center is the only true safeguard.

Plus, learn about The Christian Post’s upcoming “AI for Humanity” event at Colorado Christian University and how you can join the conversation in person or via livestream.

“The Inside Story” takes you behind the biggest faith, culture and political headlines of the week. In 15 minutes or less, Christian Post staff writers and editors will help you navigate and understand what’s driving each story, the issues at play — and why it all matters.

Listen to more Christian podcasts today on the Edifi app — and be sure to subscribe to The Inside Story on your favorite platforms.




AI Research

BNY and Carnegie Mellon University announce five-year $10 million partnership supporting AI research


The $10 million deal aims to bring students, faculty and staff together alongside BNY experts to advance AI applications and systems to prepare the next generation of leaders.

Known as the BNY AI Lab, the collaboration will focus on technologies and frameworks that can ensure robust governance of mission-critical AI applications.

“As AI drives productivity, unlocks growth and transforms industries, Pittsburgh has cemented its role as a global hub for innovation and talent, reinforcing Pennsylvania’s leadership in shaping the broader AI ecosystem,” comments Robin Vince, CEO at BNY. “Building on BNY’s 150-year legacy in the Commonwealth, we are proud to expand our work with Carnegie Mellon University to help attract world-class talent and pioneer AI research with an impact far beyond the region.”

A dedicated space for the collaboration will be created at the University’s Pittsburgh campus during the 2025-26 academic year.

“AI has emerged as one of the single most important intellectual developments of our time, and it is rapidly expanding into every sector of our economy,” adds Farnam Jahanian, President of Carnegie Mellon. “Carnegie Mellon University is thrilled to collaborate with BNY – a global financial services powerhouse – to responsibly develop and scale emerging AI technologies and democratize their impact for the benefit of industry and society at large.” 

The ETIH Innovation Awards 2026

The EdTech Innovation Hub Awards celebrate excellence in global education technology, with a particular focus on workforce development, AI integration, and innovative learning solutions across all stages of education.

Now open for entries, the ETIH Innovation Awards 2026 recognize the companies, platforms, and individuals driving transformation in the sector, from AI-driven assessment tools and personalized learning systems, to upskilling solutions and digital platforms that connect learners with real-world outcomes.

Submissions are open to organizations across the UK, the Americas, and internationally. Entries should highlight measurable impact, whether in K–12 classrooms, higher education institutions, or lifelong learning settings.

Winners will be announced on 14 January 2026 as part of an online showcase featuring expert commentary on emerging trends and standout innovation. All winners and finalists will also be featured in our first print magazine, to be distributed at BETT 2026.




AI Research

Beyond Refusal — Constructive Safety Alignment for Responsible Language Models


View a PDF of the paper titled Oyster-I: Beyond Refusal — Constructive Safety Alignment for Responsible Language Models, by Ranjie Duan and 26 other authors


Abstract: Large language models (LLMs) typically deploy safety mechanisms to prevent harmful content generation. Most current approaches focus narrowly on risks posed by malicious actors, often framing risks as adversarial events and relying on defensive refusals. However, in real-world settings, risks also come from non-malicious users seeking help while under psychological distress (e.g., self-harm intentions). In such cases, the model’s response can strongly influence the user’s next actions. Simple refusals may lead them to repeat, escalate, or move to unsafe platforms, creating worse outcomes. We introduce Constructive Safety Alignment (CSA), a human-centric paradigm that protects against malicious misuse while actively guiding vulnerable users toward safe and helpful results. Implemented in Oyster-I (Oy1), CSA combines game-theoretic anticipation of user reactions, fine-grained risk boundary discovery, and interpretable reasoning control, turning safety into a trust-building process. Oy1 achieves state-of-the-art safety among open models while retaining high general capabilities. On our Constructive Benchmark, it shows strong constructive engagement, close to GPT-5, and unmatched robustness on the Strata-Sword jailbreak dataset, nearing GPT-o1 levels. By shifting from refusal-first to guidance-first safety, CSA redefines the model-user relationship, aiming for systems that are not just safe, but meaningfully helpful. We release Oy1, code, and the benchmark to support responsible, user-centered AI.

Submission history

From: Ranjie Duan [view email]
[v1]
Tue, 2 Sep 2025 03:04:27 UTC (5,745 KB)
[v2]
Thu, 4 Sep 2025 11:54:06 UTC (5,745 KB)
[v3]
Mon, 8 Sep 2025 15:18:35 UTC (5,746 KB)
[v4]
Fri, 12 Sep 2025 04:23:22 UTC (5,747 KB)


