
Tools & Platforms

AI in music: K-hip-hop group Deux to make comeback as Lee Hyun Do uses technology to bring back late Kim Sung Jae’s voice

TRIGGER WARNING: This article contains mention of death

Deux were at one time among the most influential and popular groups in the Korean music industry, focusing mainly on hip-hop. The group quickly garnered widespread public praise and sustained a successful career over the years. Now, after years of inactivity, the group appears to be making a comeback, this time with the help of artificial intelligence!

Lee Hyun Do to use AI for late Kim Sung Jae’s voice

As per reports from Yonhap News, Deux member Lee Hyun Do is set to release a new project toward the end of this year. It will mark the group’s fourth official studio album; what makes the release even more surprising is that Hyun Do reportedly plans to use artificial intelligence (AI) technology to revive the voice of his late bandmate, Kim Sung Jae. As of now, the singer is looking for agencies and companies to back him in releasing new songs by the end of the year, which will also mark the 30th anniversary of Kim Sung Jae’s death. According to the reports, Hyun Do wants to extract the late singer’s voice from existing songs and combine it with his own to create new music. Deux debuted in 1993 and went on to release some of the era’s most popular songs, still remembered to date, including ‘In the Summer’, ‘Go!Go!Go!’, ‘Look Back at Me’, and ‘We’.

What happened to Kim Sung Jae

On November 20, 1995, the idol was found dead in a hotel room. The circumstances surrounding his death were suspicious, prompting a police investigation. A romantic interest, identified only as ‘Miss A’, was found guilty and sentenced to life in prison. Years later, in 2019, however, she was acquitted by the court.








Tech Giants Push Policy Power

A group of tech leaders and artificial intelligence companies announced the creation of Leading the Future (LTF), a new organization designed to, in its words, “ensure the United States remains the global leader in AI by advancing a clear, high-level policy agenda at the federal and state levels and serving as the political and policy center of gravity for the AI industry.” The industry is no longer content to shape policy through think tanks, white papers, and voluntary commitments. It is building a political influence infrastructure.

Who Is Behind LTF

The coalition includes powerful venture capital firms like Andreessen Horowitz, investors such as Ron Conway (one of Silicon Valley’s super angels with early investments in Facebook, Google, Airbnb and Reddit), Joe Lonsdale (Palantir cofounder and an early executive at Clarium Capital, Peter Thiel’s hedge fund), Greg Brockman (OpenAI cofounder and current president) and his wife Anna Brockman. Though the announcement is short on specific names, it indicates participation from leading firms, including Perplexity.

Their motivations are clear: promote policies that advance the economic benefits of the technology, and oppose efforts seen as limiting or delaying its development in the US. They frame the stakes in AI as not only commercial but also geopolitical. With Washington and Beijing locked in a struggle over compute power, export controls, and data supply chains, tech leaders want a direct line into state capitals and the halls of Congress.

Earlier lobbying by the internet sector focused on shaping policy through public campaigns, with companies portraying themselves as defenders of users, internet freedom or innovation, and often leaning on trade associations. By contrast, LTF brands itself as an independent political entity. The initiative is a well-funded, centralized advocacy effort positioned to shape the future direction of tech policy in the country. It resembles historical efforts in business, food, tobacco, pharma, and other sectors, which used well-coordinated lobbying and electioneering to secure favorable outcomes.

Lessons from Web 2.0

This is not the first time Silicon Valley has built influence in Washington. In the late 2000s, as regulators debated privacy, antitrust, and liability protections, internet companies expanded their lobbying spend. Google went from negligible activity in the early 2000s to being among the top corporate lobbyists by the early 2010s. Facebook followed suit, building networks of state and federal lobbyists while fighting attempts to tighten rules on data collection.

Those efforts were defensive, aimed at forestalling oversight that might slow growth. Silicon Valley’s attitude toward Washington during Web 2.0 was generally one of avoidance, reflecting tech leaders’ preference for minimal governance and free-market growth. Most companies neglected formal lobbying until faced with scrutiny, potential regulation or a crisis. The relationship was characterized by mutual unfamiliarity: many in DC underestimated the tech sector’s potential impact on policy, while tech companies believed they could bypass government oversight by focusing solely on innovation.

By contrast, LTF presents itself as offensive: it wants to shape an affirmative agenda and frame the policy debate itself.

Regulatory Capture and AI

Economists and legal scholars have long warned about the dangers of industries capturing the agencies tasked with regulating them. George Stigler, in his seminal 1971 essay The Theory of Economic Regulation, argued that “as a rule, regulation is acquired by the industry and is designed and operated primarily for its benefit”. He introduced the concept of regulatory capture and shifted the understanding of regulation from the public interest model to a rational business choice. One of his insights was that companies often prefer regulatory control over subsidies. Rules that restrict entry, shape market structure, or favor complements can create more lasting advantage than direct government handouts.

Stephen Breyer, writing in Regulation and Its Reform (1982), documented the recurring pattern of regulatory failure in America: high costs, low returns, procedural gridlock, and unpredictability. Cass Sunstein added a twist in his 1990 essay Paradoxes of the Regulatory State: sometimes well-intentioned regulation backfires, producing the opposite of its intended effect.

Silicon Valley Bank’s 2023 collapse, then the second-largest bank failure in U.S. history, resulted from risky management, overinvestment in long-term bonds that lost value as rates rose, and a rapid $42 billion bank run. The crisis is an example of how regulatory capture and policy changes, like the post-2018 rollback of Dodd-Frank provisions, can backfire: oversight was delayed and insufficient under the weakened rules, compounded by procedural gridlock and unpredictability.

These perspectives suggest that as AI evolves, the risk is not just over- or under-regulation, but that industry itself will be the architect of the rules. AI offers fertile ground for capture. The technology is complex, opaque, and evolving quickly. Regulators often lack the expertise or resources to challenge the claims of leading labs. This creates an asymmetry: the firms that dominate model training are also those most capable of defining the safety benchmarks, compliance metrics, and standards of responsible AI.

Money in Politics Today

The timing of LTF’s launch is no accident. The Supreme Court’s Citizens United decision in 2010 opened the door to unlimited corporate spending on political speech through Super PACs and 501(c)(4) “social welfare” groups. These entities can raise and spend vast sums, often with limited transparency. Tech leaders are familiar with these vehicles, and crypto companies have used them aggressively in the 2024 election cycle.

By creating LTF as a political hub, the sector signals it intends to play at the same level as defense contractors, pharmaceutical giants, and oil companies. The group can funnel money into congressional races, shape ballot initiatives, and build permanent influence networks. And because AI touches multiple policy domains—national security, labor, education, healthcare—the scope of lobbying is potentially broader than any prior technology sector campaign.

The sums at stake are enormous. Training frontier models requires billions of dollars in chips and energy. Securing government contracts for AI in defense, intelligence, and healthcare could yield recurring revenue streams. In this context, spending hundreds of millions on political influence is rational, and perhaps necessary, for firms seeking to entrench their market position.

Possible Futures for AI Policy

The creation of LTF raises the question: Is AI governance going to follow a pattern of capture, or can policymakers create structures to resist it?

On one path, industry sets the rules. Companies use their clout to define the pathways that align with their business models. They shape federal preemption laws that limit state experimentation. They fund think tanks and university programs that validate their frameworks. This would mirror what Stigler described as the normal course of regulation: industries acquiring and shaping the state’s coercive power for their own benefit.

On another path, policymakers build more resilient institutions. Breyer’s framework suggests starting with clear objectives, examining alternative methods, and choosing the least intrusive regulatory form. Sunstein warns against paradoxes, where well-meaning but rigid rules lead to enforcement paralysis. Applied to AI, this means balancing innovation with safeguards, ensuring that agencies have the expertise to evaluate claims, and creating accountability mechanisms that cannot be dominated by a handful of firms.

Will AI policy become another case study in capture or a demonstration that democratic institutions can adapt to a general-purpose technology? From railroads to telecoms to energy, industries with concentrated wealth and technical expertise have usually succeeded in bending rules to their favor. But AI also raises existential concerns, from misinformation to labor disruption to military use, that broaden the coalition demanding oversight.

The launch of Leading the Future formalizes what had been implicit: AI is not just a technological race but also a contest over policy and influence. The outcome will depend on whether policymakers heed the lessons of Breyer, Stigler, and Sunstein or repeat the familiar cycle of regulation designed by and for the regulated.

Money will play a decisive role, as it always has in American politics. But the stakes in AI are larger than market share.





New office to lead AI, tech integration across all campuses


As Artificial Intelligence (AI) transforms higher education, the University of Hawaiʻi is launching a new systemwide office to meet the challenge and establish itself as a national leader. The UH Office of Academic Technology and Innovation (OATI) will guide the integration of emerging technologies and AI across all 10 campuses, serving as the hub for strategy, implementation and oversight in teaching, learning and operations.

Housed within the Office of the UH President, the office will be overseen by Ina Wanca, the UH Chief Academic Technology Innovation Officer. Wanca will work closely with campus leaders, ITS and the Institutional Research and Analysis Office and serve as the primary liaison between academic leadership and ITS.

OATI will support the consolidation and alignment of academic technology, advance AI adoption and transformative initiatives across the system and establish governance frameworks to ensure the responsible, ethical and equitable use of technology.

“The Office of Academic Technology and Innovation is a critical step forward in ensuring UH is not just adapting to emerging technologies but leading their thoughtful and strategic integration,” said UH President Wendy Hensel. “This office will help us realize the full potential of AI and academic innovation to support student success, faculty excellence, and operational efficiency.”

With AI adoption moving at different paces across UH’s 10 campuses, OATI will create a single framework ensuring all investments, tools, and innovations drive a common vision for teaching, learning, and research.

“This new office turns that shared vision into reality,” said Ina Wanca. “By ensuring equal access to modern tools, building AI literacy for students and faculty and linking innovation to workforce readiness, we will prepare Hawaiʻi’s learners and educators to thrive in the AI era while honoring the values that define our university system.”

OATI will also support the AI Planning Group announced June 25 in developing a university-wide AI strategy aligned with institutional goals.

“With the AI Planning Group and OATI working together, we can align priorities across all campuses and move quickly from ideas to implementation,” said Kim Siegenthaler, Senior Advisor to the President.

The office will also help lead implementation of the $7.4 million, five-year subscription to EAB Navigate360 and EAB Edify, approved by the UH Board of Regents on June 16. The platforms use predictive analytics to alert faculty, advisors, and support staff at the earliest sign a student may be at risk. The systems have proven successful in closing student achievement gaps and improving retention and graduation rates.





We have let down teens if we ban social media but embrace AI

If you are in your 70s, you didn’t fight in the second world war. Such a statement should be uncontroversial, given that even the oldest septuagenarian today was born after the war ended. But there remains a cultural association between this age group and the era of Vera Lynn and the Blitz.

A similar category error exists when we think about parents and technology. Society seems to have agreed that social media and the internet are unknowable mysteries to parents, so the state must step in to protect children from the tech giants, with Australia releasing details of an imminent ban. Yet the parents of today’s teenagers are increasingly millennial digital natives. Somehow, we have decided that people who grew up using MySpace or Habbo Hotel are today unable to navigate how their children use TikTok or Fortnite.

Simple tools to restrict children’s access to the internet already exist, from adjusting router settings to requiring parental permission to install smartphone apps, but the consensus among politicians seems to be that these require a PhD in electrical engineering, leading to blanket illiberal restrictions. If you customised your Facebook page while at university, you should be able to tweak a few settings. So, rather than asking everyone to verify their age and identify themselves online, why can’t we trust parents to, well, parent?



Failing to keep up with generational shifts could also result in wider problems. As with the pensioners we’ve bumped from serving in Vietnam to storming Normandy, there is a danger in focusing on the wrong war. While politicians crack down on social media, they rush to embrace AI built on large language models, and yet it is this technology that will have the largest effect on today’s teens, not least as teachers wonder how they will be able to set ChatGPT-proof homework.

Rather than simply banning things, we need to be encouraging open conversations about social media, AI and any future technologies, both across society and within families.



