Tools & Platforms
How do we unleash lightning in a bottle?

For months, a fierce debate has been unfolding between lawmakers and tech leaders over how — or even whether — to regulate artificial intelligence.
Tensions spiked when U.S. senators stripped a controversial provision from President Donald Trump’s “big, beautiful bill” that would have blocked states from regulating AI for the next decade. The move revealed sharp divides within the tech industry and within both political parties over who should hold the reins on this powerful technology.
J.D. Vance speaks at AI Action Summit in Paris
In February, Vice President J.D. Vance addressed business and world leaders at the Artificial Intelligence Action Summit in Paris, where he outlined the Trump administration’s approach to AI regulation.
“We believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off,” Vance said. “This doesn’t mean, of course, that all concerns about safety go out the window, but focus matters, and we must focus now on the opportunity.”
AI leaders answer questions at U.S. Senate Commerce Committee
In May, leaders from OpenAI (the maker of ChatGPT), CoreWeave, AMD, and Microsoft answered questions at a U.S. Senate Commerce Committee hearing focused on how regulation could affect American AI competitiveness with tech markets in China and the European Union. While each company leaned toward lighter regulation of AI, their thresholds for that regulation differed.
OpenAI CEO Sam Altman said he understands companies need guardrails when developing and deploying AI solutions, but too much regulation could stifle growth.
“I have the great honor to be one of the many parents of the AI revolution, and I think it is no accident that that’s happening in America again and again and again, but we need to make sure that we build our systems and that we set our policy in a way where that continues to happen,” Altman said. “Of course, there will be rules. Of course, there need to be some guardrails. This is a very impactful technology, but we need to be able to be competitive globally.”
CoreWeave CEO Michael Intrator testified that a patchwork of regulatory overlays will cause friction.
“And the idea that you can make an investment that could then become trapped in a jurisdiction that has a particular type of regulation that would not allow you to make full use of it is really very, very suboptimal and makes the decision-making around infrastructure challenging,” Intrator said.
Microsoft president weighs in on AI debate
At the hearing, Microsoft President Brad Smith outlined a more balanced approach to running the AI race.
“It is a race that no company or country can win by itself,” he said. “To win the AI race, the United States will need to support the private sector at every layer of the AI tech stack. The nation will need to partner with American allies and friends around the world.”
Asked whether federal legislation should bar U.S. states from regulating AI, Brad Smith told KIRO Newsradio in a one-on-one interview, “States have long played a critical role in, say, protecting children, protecting consumers, and it would be a mistake, in our view, if federal legislation were to preclude their ability to do that, especially under laws of long standing.”
Many congressional Republicans who supported Trump’s proposed regulation moratorium said it would not only prevent a patchwork of rules and regulations, but also ensure American tech companies could compete with recent Chinese breakthroughs in generative AI, such as the MiniMax platform, which specializes in turning text into video, and DeepSeek, a more cost-effective alternative to leading American models like OpenAI’s GPT-4. AI has scaled rapidly in China thanks to a mix of technological optimism and flexible regulations that shift to keep pace with the U.S. and the European Union.
Congressman Adam Smith calls for regulation
Many Democrats, including Congressman Adam Smith, argue that both states and the federal government need AI regulations. Smith told KIRO Newsradio the U.S. should also push for smart regulations worldwide.
“I’m particularly worried about the Trump approach of sort of, basically, America is going to operate on its own and do our own thing,” Adam Smith said. “Well, the rest of the world is going to do their own thing, and then chaos is the likely result in a whole bunch of different areas.”
Vance: ‘We must focus now on the opportunity to catch lightning in a bottle’
So, what is the right answer for regulating AI?
“Focus matters, and we must focus now on the opportunity to catch lightning in a bottle, unleash our most brilliant innovators, and use AI to improve the well-being of our nations and their peoples,” Vice President Vance said in Paris.
Using that analogy, strict regulation could be like keeping the lightning locked tight inside the bottle. Lighter regulation could mean letting out a little of that lightning at a time, but not enough to burn down the house. And complete deregulation could be like setting the lightning loose and hoping it doesn’t torch everything around us.
Adam Smith said the debate won’t be over anytime soon.
“I’m sure that what we saw the last few weeks was the opening chapter of what will probably become a book of debate and law, and regulation, and there’s a number of chapters still to be written,” Adam Smith said.
Tools & Platforms
OpenAI to Launch AI-Powered Jobs Platform — Campus Technology
OpenAI will launch an AI-powered hiring platform by mid-2026, directly competing with LinkedIn and Indeed in the professional networking and recruitment space. The company announced the initiative alongside an expanded certification program designed to verify AI skills for job seekers.
The OpenAI Jobs Platform will use artificial intelligence algorithms to match candidates with employers based on demonstrated AI competencies rather than traditional resume keywords. The platform targets businesses seeking workers proficient in automation, prompt engineering, and AI implementation across various industries.
OpenAI is collaborating with major employers, including Walmart and Boston Consulting Group, to develop the platform’s functionality. Walmart, the largest private employer in the United States with 1.6 million workers, will initially provide free certification access to all U.S. employees.
The Texas Association of Business plans to use the platform to connect local employers with candidates capable of supporting IT modernization projects, according to OpenAI’s announcement.
The company is expanding its OpenAI Academy, a free learning platform that has reached over two million users, to offer formal AI certifications. The program will cover skills ranging from basic workplace AI applications to advanced prompt engineering techniques.
Training and certification testing will occur within ChatGPT’s Study Mode, allowing candidates to prepare and complete credentials without leaving the application. OpenAI aims to certify 10 million Americans by 2030.
The initiative positions OpenAI against established players in the professional networking market. LinkedIn maintains over one billion members globally, while Indeed processes 27 hires per minute with 615 million registered job seekers.
The platform also competes with LinkedIn Learning’s educational offerings, potentially creating tension with Microsoft, OpenAI’s primary investor with a reported $13 billion stake. Microsoft has previously identified OpenAI as a competitor in specific business segments despite their partnership.
Labor market data support OpenAI’s focus on AI competencies. Research by Lightcast analyzing over one billion job postings found that positions requiring AI skills offer salaries averaging 28% higher than comparable roles without such requirements. Jobs demanding multiple AI skills command premiums up to 43% above standard compensation levels.
The demand spans industries as companies integrate artificial intelligence into operations for task automation, data analysis, and product development. Employers increasingly seek workers capable of practical AI application rather than advanced technical programming skills.
The platform will allow employers to describe requirements in natural language, with AI systems identifying candidates who demonstrate relevant capabilities through portfolio work and practical experience. This approach differs from traditional keyword-based matching systems used by existing job platforms.
OpenAI’s system aims to surface candidates based on actual project experience and demonstrated competencies rather than resume optimization techniques commonly used on current platforms, the company said.
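OpenAI has not published technical details of this matching, but natural-language matching of this kind is commonly implemented with text embeddings scored by semantic similarity rather than keyword overlap. The following is a minimal sketch of that general approach, not OpenAI’s actual system; the model name, job description, and candidate profiles are illustrative placeholders.

```python
# Minimal sketch of embedding-based candidate matching (illustrative only;
# OpenAI has not disclosed how its Jobs Platform works). Model and data
# below are placeholders, not anything from the announcement.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# An employer describes the role in natural language...
job_description = (
    "We need someone who can automate reporting workflows with LLM "
    "prompts and integrate AI tools into our retail operations."
)

# ...and candidates are represented by demonstrated work, not keywords.
candidate_profiles = {
    "cand_a": "Built a prompt library that automates weekly sales reports.",
    "cand_b": "Ten years of Java experience; resume lists 'AI' as a skill.",
    "cand_c": "Deployed a chatbot that answers store-inventory questions.",
}

job_vec = model.encode(job_description, convert_to_tensor=True)
for cid, profile in candidate_profiles.items():
    cand_vec = model.encode(profile, convert_to_tensor=True)
    score = util.cos_sim(job_vec, cand_vec).item()  # cosine similarity in [-1, 1]
    print(f"{cid}: {score:.3f}")
```

Because similarity is computed over meaning rather than exact terms, a candidate who describes relevant project work scores higher than one who merely lists the right keywords, which is the contrast OpenAI is drawing with existing platforms.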
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He’s been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he’s written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
Tools & Platforms
Grand Forks believes in the future of AI and technology, Mayor Bochenski says at AI and Autonomy Summit – Grand Forks Herald

GRAND FORKS — Grand Forks believes in the future of artificial intelligence and technology and is working to grow in those areas, Mayor Brandon Bochenski told an audience brought together to discuss AI and autonomous systems in Grand Forks and the state.
“We’re standing on the great work that’s happened before us, and just trying to enhance it and make it better,” Bochenski said. “There’s 10, 20 years of work that goes into Grand Forks being put on the map. I’m just grateful that we’re on the map today.”
His sentiments were echoed by others attending the summit, one of the Innovation, Workforce and Research Conferences put on by IEEE-USA, a technical professional organization. The event was held Wednesday, Sept. 10, at the University of North Dakota Memorial Union. Between discussions on state innovation, education, workforce, networking and investing in AI and autonomy, leaders in those fields spoke about the technologies’ presence at UND, in Grand Forks and across North Dakota.
Scott Snyder, vice president for research and economic development at UND, mirrored Bochenski’s statement on the decades of work put into the community. UND has been on the “absolute cutting edge of uncrewed and autonomous technologies and systems for well over two decades,” he said. The university also has multiple private and university partners, as well as partnerships with the Department of Defense, National Aeronautics and Space Administration (NASA), Department of Homeland Security, Federal Aviation Administration and other federal government entities.
“UND is at the center of one of the most vibrant environments for the development and deployment of autonomous systems around the world,” Snyder said.
An example of engagement between UND and the federal government was a discussion between UND President Andrew Armacost and Phillip Smith, the program manager for the Tactical Technology Office at DARPA (Defense Advanced Research Projects Agency).
Smith admitted he doesn’t like the word “autonomy,” which he believes has become jargon, like “cyber” and “synergy,” that people use but don’t actually understand. Breaking down the subsections of autonomy and informing people is important, he said. When Armacost asked for his definition of autonomy, Smith said, “software that people don’t understand.”
“It is just an algorithm that cannot be explained to people until we get to general AI,” Smith said. “Humans actually don’t understand what is happening. … Machines are supposed to be serving humans, and humans don’t even know what they want, so that’s a really hard thing.”
Smith said DARPA is working with GrandSky, testing drones that will be able to find a ship at sea, and then orient and land on it without human connection. GrandSky is an aviation business park west of Grand Forks that specifically focuses on the UAS industry.
“That’s the program that we have out here in North Dakota testing, and it’s been really fun,” he said.
Armacost said each person in the room has the opportunity to engage with DARPA, including industry partners, university partners and others.
“They have a large number of avenues that they use to cooperate with their work on technology development,” he said.
The summit itself was the product of UND leaders interacting with IEEE-USA and having an interest in showcasing what the region is doing. Mark Askelson, vice president for research-national security, said he was at a Sioux Falls, South Dakota, summit with Ryan Adams, the dean of engineering and mines, and spoke with some IEEE-USA staff about possibly holding an event in Grand Forks. Askelson said it’s an opportunity to show what the region is doing to more people who don’t know about it. It also is helping forge new connections.
“Despite the fact that we are nationally recognized, I would argue in some of these areas, there’s still a lot of people that don’t know about us,” he said. “They don’t understand some of the things that we do, so that is a great opportunity to bring those people here so they can see us. And, in my experience, once we can get somebody on the ground to see what we have going on, the light bulb goes on for them. That creates more opportunity for us to work with them and for us to innovate.”
Tools & Platforms
From Deepfakes To Chatbots, Terrorists Embrace AI To Spread Propaganda – Eurasia Review

Realistic-looking news anchors delivering propaganda, popular television characters singing terrorist battle songs, online chatbots that tailor their responses to a user’s interests — these are all ways terrorist groups are using artificial intelligence (AI) to spread their message and recruit.
As AI technologies have spread across the internet, they have become tools that terrorist groups such as the Islamic State group (IS) and al-Qaida use to reach out to young people in Africa and elsewhere who have grown up with the internet and get their information from social media.
Cloaking terrorist propaganda in authentic-looking content helps get the messages past social media moderators, according to Daniel Siegel, who researches digital propaganda at Columbia University’s School of International and Public Affairs.
“By embedding extremist narratives within content that mimics the tone and style of popular entertainment, these videos navigate past the usual scrutiny applied to such messages, making the ideology more accessible and attractive to a wider audience,” Siegel wrote in an analysis for the Global Network on Extremism and Technology.
Although the content often is designed to be funny, it also exploits viewers’ love for the characters to lure them into consuming more of the content without realizing they’re being indoctrinated, Siegel wrote.
“Deepfakes,” AI-generated audio, images and video that look and sound real, are making it nearly impossible to tell fact from fiction, experts say. That undermines faith in legitimate media organizations and government institutions alike, according to researcher Lidia Bernd.
“Imagine a deepfake video depicting a political leader declaring war or a religious figure calling for violence,” Bernd wrote recently in the Georgetown Security Studies Review. “The potential for chaos and violence spurred by such content is enormous.”
Terrorist groups already use AI to create hyper-realistic fake content such as scenes of injured children or fabricated attacks designed to stoke viewers’ emotions.
By hiding terrorists’ actual human propagandists behind deepfake technology, AI undercuts facial recognition tools and hamstrings counterterrorism efforts, analyst Soumya Awasthi wrote recently for the Observer Research Foundation.
At least one group affiliated with al-Qaida has offered workshops on using AI to develop visual propaganda and a how-to guide for using chatbots to radicalize potential recruits. Chatbots and other AI technology also can generate computer code for cyberattacks, plan physical attacks and raise money through cryptocurrency.
Terrorist groups use AI to quickly produce propaganda content using video footage captured by drones on the battlefield. Those fake news videos can mirror the look of legitimate news operations such as Al Jazeera or CNN. AI-generated anchors can be tailored to resemble people from geographic regions or ethnic groups terrorists are targeting for recruitment.
IS uses such AI content as part of its “News Harvest” propaganda broadcasts. AI text-to-speech technology turns written scripts into human-sounding audio.
Counterterrorism experts say governments and social media companies need to do more to detect AI-generated content like that being created by IS and al-Qaida.
For social media companies, that can mean boosting open-source intelligence to keep up with terrorism trends. For AI companies, that can mean working with social media and government authorities to refine methods of detecting and blocking malicious use of their technology.
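Neither the platforms nor the researchers cited here describe a specific detection pipeline, but one common pattern is to score uploads with a synthetic-media classifier and route high-scoring items to human review rather than removing them automatically. The sketch below is a hypothetical illustration of that pattern; the stub detector and threshold are placeholders, not any real product’s API.

```python
# Generic triage pattern for likely AI-generated media in a moderation
# pipeline. Hypothetical sketch: the stub below stands in for whatever
# synthetic-media classifier a platform actually deploys.
from typing import Callable

def triage(score_fn: Callable[[bytes], float], media: bytes,
           review_threshold: float = 0.7) -> tuple[float, bool]:
    """Return (synthetic_score, needs_human_review)."""
    score = score_fn(media)
    # Flag for review rather than auto-remove: false positives would also
    # suppress legitimate footage, worsening the trust problem deepfakes create.
    return score, score >= review_threshold

def stub_detector(media: bytes) -> float:
    """Placeholder classifier; a real system would run a trained model."""
    return 0.9  # pretend the model is confident this clip is synthetic

score, flagged = triage(stub_detector, b"...video bytes...")
print(f"synthetic score={score:.2f}, human review={flagged}")
```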
Terrorists’ use of AI is not without limits. Some members of Islamist terror groups object to depicting human faces in AI-generated images, forcing creators to obscure the faces and lessening the videos’ impact.
Terrorists also fear having AI turned against them.
Groups affiliated with al-Qaida have warned their members that security forces could use AI-generated audio to give fake commands to followers or otherwise to sow confusion and disrupt terrorist operations.
According to HumAngle, one such warning went out in Arabic via the Telegram messaging app to members of the Boko Haram splinter group Jama’atu Ansarul Muslimina fi Biladis Sudan in Nigeria.
The message, according to HumAngle, said: “New technologies have made it possible to create voices. Although they are yet to be as sophisticated as natural voices, they are getting better and can be used against Jihadists.”