Connect with us

AI Insights

Gemini can now turn your photos into video with Veo 3

Published

on



Google’s Veo 3 videos have propagated across the Internet since the model’s debut in May, blurring the line between truth and fiction. Now, it’s getting even easier to create these AI videos. The Gemini app is gaining photo-to-video generation, allowing you to upload a photo and turn it into a video. You don’t have to pay anything extra for these Veo 3 videos, but the feature is only available to subscribers of Google’s Pro and Ultra AI plans.

When Veo 3 launched, it could conjure up a video based only on your description, complete with speech, music, and background audio. This has made Google’s new AI videos staggeringly realistic—it’s actually getting hard to identify AI videos at a glance. Using a reference photo makes it easier to get the look you want without tediously describing every aspect. This was an option in Google’s Flow AI tool for filmmakers, but now it’s in the Gemini app and web interface.

To create a video from a photo, you have to select “Video” from the Gemini toolbar. Once this feature is available, you can then add your image and prompt, including audio and dialogue. Generating the video takes several minutes—this process takes a lot of computation, which is why video output is still quite limited.



Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

AI Insights

These fields could see job cuts because of artificial intelligence, federal data says

Published

on


Artificial intelligence has some excited and others scared, as the rapidly evolving technology impacts the job market.

Lucas Shriver is working hard at LEMA in St. Paul. A solar-powered battery station can now be used as a power source in a desert. It’s a project and a job that’s been a long time coming.

“I think I was about 7 years old when I built a tree house by myself,” Shriver said. 

He earned his engineering degree from the University of St. Thomas in June. As a full-time employee, he is one of the lucky ones.

“In my own searching for jobs and my friends, the job market right now is quite difficult, and it does seem like people are looking for someone with five years of experience,” Shriver said.

His professor, John Abraham, agrees.

“The jobs at the bottom rung of a ladder for people to climb up to a corporation. Those are going away in the last two years,” Abraham said. “There’s 35% fewer entry-level, you’re a recent college graduate and you’re looking for a job, you’re up a creek, you’re up a creek.”

Federal data suggests three fields that will feel potential cuts because of AI: Insurance adjusting, credit analysis and paralegals. The data also suggests growth could come in the software, personal finance and engineering fields. 

For job seekers of any age or field, Abraham suggests learning how to use artificial intelligence.

“This is a tool that increases effectiveness so much, you just have to know it if you’re going to compete,” he said.

And Shriver has the job to prove it.

“I have no idea where this is going, but as for today, I am gonna use AI,” he said.

Abraham says jobs with empathy, like counseling and health care may be safer from AI; he also says the trades will likely still be in demand.



Source link

Continue Reading

AI Insights

Good governance holds the key to successful AI innovation

Published

on


Organizations often balk at governance as an obstacle to innovation. But in the fast-moving world of artificial intelligence (AI), a proper governance strategy is crucial to driving momentum, including building trust in the technology and delivering use cases at scale.

Building trust in AI, in particular, is a major hurdle for AI adoption and successful business outcomes. Employees are concerned about AI’s impact on their job, and the risk management team worries about safe and accurate use of AI. At the same time, customers are hesitant about how their personal data is being leveraged. Robust governance strategies help address these trust issues while laying the groundwork for standardized processes and frameworks that support AI use at scale. Governance is also essential to compliance — an imperative for companies in highly regulated industries such as financial services and healthcare.

“Done right, governance isn’t putting on the brakes as it’s often preconceived,” says Camilla Austerberry, director at KPMG and co-lead of the Trusted AI capability, which helps organizations accelerate AI adoption and safe scaling through the implementation of effective governance and controls across the AI life cycle. “Governance can actually be a launchpad, clearing the path for faster, safer, and more scalable innovation.”

Best practices for robust AI governance

Despite its role as a crucial AI enabler, most enterprises struggle with governance, in part because of the fast-moving technology and regulatory climate as well as an out-of-sync organizational culture. According to Foundry’s AI Priorities Study 2025, governance, along with IT integration and security, ranks among the top hurdles for AI implementations, cited by 47% of the responding organizations.

To be strategic about AI governance, experts recommend the following:

Focus on the basics. Because AI technologies and regulations are evolving so quickly, many organizations are overwhelmed by how to build a formal governance strategy. It’s important to create consensus on how AI strategy aligns with business strategy while establishing the proper structure and ownership of AI governance. “My advice is to be proportionate,” Austerberry says. “As the use of AI evolves, so will your governance, but you have to start somewhere. You don’t have to have it all baked in from the start.”

Include employees in the process. It’s important to give people easy access to the technology and encourage widespread use and experimentation. Companywide initiatives that gamify AI encourage adoption and promote feedback for AI governance frameworks. Establishing ambassador or champion programs is another way to engage employees by way of trusted peers, and an AI center of excellence can play a role in developing a foundational understanding of AI’s potential as well as the risks.

“Programs that are successful within organizations go that extra mile of human touch,” says Steven Tiell, global head of AI Governance Advisory at SAS Institute. “The more stakeholders you include in that conversation early, the better.”

Emphasize governance’s relationship to compliance. Effective governance means less friction, especially when it comes to regulators and risk auditors slowing down AI implementation. Given the varied global regulatory climate, organizations should take a forward stance and think beyond compliance to establish governance with lasting legs. “You don’t want to have to change business strategy or markets when a government changes regulations or adds new ones,” says Tiell. “You want to be prepared for whatever comes your way.”

To learn more, watch this webinar.



Source link

Continue Reading

AI Insights

Transparency is key as AI gets smarter, experts say

Published

on


To gain the U.S. government’s trust, advanced AI systems must be engineered from the outset with reliable components offering explainability and transparency, senior federal and industry officials said Friday.

“This [topic] is something I think about a lot,” the CIA’s chief AI officer Lakshmi Raman noted at the Billington Cybersecurity Summit. “And in our [community], it’s about how artificial intelligence can assist and be an intelligence amplifier with the human during the process, keeping their eyes on everything that’s happening and ensuring that, at the end, they’re able to help.”

During a panel, Raman and other current and former government officials underscored the importance of guardrails and oversight — particularly as the U.S. military and IC adopt the technology for an ever-increasing range of operations, and experts predict major breakthroughs will emerge in certain areas within the next few years.

“Trust is such a critical dimension for intelligence,” said Sean Batir, a National Geospatial-Intelligence Agency alum and AWS principal tech lead for frontier AI, quantum and robotics.

Frontier AI refers to next-generation systems, also dubbed foundation models, that are considered among the most powerful and complex technologies currently in development. These likely disruptive capabilities hold potential to unlock discoveries that could be immensely helpful or catastrophically harmful to humanity. 

Departments across the government have been expanding their use of AI and machine learning over the past several years, but defense and national security agencies were some of the earliest adopters. Recently, in July, questions started swirling after the Pentagon’s Chief Digital and AI Office (CDAO) revealed new, separate deals with xAI, Google, Anthropic and OpenAI to accelerate the enterprise- and military-wide deployment of frontier AI. 

“ChatGPT, our flagship product, has upwards of 800 million users every day. So one-tenth of the world is using ChatGPT in various forms,” said Joseph Larson, vice president and head of government business at OpenAI. “At an individual level, AI is there. The more challenging question [for] my job is, with government [use], what does AI adoption look like at an institutional level?”

Larson previously served from 2022 to 2024 as the Pentagon’s first-ever deputy chief digital and AI officer for algorithmic warfare. 

“When we talk about institutions, what does that require above and beyond just access to the technology and the foundation models? It requires, really, a partnership. And that partnership extends to questions around infrastructure, around data, and I think, key, around security,” he said. “And what are the security implications for AI as it moves from just something that you communicate with, that informs maybe a workflow, to something that’s part of an agentic system that’s actually operating in your environment and that has its own controls and authorities? So, those institutional challenges are really the ones that are driving our work within the government today.”

Both OpenAI and Anthropic have reportedly disclosed recent efforts to implement new guardrails because their models appear to be approaching high-risk levels for potentially helping produce certain weapons.

On the panel, Anthropic Chief Information Security Officer Jason Clinton noted that “trust is something that is built up over time.”

“In order to do that, there is a human — and there is a supervisory role — for these models. The one thing that those models will never be able to do is to bring humanity to the equation, right? We will. We will always need to bring our perspective, our values, our institutional wisdom, to what we’re asking the models to be doing,” Clinton said.

He and the other panelists spotlighted multiple risks and threats posed by emerging frontier AI applications. For instance, prompt injections are a type of cyberattack that happen when malicious users craft inputs to an AI system to trick the model into performing unintended or dangerous actions, such as revealing sensitive data or generating unsafe material.

“I’m very optimistic that we will solve some of the more fundamental guardrail problems — like prompt injection — within three-ish years, I guess,” Clinton said. “And the models will be getting smarter, so I suspect the ways that we interact with them will evolve towards more like having a virtual coworker beside you, who you interact with and who learns and adapts … and sort of grows with you in your environment.”

The panelists also discussed the potential power of cutting-edge AI to help reduce vulnerabilities in software by automatically finding and fixing bugs in code and zero-day exploits.

“DARPA just ran a competition at DefCon [hacking conference] that demonstrated the future possibilities there,” said Dr. Kathleen Fisher, director of that agency’s Information Innovation Office.

For the event, officials pulled 54 million lines of code across 20 different repositories that were recommended by critical infrastructure operators who use them to do their daily business. 

“The teams that ran the competition planted 70 systemic vulnerabilities that were patterned after real vulnerabilities that people have struggled with. The teams found 54 of those systemic vulnerabilities, and they patched 43 of them. More importantly, or at the same time, they found 18 zero-days, and they patched 11 of those. It took about 45 minutes to find and fix vulnerability at a cost of $152. Think about what that might mean in the future — like this is the worst that technology is ever going to be,” Fisher said. “Think about what that might mean in the context of things like Volt Typhoon, and Salt Typhoon, and ransomware that is currently plaguing our hospitals. When a hospital gets affected by a ransomware attack — when it shuts down for any period of time — that puts people’s lives at risk.”

Building on that, Microsoft Federal Chief Technology Officer Jason Payne added: “This is the worst version of the technology we will ever use. I also feel like we have the lowest amount of trust in technology, right? And I think if we all use it more, if we experience it more, we’ll sort of understand what it is and what it’s capable of.”

He continued: “Security, governance and explainability are key themes that we’re looking for to kind of build that trust. And at the end of the day, I think government agencies are looking for organizations that are transparent with their AI systems.”


Written by Brandi Vincent

Brandi Vincent is DefenseScoop’s Pentagon correspondent. She reports on disruptive technologies and associated policies impacting Defense Department and military personnel. Prior to joining SNG, she produced a documentary and worked as a journalist at Nextgov, Snapchat and NBC Network. Brandi grew up in Louisiana and received a master’s degree in journalism from the University of Maryland. She was named Best New Journalist at the 2024 Defence Media Awards.



Source link

Continue Reading

Trending