
Tools & Platforms

Apply Now: $100,000 African AI Startup Training Program



By Wayan Vota on July 8, 2025

Digital skills and technology solutions are increasingly critical for African economies as they embrace digital transformation. Countries across the continent are positioning themselves as major tech hubs as more of the global economy moves online.

Sign Up Now for More Entrepreneurship Training Programs

Entrepreneurs need to master artificial intelligence and the advanced AI solutions available today to grow and develop their businesses. AI skills are an important tool for promoting social and economic development, creating new jobs, and driving innovation.

MEST AI Startup Program

The MEST AI Startup Program is a bold redesign of the Meltwater Entrepreneurial School of Technology’s flagship training program, built to prepare West Africa’s most promising tech talent to build, launch, and scale world-class AI startups.

West Africa has world-class tech talent, and it’s time AI solutions built on the continent reach users everywhere.

The MEST AI Startup Program is a fully funded, immersive experience hosted in Accra, Ghana. Over an intensive seven-month training phase, founders receive hands-on instruction, technical mentorship, and business coaching from companies such as OpenAI, Perplexity, and Google.

The top ventures then advance to a four-month incubation period, during which startups have the opportunity to pitch for pre-seed investment of up to $100,000 and join the MEST Portfolio.

Apply Now! The deadline is August 22, 2025.

More Funding Opportunities

Do you want to secure startup investment for a technology business, or learn how to win more contracts? Then please sign up now to get our email updates. We are constantly publishing new funding opportunities.

Filed Under: Featured, Funding

Wayan Vota co-founded ICTworks. He also co-founded Technology Salon, MERL Tech, ICTforAg, ICT4Djobs, ICT4Drinks, JadedAid, Kurante, OLPC News and a few other things. Opinions expressed here are his own and do not reflect the position of his employer, any of its entities, or any ICTworks sponsor.




Tools & Platforms

OpenAI to Launch AI-Powered Jobs Platform — Campus Technology



OpenAI will launch an AI-powered hiring platform by mid-2026, directly competing with LinkedIn and Indeed in the professional networking and recruitment space. The company announced the initiative alongside an expanded certification program designed to verify job seekers’ AI skills.

The OpenAI Jobs Platform will use artificial intelligence algorithms to match candidates with employers based on demonstrated AI competencies rather than traditional resume keywords. The platform targets businesses seeking workers proficient in automation, prompt engineering, and AI implementation across various industries.

OpenAI is collaborating with major employers, including Walmart and Boston Consulting Group, to develop the platform’s functionality. Walmart, the largest private employer in the United States with 1.6 million workers, will initially provide free certification access to all US employees.

The Texas Association of Business plans to use the platform to connect local employers with candidates capable of supporting IT modernization projects, according to OpenAI’s announcement.

The company is expanding its OpenAI Academy, a free learning platform that has reached over two million users, to offer formal AI certifications. The program will cover skills ranging from basic workplace AI applications to advanced prompt engineering techniques.

Training and certification testing will occur within ChatGPT’s Study Mode, allowing candidates to prepare and complete credentials without leaving the application. OpenAI aims to certify 10 million Americans by 2030.

The initiative positions OpenAI against established players in the professional networking market. LinkedIn maintains over one billion members globally, while Indeed processes 27 hires per minute with 615 million registered job seekers.

The platform also competes with LinkedIn Learning’s educational offerings, potentially creating tension with Microsoft, OpenAI’s primary investor, which holds a reported $13 billion stake. Microsoft has previously identified OpenAI as a competitor in specific business segments despite their partnership.

Labor market data support OpenAI’s focus on AI competencies. Research by Lightcast analyzing over one billion job postings found that positions requiring AI skills offer salaries averaging 28% higher than comparable roles without such requirements. Jobs demanding multiple AI skills command premiums up to 43% above standard compensation levels.

The demand spans industries as companies integrate artificial intelligence into operations for task automation, data analysis, and product development. Employers increasingly seek workers capable of practical AI application rather than advanced technical programming skills.

The platform will allow employers to describe requirements in natural language, with AI systems identifying candidates who demonstrate relevant capabilities through portfolio work and practical experience. This approach differs from traditional keyword-based matching systems used by existing job platforms.

OpenAI’s system aims to surface candidates based on actual project experience and demonstrated competencies rather than resume optimization techniques commonly used on current platforms, the company said.
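
To make the contrast with keyword-based matching concrete, the sketch below shows how natural-language requirement matching can work in principle: the employer’s description and each candidate’s demonstrated work are embedded as vectors, and candidates are ranked by semantic similarity. This is a minimal illustration assuming the open-source sentence-transformers library; the model name, data, and scoring are illustrative assumptions, not details OpenAI has disclosed about its platform.

    # Hypothetical sketch of embedding-based matching; not OpenAI's implementation.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

    # The employer states the requirement in natural language.
    requirement = "Automate our invoice workflow using large language model prompts."

    # Candidate summaries describe demonstrated project work, not keyword lists.
    candidates = {
        "candidate_a": "Built a prompt-engineering pipeline that classifies and routes supplier invoices.",
        "candidate_b": "Experienced office administrator; proficient in spreadsheets and data entry.",
    }

    # Rank candidates by cosine similarity between requirement and summary embeddings.
    req_emb = model.encode(requirement, convert_to_tensor=True)
    scores = {
        name: util.cos_sim(req_emb, model.encode(text, convert_to_tensor=True)).item()
        for name, text in candidates.items()
    }
    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {score:.2f}")

The point of the example is that candidate_a ranks higher despite sharing few literal keywords with the requirement, which is the difference the company is describing relative to keyword-driven systems.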

About the Author



John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He’s been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he’s written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].









Tools & Platforms

Grand Forks believes in the future of AI and technology, Mayor Bochenski says at AI and Autonomy Summit – Grand Forks Herald


GRAND FORKS — Grand Forks believes in the future of artificial intelligence and technology and is working to grow in those areas, Mayor Brandon Bochenski told an audience brought together to discuss AI and autonomous systems in Grand Forks and the state.

“We’re standing on the great work that’s happened before us, and just trying to enhance it and make it better,” Bochenski said. “There’s 10, 20 years of work that goes into Grand Forks being put on the map. I’m just grateful that we’re on the map today.”

His sentiments were echoed by others attending the summit, one of the Innovation, Workforce and Research Conferences put on by IEEE-USA, a technical professional organization. The event was held Wednesday, Sept. 10, at the University of North Dakota Memorial Union. Between discussions on state innovation, education, workforce, networking and investing in AI and autonomy, leaders in the field spoke about the technologies’ presence at UND, in Grand Forks and across North Dakota as a whole.

Scott Snyder, vice president for research and economic development at UND, mirrored Bochenski’s statement on the decades of work put into the community. UND has been on the “absolute cutting edge of uncrewed and autonomous technologies and systems for well over two decades,” he said. The university also has numerous private-sector and academic partners, as well as partnerships with the Department of Defense, the National Aeronautics and Space Administration (NASA), the Department of Homeland Security, the Federal Aviation Administration and other federal government entities.

“UND is at the center of one of the most vibrant environments for the development and deployment of autonomous systems around the world,” Snyder said.

An example of engagement between UND and the federal government was a discussion between UND President Andrew Armacost and Phillip Smith, the program manager for the Tactical Technology Office at DARPA (Defense Advanced Research Projects Agency).

Smith admitted he doesn’t like the word “autonomy,” which he believes functions like “cyber” and “synergy”: jargon people use but don’t actually understand. Breaking down the subsections of autonomy and informing people is important, he said. When Armacost asked for his definition of autonomy, Smith said it is “software that people don’t understand.”

“It is just an algorithm that cannot be explained to people until we get to general AI,” Smith said. “Humans actually don’t understand what is happening. … Machines are supposed to be serving humans, and humans don’t even know what they want, so that’s a really hard thing.”

Smith said DARPA is working with GrandSky, an aviation business park west of Grand Forks that focuses specifically on the UAS industry, to test drones that will be able to find a ship at sea and then orient and land on it without human connection.

“That’s the program that we have out here in North Dakota testing, and it’s been really fun,” he said.

Armacost said each person in the room has the opportunity to engage with DARPA, including industry partners, university partners and others.

“They have a large number of avenues that they use to cooperate with their work on technology development,” he said.

The summit itself grew out of UND leaders’ interactions with IEEE-USA and an interest in showcasing what the region is doing. Mark Askelson, vice president for research-national security, said he was at a summit in Sioux Falls, South Dakota, with Ryan Adams, the dean of engineering and mines, and spoke with IEEE-USA staff about possibly holding an event in Grand Forks. Askelson said the event is an opportunity to show the region’s work to people who don’t yet know about it, and it is helping forge new connections.

“Despite the fact that we are nationally recognized, I would argue in some of these areas, there’s still a lot of people that don’t know about us,” he said. “They don’t understand some of the things that we do, so that is a great opportunity to bring those people here so they can see us. And, in my experience, once we can get somebody on the ground to see what we have going on, the light bulb goes on for them. That creates more opportunity for us to work with them and for us to innovate.”







Tools & Platforms

From Deepfakes To Chatbots, Terrorists Embrace AI To Spread Propaganda – Eurasia Review


Realistic-looking news anchors delivering propaganda, popular television characters singing terrorist battle songs, online chatbots that tailor their responses to a user’s interests — these are all ways terrorist groups are using artificial intelligence (AI) to spread their message and recruit.

As AI technologies have spread across the internet, they have become tools that terrorist groups such as the Islamic State group (IS) and al-Qaida use to reach out to young people in Africa and elsewhere who have grown up with the internet and get their information from social media.

Cloaking terrorist propaganda in authentic-looking content helps get the messages past social media moderators, according to Daniel Siegel, who researches digital propaganda at Columbia University’s School of International and Public Affairs.

“By embedding extremist narratives within content that mimics the tone and style of popular entertainment, these videos navigate past the usual scrutiny applied to such messages, making the ideology more accessible and attractive to a wider audience,” Siegel wrote in an analysis for the Global Network on Extremism and Technology.

Although the content often is designed to be funny, it also exploits viewers’ love for the characters to lure them into consuming more of the content without realizing they’re being indoctrinated, Siegel wrote.

“Deepfakes,” AI-generated audio and video that look real, are making it nearly impossible to tell fact from fiction, experts say. That undermines faith in legitimate media organizations and government institutions alike, according to researcher Lidia Bernd.

“Imagine a deepfake video depicting a political leader declaring war or a religious figure calling for violence,” Bernd wrote recently in the Georgetown Security Studies Review. “The potential for chaos and violence spurred by such content is enormous.”

Terrorist groups already use AI to create hyper-realistic fake content such as scenes of injured children or fabricated attacks designed to stoke viewers’ emotions.

By hiding terrorists’ actual human propagandists behind deepfake technology, AI undercuts facial recognition tools and hamstrings counterterrorism efforts, analyst Soumya Awasthi wrote recently for the Observer Research Foundation.

At least one group affiliated with al-Qaida has offered workshops on using AI to develop visual propaganda and a how-to guide for using chatbots to radicalize potential recruits. Chatbots and other AI technology also can generate computer code for cyberattacks, plan physical attacks and raise money through cryptocurrency.

Terrorist groups use AI to quickly produce propaganda content using video footage captured by drones on the battlefield. Those fake news videos can mirror the look of legitimate news operations such as Al Jazeera or CNN. AI-generated anchors can be tailored to resemble people from geographic regions or ethnic groups terrorists are targeting for recruitment.

IS uses such AI content as part of its “News Harvest” propaganda broadcasts. AI text-to-speech technology turns written scripts into human-sounding audio.

Counterterrorism experts say governments and social media companies need to do more to detect AI-generated content like that being created by IS and al-Qaida.

For social media companies, that can mean boosting open-source intelligence to keep up with terrorism trends. For AI companies, that can mean working with social media and government authorities to refine methods of detecting and blocking malicious use of their technology.

Terrorists’ use of AI is not without its limits. Some members of Islamist terror groups object to depicting human faces in the AI-generated images, forcing some creators to obscure the faces and lessening the videos’ impact.

Terrorists also fear having AI turned against them.

Groups affiliated with al-Qaida have warned their members that security forces could use AI-generated audio to give fake commands to followers or otherwise sow confusion and disrupt terrorist operations.

According to HumAngle, one such warning went out in Arabic via the Telegram messaging app to members of the Boko Haram splinter group Jama’atu Ansarul Muslimina fi Biladis Sudan in Nigeria.

The message, according to HumAngle, said: “New technologies have made it possible to create voices. Although they are yet to be as sophisticated as natural voices, they are getting better and can be used against Jihadists.”




