
Exclusive: Ex-Google DeepMinders’ algorithm-making AI company gets $5 million in seed funding

Two former Google DeepMind researchers who worked on the company’s Nobel Prize-winning AlphaFold protein structure prediction AI as well as its AlphaEvolve code generation system have launched a new company, with the mission of democratizing access to advanced algorithms.

The company, which is called Hiverge, emerged from stealth today with $5 million in seed funding, led by Flying Fish Ventures with participation from Ahren Innovation Capital and Alpha Intelligence Capital. Legendary coder and Google chief scientist Jeff Dean is also an investor in the startup.

The company has built a platform it calls “Hive” that uses AI to generate and test novel algorithms to run vital business processes—everything from product recommendations to delivery routing— automatically optimizing them. While large companies that can afford to employ their own data science and machine learning teams do sometimes develop bespoke algorithms, this capability has been out of the reach of most medium and small businesses. Smaller firms have often had to rely on off-the-shelf software that comes with pre-built algorithms that may not be ideally suited for that particular business and its data.

The Hive system also promises to discover unusual algorithms, ones that may produce superior results but that human data scientists might never arrive at through intuition or trial and error, Alhussein Fawzi, the company’s cofounder and CEO, told Fortune. “The idea behind Hiverge is really to empower those companies with the best, best-in-class algorithms,” he said.

“You can apply [the Hive] to machine learning algorithms, and then you can apply it to planning algorithms,” Fawzi explained. “These are the two things that are, in terms of algorithms, quite different, yet it actually improves on both of them.”

At Google DeepMind, Fawzi had led the team that in 2022 developed its AlphaTensor AI, which discovered new ways to do matrix multiplication, a fundamental mathematical process for training and running neural networks and many other computer applications. The following year, Fawzi and the team developed FunSearch, a method that used large language models to generate new coding approaches and then used an automated evaluator to weed out erroneous solutions.
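
The generate-then-evaluate loop described above can be sketched in a few lines of Python. This is a simplified illustration rather than DeepMind’s actual FunSearch code: llm_propose_candidate() is a hypothetical stand-in for a call to a code-generating language model, and the evaluator simply rejects candidates that crash on a small test input.

import random

def llm_propose_candidate() -> str:
    # Hypothetical stand-in: a real system would prompt an LLM for a new program.
    return random.choice([
        "def score(x): return x * 2",   # a candidate that runs correctly
        "def score(x): return x / 0",   # an erroneous candidate
    ])

def automated_evaluator(source: str) -> bool:
    # Reject candidates that fail to run on a small test input.
    namespace = {}
    try:
        exec(source, namespace)
        namespace["score"](3)
        return True
    except Exception:
        return False

# Keep only the candidates that survive evaluation, in the spirit of the approach described above.
candidates = [llm_propose_candidate() for _ in range(10)]
surviving = [c for c in candidates if automated_evaluator(c)]
print(f"{len(surviving)} of {len(candidates)} candidates passed evaluation")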

He also worked on the early stages of what became Google DeepMind’s AlphaEvolve system, which uses several LLMs working together as agents to create entire new code bases for solving complex problems. Google has credited AlphaEvolve with finding ways to optimize its LLMs. For instance, it found a way to improve how Gemini performs matrix multiplication, delivering a 23% speed-up; it also optimized a key step in the Transformer architecture on which LLMs are based, boosting speeds by 32%.

Cofounding Hiverge with him are his brother Hamza Fawzi, a professor of applied mathematics at the University of Cambridge, who is serving as a technical advisor to the company, and Bernardino Romera-Paredes, who was part of the Google DeepMind team that created AlphaFold and is now Hiverge’s chief technology officer.

Hiverge has already demonstrated the utility of its Hive system by using it to win the Airbus Beluga Challenge, which asks contestants to find the optimal way to load and store the aircraft parts carried by an Airbus Beluga XL aircraft. The solution Hiverge developed delivered a 10,000-times speed-up over the existing aircraft-loading algorithm. The company also showed it could take a machine learning training algorithm that was already optimized and speed it up by another three times, and it has found novel ways to improve computer vision algorithms.

Alhussein Fawzi said that Hiverge, based in Cambridge, England, currently has six employees but that it would use the money raised in its latest funding round to expand its team. “We will also transition from research to building out our product,” he said. 

The company plans to make its technology accessible through cloud marketplaces like AWS and Google Cloud, where customers can directly use the system on their own code. The platform analyzes which parts of code represent bottlenecks, generates improved algorithms, and provides recommendations to engineers.


The hidden human cost of Artificial Intelligence


The world is moving towards an ‘automated economy’ in which machines relying on artificial intelligence (AI) systems produce quick, efficient and nearly error-free outputs. However, AI is not getting smarter on its own; it has been built on, and continues to rely on, human labour and energy resources. These systems are fed information and trained by workers who are rendered invisible by large tech companies and who are mainly located in developing countries.

Areas of human involvement

A machine cannot process the meaning behind raw data. Data annotators label raw images, audio, video, and text with the information that becomes the training set for AI and Machine Learning (ML) models. For example, a large language model (LLM) cannot recognise the colour ‘yellow’ unless the data has been labelled as such. Similarly, self-driving cars rely on video footage that has been labelled to distinguish between a traffic sign and humans on the road. The higher the quality of the dataset, the better the model’s output, and the more human labour goes into creating it.
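
To illustrate what such labelled data can look like, here is a minimal sketch of annotated records of the kind a human labeller might produce; the field names and file names are hypothetical, since real annotation platforms use their own schemas.

# Hypothetical labelled records: each pairs a raw input with a human-assigned label.
labelled_examples = [
    {"image": "frame_0001.jpg", "label": "traffic_sign"},              # annotator tags a road sign
    {"image": "frame_0002.jpg", "label": "pedestrian"},                # annotator tags a person on the road
    {"text": "The taxi was painted yellow.", "label": "colour:yellow"},
]

# A supervised model is then trained to map each raw input to its human-provided label.
for example in labelled_examples:
    print(example)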

Data annotators play a major role in training LLMs like ChatGPT and Gemini. An LLM is trained in three steps: self-supervised learning, supervised learning and reinforcement learning. In the first step, the machine picks up information from large datasets drawn from the Internet. The data labellers, or annotators, enter the picture in the second and third steps, where this information is fine-tuned so that the LLM gives the most accurate responses. Humans give feedback on the output the AI produces so that better responses are generated over time, and they help remove errors and jailbreaks.
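
The human-feedback step can be pictured as annotators ranking competing model outputs. The sketch below is an illustrative assumption, with collect_human_ranking() standing in for a real annotation interface rather than any company’s actual tooling.

model_outputs = [
    "Response A: concise and accurate answer.",
    "Response B: rambling answer containing a factual error.",
]

def collect_human_ranking(outputs):
    # In practice, a paid annotator reviews the outputs and ranks them;
    # here the ranking is hard-coded purely for illustration.
    return {"preferred": outputs[0], "rejected": outputs[1]}

# Preference pairs like this become the signal used in the reinforcement-learning step,
# nudging the model towards the kind of responses humans rate more highly.
preference = collect_human_ranking(model_outputs)
print(preference)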

This meticulous annotation work is outsourced by tech companies in Silicon Valley mainly to workers in countries like Kenya, India, Pakistan, China and the Philippines, for low wages and long working hours.

Data labelling falls into two broad types: tasks that require no subject expertise and niche tasks that do. Several tech companies have been accused of employing non-experts for technical subjects that require prior knowledge, a contributing factor in the errors found in the output produced by AI. A data labeller from Kenya revealed that they were tasked with labelling medical scans for an AI system intended for use in healthcare services elsewhere, despite lacking relevant expertise.

However, because of the errors that result, companies are starting to ensure that experts handle such specialised data before it is fed into the system.

Automated features requiring humans

Even features marketed as ‘fully automated’ are often underpinned by invisible human work. For example, our social media feeds are ‘automatically’ filtered to censor sensitive and graphic content. This is only possible because human moderators labelled such content as harmful by going through thousands of uncensored images, texts and audio clips. Daily exposure to such content has also been reported to cause severe mental health issues, including post-traumatic stress disorder, anxiety and depression, in these workers.

Similarly, there are voice actors and actors behind AI-generated audio and video. Actors may be required to film themselves dancing or singing so that these machines can recognise human movements and sounds. Children have also reportedly been engaged to perform such tasks.

In 2024, AI tech workers from Kenya sent a letter to then U.S. President Joe Biden describing the poor working conditions they are subjected to. “In Kenya, these US companies are undermining the local labor laws, the country’s justice system and violating international labor standards. Our working conditions amount to modern-day slavery,” the letter read. They said the content they have to annotate can range from pornography and beheadings to bestiality, for more than eight hours a day and for less than $2 an hour, which is very low compared with industry standards. There are also strict deadlines, with tasks to be completed within a few seconds or minutes.

When workers raised their concerns with the companies, they were sacked and their unions dismantled.

Most AI tech workers are engaged in online gig work and do not even know which large tech company they are ultimately working for. This is because, to minimise costs, AI companies outsource the work through intermediary digital platforms. Subcontracted workers on these platforms are paid per “microtask” they perform. They are constantly surveilled, and if they fall short of the targeted output, they are fired. The labour network thus becomes fragmented and opaque.

The advancement of AI is powered by such “ghost workers.” The lack of recognition and the informalisation of their work help tech companies perpetuate this system of labour exploitation. There is a need for stricter laws and regulations on AI companies and digital platforms, covering not just their content in the digital space but also the labour supply chains powering AI, to ensure transparency, fair pay, and dignity at work.




Dubuque County grapples with AI misuse as students face court for fake nude images

Three Cascade High School students are now facing charges for allegedly creating fake nude images of other students using Artificial Intelligence. These students are accused of using headshots of the victims and attaching them to images of nude bodies.

Dubuque’s Assistant County Attorney says the fast pace of technological advancements makes it hard to regulate these tools.

“We have a large number of victims that are involved in this case,” Joshua Vander Ploeg, Dubuque’s Assistant County Attorney, said. “And then we can go back to them, which allows us to get to the underlying charges.”

The charges these students are facing are in juvenile court because they are minors. In a statement shared with Iowa’s News Now, Western Dubuque Community Schools said they prioritize the wellbeing and safety of their students, and because of that, “any student who has been charged as a creator or distributor of materials like those in question will not be permitted to attend school in person at Cascade Junior/Senior High School.”

AI has multiple uses, including photo editing. Vander Ploeg says that because the tool is so multifaceted, there are other cases out there with similar issues.

“Some of the language in the Iowa code that talks specifically about AI generated images that are being sent out to other people didn’t go into effect until July 1 of 2024. So we were less than a year out from that when this came on us,” he said. “So it is something that’s rampant and is out there.”

Vander Ploeg says these new advancements with AI are being developed faster than they are being regulated, which can put them at a disadvantage.

“We’re always playing catch up when it comes to those legislative matters. So, you know, if more than anything, I would encourage people that if they have concerns that things that they’re seeing, that are happening to their kids, or are happening to other adults, contact your legislators. Give them ideas of what you think needs to be done to help keep people safe,” Vander Ploeg said.

When it comes to kids, the Assistant County Attorney says it is important to monitor what they are putting out on the internet.

“If your kid isn’t wanting you to see those areas, there’s probably a reason that they don’t want you to see those areas. But the only way to truly keep them safe, as far as what’s on their phone, is to monitor it, and kids aren’t going to like that,” he said.

And from their end, Vander Ploeg says they are going out into the community and trying to educate the public about what to look for in AI.

“We’re trying to go out and do some education to identify these issues, the dangers that exist out there and what the consequences could be because that’s very important for kids for the future,” Vander Ploeg said.

There may be more charges connected to the AI images. The Dubuque County Attorney’s office says they expect to charge a fourth person, who is also a minor, in relation to this case.




Opinion | Why Hong Kong should seek to co-host China’s global AI centre

Hong Kong is emerging as a possible contender to host China’s proposed World Artificial Intelligence Cooperation Organisation, potentially challenging Beijing’s early preference for Shanghai. We believe the choice of Hong Kong, with its evolving role in the international technological arena, could reflect a nuanced strategy on Beijing’s part to navigate escalating US-China tech tensions.

The initiative was first proposed by Chinese Premier Li Qiang in July. Hosting such a centre carries both symbolic and strategic weight: it would position the host city at the heart of China’s AI diplomacy and offer a tangible avenue to influence the shaping of global AI standards.

Shanghai is the front runner. The city boasts more than 1,100 core AI companies and 100,000 AI professionals, alongside robust government backing. Its 1 billion yuan (US$139 million) AI development fund and innovation hubs such as the Zhangjiang AI Island – which hosts Alibaba Group Holding (owner of the South China Morning Post), among other tech companies – reinforce its credentials.

President Xi Jinping has explicitly called for Shanghai to lead China’s AI development and governance efforts, providing political capital that few other cities can match.

By comparison, Singapore presents a credible alternative as a potential centre for a global AI governance group. The city state has a comprehensive AI regulatory framework and initiatives such as AI Verify, which is backed by global tech giants including Google, IBM and Microsoft. Singapore’s proven governance expertise makes it a city Western partners can trust.

Hong Kong, however, presents a distinctive proposition. The “one country, two systems” framework allows it to straddle Chinese interests while retaining a degree of international credibility – a combination that could be invaluable in assuaging Western scepticism towards a global AI centre.


