Tools & Platforms
Former Intel Singapore CTO Joins SuperX AI to Lead Full-Stack Infrastructure Push
With a strong background in enterprise technology, data center engineering, and strategic innovation, Kenny Sng brings over two decades of experience leading high-impact teams.
The rebranding to Super X AI Technology Limited reflects the company’s sharpened focus on the booming AI infrastructure market. With global demand for AI computing power projected to grow exponentially in the coming years, SuperX is committed to delivering comprehensive, fast-to-deploy, low-cost, and high-performance infrastructure solutions tailored for institutional clients.
SuperX’s offerings include:
- AI servers optimized for large-scale model training and inference.
- Liquid cooling systems for energy-efficient, high-density deployments.
- High-voltage direct current (HVDC) systems enabling scalable and sustainable power delivery.
Beyond hardware, SuperX provides end-to-end consulting, design, integrated construction, maintenance, and operations services for AI data centers, ensuring clients can scale confidently and efficiently.
With an ongoing pipeline of potential projects and partnerships, SuperX is poised to become a key enabler of the AI revolution.
About Super X AI Technology Limited (NASDAQ: SUPX)
Super X AI Technology Limited is a leading provider of AI infrastructure solutions, offering cutting-edge hardware and services to support the next generation of artificial intelligence applications. SuperX aims to become a leading technology company dedicated to developing and delivering next-generation digital infrastructure solutions.
On June 2, 2025, the Company rebranded from “Junee Limited” (NASDAQ: JUNE) to “Super X AI Technology Limited” and shifted the focus of its principal business towards developing into a one-stop AI infrastructure solution provider. The Company began trading under the new ticker symbol “SUPX” on June 2, 2025.
For more information, please visit our website at: www.superx.sg
Safe Harbor Statement
This press release contains forward-looking statements. In addition, from time to time, we or our representatives may make forward-looking statements orally or in writing. We base these forward-looking statements on our expectations and projections about future events, which we derive from the information currently available to us. You can identify forward-looking statements as those that are not historical in nature, particularly those that use terminology such as “may,” “should,” “expects,” “anticipates,” “contemplates,” “estimates,” “believes,” “plans,” “projected,” “predicts,” “potential,” or “hopes” or the negative of these or similar terms. In evaluating these forward-looking statements, you should consider various factors, including: our ability to change the direction of the Company; our ability to keep pace with new technology and changing market needs; and the competitive environment of our business. These and other factors may cause our actual results to differ materially from any forward-looking statement.
Forward-looking statements are only predictions, and the reader is cautioned not to rely on them. The forward-looking events discussed in this press release and in other statements made from time to time by us or our representatives may not occur, and actual events and results may differ materially and are subject to risks, uncertainties, and assumptions about us. We are not obligated to publicly update or revise any forward-looking statement, whether as a result of new information, future events, or otherwise.
View original content: https://www.prnewswire.com/news-releases/super-x-ai-technology-limited-appoints-kenny-sng-as-chief-technology-officer-to-drive-next-gen-full-stack-ai-infrastructure-growth-302500111.html
SOURCE Super X AI Technology Limited
Tools & Platforms
AI-generated child sexual abuse videos surging online, watchdog says
The number of videos online of child sexual abuse generated by artificial intelligence has surged as paedophiles have pounced on developments in the technology.
The Internet Watch Foundation said AI videos of abuse had “crossed the threshold” of being near-indistinguishable from “real imagery” and had sharply increased in prevalence online this year.
In the first six months of 2025, the UK-based internet safety watchdog verified 1,286 AI-made videos with child sexual abuse material (CSAM) that broke the law, compared with two in the same period last year.
The IWF said just over 1,000 of the videos featured category A abuse, the classification for the most severe type of material.
The organisation said the multibillion-dollar investment spree in AI was producing widely available video-generation models that were being manipulated by paedophiles.
“It is a very competitive industry. Lots of money is going into it, so unfortunately there is a lot of choice for perpetrators,” said one IWF analyst.
The videos were found as part of a 400% increase in URLs featuring AI-made child sexual abuse in the first six months of 2025. The IWF received reports of 210 such URLs, compared with 42 last year, with each webpage featuring hundreds of images, including the surge in video content.
The IWF saw one post on a dark web forum where a paedophile referred to the speed of improvements in AI, saying how they had mastered one AI tool only for “something new and better to come along”.
IWF analysts said the images appeared to have been created by taking a freely available basic AI model and “fine-tuning” it with CSAM in order to produce realistic videos. In some cases these models had been fine-tuned with a handful of CSAM videos, the IWF said.
The most realistic AI abuse videos seen this year were based on real-life victims, the watchdog said.
Derek Ray-Hill, the IWF’s interim chief executive, said the growth in capability of AI models, their wide availability and the ability to adapt them for criminal purposes could lead to an explosion of AI-made CSAM online.
“There is an incredible risk of AI-generated CSAM leading to an absolute explosion that overwhelms the clear web,” he said, adding that a growth in such content could fuel criminal activity linked to child trafficking, child sexual abuse and modern slavery.
The use of existing victims of sexual abuse in AI-generated images meant that paedophiles were significantly expanding the volume of CSAM online without having to rely on new victims, he added.
The UK government is cracking down on AI-generated CSAM by making it illegal to possess, create or distribute AI tools designed to create abuse content. People found to have breached the new law will face up to five years in jail.
Ministers are also outlawing possession of manuals that teach potential offenders how to use AI tools to either make abusive imagery or to help them abuse children. Offenders could face a prison sentence of up to three years.
Announcing the changes in February, the home secretary, Yvette Cooper, said it was vital that “we tackle child sexual abuse online as well as offline”.
AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalises the taking, distribution and possession of an “indecent photograph or pseudo-photograph” of a child.
Tools & Platforms
Top AI performers are burned out and eyeing a better workplace

Here’s data that’s sure to get the attention of HR leaders: The employees delivering your biggest AI-driven productivity gains are twice as likely to quit as everyone else.
A new study by the Upwork Research Institute, the research arm of the remote job platform Upwork, reveals that nearly 9 in 10 top AI performers are burned out and eyeing the exits. Meanwhile, more than two-thirds say they trust the technology more than they do their coworkers — with 64% finding machines to be more polite and empathetic.
AI is “unlocking speed and scale but also reshaping how we collaborate and connect as humans,” said Kelly Monahan, managing director of the Upwork Research Institute. “The productivity paradox we’re seeing may be a natural growing pain of traditional work systems, ones that reward output with AI, but overlook the human relationships behind that work.”
According to the study, based on the perspectives of 2,500 workers globally, the emotional dimension around AI runs deeper than many employers may realize. Nearly half of those surveyed say “please” and “thank you” with every request they submit to AI, while 87% phrase their requests as if speaking to a human—an anthropomorphizing of AI tools indicating that employees are forming more genuine emotional connections with their digital assistants than with their colleagues.
Colin Rocker, a content creator specializing in career development, makes the point that “AI will always be the most agreeable coworker, but we have to also be mindful that it’s a system that, by nature, will agree with and amplify whatever is said to it.”
The study also revealed a disconnect between individual AI adoption and organizational strategy. While employees are racing ahead with AI integration, 62% of high-performing AI users say they don’t understand how their daily AI use aligns with company goals. That misalignment creates a dangerous scenario where the most productive employees feel isolated from the broader organizational mission, even as they’re delivering exceptional results.
Meanwhile, the contrast with freelancers is illuminating. Unlike full-time employees, independent contractors appear to thrive alongside AI, with nearly nine in 10 reporting a positive impact on their work. These workers use AI primarily as a learning partner, with 90% saying it helps them acquire new skills faster and 42% crediting it with helping them specialize in a particular niche — suggesting that the problem is not the technology itself but, rather, how it’s being integrated into traditional organizational structures.
Ultimately, the survey suggests, the path to sustainable, AI-empowered businesses requires reimagining work as a collaboration between the technology and the people who use it; cultivating flexible and resilient talent ecosystems; and redefining AI strategies around relationships, emerging AI roles and responsible governance.
To lead effectively in the age of AI, Monahan suggests that employers “need to redesign work in ways that support not just efficiency but also well-being, trust and long-term resilience.”
Tools & Platforms
In test-obsessed Korea, AI boom arrives in exams, ahead of the technology itself
Over 500 new AI certifications have sprung up in Korea in two years, but few are trusted or even taken
A wave of artificial intelligence certifications has flooded the market in South Korea over the past two years.
But according to government data, most of these tests exist only on paper and have never been taken by a single person.
As of Wednesday, there were 505 privately issued AI-related certifications registered with the Korea Research Institute for Professional Education and Training, a state-funded body under the Prime Minister’s Office.
This is nearly five times the number recorded in 2022, before tools like ChatGPT captured global attention. But more than 90 percent of those certifications had zero test-takers as of late last year, the institute’s own data shows.
Many of the credentials are loosely tied to artificial intelligence in name only. Among recent additions are titles like “AI Brain Fitness Coach,” “AI Art Storybook Author,” and “AI Trainer,” which often have no connection to real AI technology.
Only one of the 505 AI-related certifications — KT’s AICE exam — has received official recognition from the South Korean government. The rest have been registered by individuals, companies, or private organizations, with no independent oversight or quality control.
In 2024, just 36 of these certifications held any kind of exam. Only two had more than 1,000 people apply. Fourteen had a perfect 100 percent pass rate. And 20 were removed from the registry that same year.
For test organizers, the appeal is often financial. One popular certification that attracted around 500 candidates last year charged up to 150,000 won ($110) per person, including test fees and course materials. The content reportedly consisted of basic instructions on how to use existing tools like ChatGPT or Stable Diffusion. Some issuers even promote these credentials as qualifications to teach AI to students or the general public.
The people signing up tend to be those anxious about keeping up in an AI-driven world. A survey released this week by education firm Eduwill found that among 391 South Koreans in their 20s to 50s, 39.1 percent said they planned to earn an AI certificate to prepare for the digital future. Others (27.6 percent) said they were taking online AI courses or learning how to use automation tools like Notion AI.
Industry officials warn that most of these certificates hold little value in the job market. Jeong Sung-hoon, communications manager at Seoul-based AI startup Wrtn, told The Korea Herald that these credentials are often “window dressing” for resumes.
Wrtn ranked second in generative AI app usage among Koreans under 30 this March, according to local mobile analytics firm Wiseapp.
“Most private AI certifications aren’t taken seriously by hiring managers,” Jeong said. “Even for non-technical jobs like communications or marketing, what matters more is whether someone actually understands the AI space. That can’t be faked with a certificate.”
mjh@heraldcorp.com