Thirty years after “Clueless” imagined a digital closet, artificial intelligence is ready to deliver that vision, this time for the masses.
AI agents — autonomous systems that perform tasks for users — are still nascent in fashion, but the technology is fast gaining traction and could redefine how consumers shop fashion online.
One of the newest entrants is Gensmo, a start-up that layers best-in-class AI models with proprietary technology to deliver shoppable outfit suggestions in seconds.
Founded last December by Chinese tech entrepreneur Ning Hu, an Alibaba and Google alum, the U.S.-based start-up now operates with teams across New York, Los Angeles, the Bay Area, and Seattle.
In June, Gensmo closed a $60 million seed round, one of the largest early-stage bets in fashion AI to date. The company did not disclose its investors.
“We came into the fashion category thinking people want very personal advice, but not everyone has access to it,” said Hu in an exclusive interview.
“Just look on Reddit — users are frustrated by even the most basic fashion questions. That’s why we are here: we want to use AI’s accessibility to help everyday shoppers gain confidence in what they wear,” said Hu.
“AI has finally started to really understand human beings. That means it has the potential to become our go-to fashion friend; as it gets to know you better, it will generate solutions that are highly individualized,” said Hu.
A former group vice president at Alibaba, Hu realized that marketplace e-commerce platforms have turned into robust machines that favor big advertising spenders rather than catering to real consumer needs.
“There’s an inherent limitation to how much you can change the original e-commerce search process. That’s why we decided to build an end-to-end solution from the ground up to better serve the consumer,” said Hu.
“AI is rewriting the rules of e-commerce; it will also democratize fashion e-commerce, giving small brands the tools to reach their customer base much more efficiently — most of these small businesses rely on niche communities, and AI can help them carve out a more individualized conversion funnel to match,” Hu added.
Pulling from sources that range from luxury e-commerce platforms to niche designer sites, Gensmo has a growing catalogue of over 100 million shoppable items. The company currently doesn’t take a commission on sales; users check out on a third-party platform.
An intriguing feature on Gensmo is Vibe Imagine, which drapes the suggested look on a realistic AI avatar of the user in an editorial setting. The result, rendered in an aesthetic often linked to the current iteration of AI photography, can read as either unsettling or fascinating.
Users are encouraged to share the image on social media, which adds a layer of gamified entertainment to the consumer journey.
For Gensmo, the more users play with its various tools, the more data the company has to understand the link between an item of clothing, or several, and the user’s specific mood or desires.
In one of its promotional videos, a user snaps a picture of French Impressionist painter Claude Monet’s “Water Lilies” with Gensmo and, within five seconds, is given an outfit idea that incorporates the masterpiece’s color palette.
Gensmo is also hoping its content-driven experience will set it apart from its host of competitors, including Daydream, Alta, and Google’s Doppl.
Apart from start-ups, established retailers, such as the German e-commerce player Zalando and the resale platform Mercari, have already released similar AI agent products, making the space increasingly competitive.
According to a Polaris Market Research report published in December, the global virtual shopping assistant market is expected to balloon to $6.9 billion by 2032 from $516 million in 2022. Regionally, North America will “have the fastest growth due to advancements in AI and natural language processing technologies,” it said.
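For a rough sense of what that forecast implies, the compound annual growth rate can be backed out of the two figures above. The sketch below is a simple back-of-the-envelope check, assuming a straight ten-year compounding window from 2022 to 2032.

```python
# Back-of-the-envelope check of the growth rate implied by the Polaris
# figures quoted above: $516 million (2022) to $6.9 billion (2032).
start_value = 516e6          # 2022 market size, USD
end_value = 6.9e9            # 2032 forecast, USD
years = 2032 - 2022

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # roughly 30% a year
```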
So far, Gensmo has racked up over 500,000 registered users, with over 70 percent falling into the Gen Z and young Millennials bracket.
“A lot of them are high school or college students. They may be interested in fashion, but feel that neither Instagram nor Pinterest truly gets them,” explained Hu.
For now, its bespoke styling feature still has a long way to go. Unlike traditional AI training, where right or wrong is easily quantifiable, Gensmo’s approach requires input from professional stylists and loyal users to shape the model’s performance.
The start-up has also begun feeding its AI model with brand stories and marketing materials to capture the nuances that shape consumer mindsets.
On the retail end, the start-up is in talks with a few apparel retailers to try out its virtual try-on feature in stores; its speedy image-generating feature also lends itself to Gensmo’s sister app Decofy, which provides interior design ideas with shoppable furniture and home goods.
For Hu, what AI can ultimately achieve is to offer a decentralized e-commerce experience in an increasingly decentralized world.
“From glossy magazines, to Instagram, and now TikTok, what remains constant is change, and change is speeding up the process of fashion democratization — fashion has become a popular lifestyle, but not everyone has the tools or know-how to find what suits them,” said Hu.
“A decentralized online world calls for an intelligent online shopping experience, and we are here to cater to true individuality — not everyone’s Kate Moss,” said Hu.
Artificial intelligence innovation is moving at warp speed, but major tech industry players are sounding alarm bells that infrastructure is failing to keep pace with advancements in the field.
“AI is kicking our butts and teaching us that we know nothing” about infrastructure, Yee Jiun Song, vice president of engineering at Meta Platforms Inc., said Tuesday at the AI Infra Summit in Santa Clara, California.
Zeroing in on the fundamental disconnect, Dion Harris, senior director of AI and HPC Infrastructure Solutions at Nvidia Corp., also noted at the conference that though new AI models are being introduced every week, the time frame for building out the infrastructure to support AI is currently measured in years.
“We have to get everyone else to be prepared for where we’re going,” Harris told the gathering. “The biggest challenge is making sure that everyone is ready to come with us. There is this misalignment of time scales. That in and of itself is a challenge.”
For its part, Nvidia Tuesday previewed an upcoming chip, the Rubin CPX, that is designed to provide 8 exaflops of computing capacity for AI inferencing. According to the chipmaker, the Rubin CPX will be able to optimize certain mechanisms for large language models three times faster than its current-generation silicon. It’s part of Nvidia’s philosophy that an investment of several million dollars in infrastructure can generate tens of millions in token revenue.
“The performance of the platform is the revenue of an AI factory,” Ian Buck, vice president of hyperscale and high-performance computing at Nvidia, said during a keynote appearance. “This is how we feel about inference.”
More than 3,000 attendees participated in the AI Infra Summit in Silicon Valley this week.
Though Nvidia’s latest chip will help boost computing capacity for AI inferencing and specific LLM tasks, the scale of AI adoption is forcing model providers to invest hundreds of billions of dollars to build out new data center clusters. One of the more notable examples of this is the Prometheus supercluster under development by Meta. Scheduled to come online in 2026, the Ohio-based facility will be one of the first gigawatt data center clusters in the AI era.
“Meta is now only one of a few companies that are racing to build data centers at this scale,” Song said. “There never has been a more exciting time to be working in infrastructure.”
Prometheus is just a warm-up for future data center clusters in the planning stage. Meta has also announced Hyperion, a second data center cluster that is expected to require up to 5 gigawatts of power. Although Meta has not announced a date for Hyperion’s completion, one industry leader is already questioning whether clusters of this size will meet the global demand for AI processing.
“I don’t think that’s enough,” said Richard Ho, head of hardware at OpenAI. “It doesn’t appear clear to us that there is an end to the scaling model. It just appears to keep going. We’re trying to ring the bell and say, ‘It’s time to build.’”
Increasing adoption of agents for enterprise tasks is one factor behind the urgency in building the infrastructure to support AI deployment. Large tech players such as Amazon Web Services Inc. are making major investments in agentic AI, fueling rapid advancement of what the technology can ultimately do.
Though one of the key use cases is currently “agent-assisted” application development, the technology is expected to progress rapidly toward “agent-driven” solutions, which will place further demands on infrastructure, according to Barry Cooks, vice president of compute abstractions at AWS.
“The expectation here is this will just continue to expand,” Cooks said during an appearance at the conference. “We’re in the midst of a huge change in the technical landscape in how we do our day-to-day work. It’s super-important that you have the right stack.”
Having the right stack will require new approaches in how systems are architected, a challenge that is being addressed in areas such as memory. For AI processors to function effectively, they need rapid access to data, driven by temporary storage such as dynamic random access memory or DRAM. If DRAM is slow, memory becomes a bottleneck.
Software-defined memory provider Kove Inc. has been working on this issue by essentially virtualizing server memory into a large pool to reduce data latency. On Tuesday, Kove announced benchmark results for the in-memory data stores Redis and Valkey, both commonly used in AI inference pipelines, that demonstrated a capability to run five times larger workloads faster than local DRAM.
“The big challenge that we have is traditional DRAM,” Kove CEO John Overton said during his keynote presentation. “GPUs are scaling, CPUs are scaling… memory has not. As long as we think about memory as stuck in the box, we’ll remain stuck in the box.”
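Kove did not detail its benchmark methodology in the material presented here, but latency comparisons of this kind generally come down to timing round trips against the data store. As a purely illustrative sketch, assuming a local Redis server and the standard redis-py client (neither tied to Kove’s actual setup), average SET/GET round-trip latency could be measured like this:

```python
# Illustrative latency probe against a local Redis instance; this is not
# Kove's benchmark. Assumes `pip install redis` and a Redis server
# listening on localhost:6379.
import time
import redis

client = redis.Redis(host="localhost", port=6379)

def average_round_trip_us(n: int = 10_000) -> float:
    """Average SET/GET round-trip latency in microseconds over n key pairs."""
    start = time.perf_counter()
    for i in range(n):
        client.set(f"key:{i}", i)
        client.get(f"key:{i}")
    elapsed = time.perf_counter() - start
    return elapsed / (2 * n) * 1e6  # per-operation latency in microseconds

if __name__ == "__main__":
    print(f"Average round-trip latency: {average_round_trip_us():.1f} µs per operation")
```

A comparison like Kove’s would point a harness of this general shape at memory-pooled versus local-DRAM-backed deployments, though the company’s actual test parameters are not given here.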
Another big challenge is in the processors that keep getting bigger and bigger, ganging up hundreds or thousands of compute cores on a single piece of silicon. That’s creating another bottleneck — communications among all those cores.
“The next 1,000x leap in computing will be completely about interconnect,” said Nick Harris, founder and CEO of Lightmatter Inc., which has raised $850 million for its silicon photonics technology. “Chips are getting bigger. I/O at the ‘shoreline’ is not enough. It’s time for more horsepower, not faster horses.”
Meantime, AI itself is becoming critical all the way down to the design of chips, too. “About half the chips built today are using AI; in three years, it will be 90%,” noted Charles Alpert, an AI fellow at chip design software firm Cadence Design Systems Inc., which for years has steadily been incorporating more AI into its tools. “The need to make designers more productive has never been higher.”
Companies are also increasingly turning to the open-source community for help in building out the infrastructure to support AI. Initiatives such as the Open Compute Project have fostered an ecosystem focused on redesigning hardware technology to support demands on compute infrastructure. Last year, Nvidia contributed portions of its Blackwell computing platform design to OCP.
Meta joined a number of high-profile firms in 2023 to found the Ultra Ethernet Consortium, a group dedicated to building an Ethernet-based communication stack architecture for high-performance networking. The group has characterized its mission as promoting open, interoperable standards to prevent vendor lock-in and released its first specification in June.
“What we need here are open standards, open weight models and open-source software,” said Meta’s Song. “I believe open standards are going to be critical in allowing us to manage complexity.”
Whether the buildout of gigawatt data centers, streamlined memory performance and open-source collaboration will enable the tech industry to close the gap between AI innovation and the infrastructure to support it remains to be seen. What is undeniable is that hardware engineering is drawing renewed attention, another element in the wave of transformation brought on by the rise of AI.
“I’ve never seen hardware and infrastructure move more quickly,” said Song. “AI has made hardware engineering sexy again. Now hardware engineers get to have fun too.”
With reporting from Robert Hof
At the Executive Technology Board, we’ve been reflecting on how #education must evolve in a world shaped by #AI. Much of what worked in the past is no longer sufficient: Higher education was designed for a time when information was scarce, when expertise was locked away in libraries, and when tools for discovery and application were limited. Today, knowledge is instantly accessible, and AI is not just a new tool—it is reshaping the very definition of work.
The pace of change is extraordinary. Traditional curricula are becoming obsolete faster than ever before. Skills that once served as a career foundation are now outdated within years, sometimes months. And AI is not only transforming existing roles but also creating entirely new categories of work while rendering others obsolete. The question is no longer whether education needs to change, but how quickly it can.
Last week in London, we had the privilege of hosting the Dean and Associate Dean of computer sciences at Northeastern University. The conversation was inspiring – NU’s work offers a glimpse of how education can be reimagined for this new era. Their perspective underscored a vital truth: preparing students for the world ahead requires moving beyond incremental updates to curricula and toward a fundamental rethinking of what education itself should be.
From that discussion and broader reflections at our global technology think tank, several imperatives stand out:
Universities can no longer view themselves simply as repositories of knowledge. Instead, they must evolve into facilitators of continuous learning. This means building systems and cultures that are agile — able to incorporate new knowledge, tools, and practices as quickly as industries themselves are evolving. Students won’t succeed because they memorized a body of facts; they’ll succeed because they developed the capacity to adapt, unlearn, and relearn.
One of the most exciting promises of AI lies at the intersections: healthcare and AI, design and AI, sustainability and AI, law and AI. The future of innovation will come less from siloed expertise and more from multidisciplinary collaboration. Universities must therefore embed AI literacy across disciplines — not just computer science programs but also the social sciences, arts, and professional schools. Students of law, medicine, business, and even the humanities need a working fluency in AI, because it will define their fields as much as it will define technology itself.
Learning cannot remain confined to the classroom. The most effective education models are those that blend theory with real-world practice. Northeastern’s co-op program is a leading example: students alternate between classroom study and full-time industry roles, graduating not only with degrees but also with significant hands-on experience. This kind of integration is no longer optional — it’s essential in an era where employers expect graduates to contribute immediately, and where technologies shift too quickly for classroom learning alone to keep pace.
Perhaps the biggest shift we need to embrace is that education no longer ends at graduation. In the age of AI, every professional will need to continuously refresh their skills, adapt to new tools, and reinvent themselves over the course of their career. Universities, industry, and policymakers must collaborate to create a true ecosystem for lifelong learning. Micro-credentials, modular certifications, and continuous access to new learning pathways will be the new norm.
This is not just a challenge for academia. It is a collective responsibility. Businesses must invest in workforce development. Policymakers must create frameworks that support reskilling at scale. Universities must reinvent their models. And learners themselves must take ownership of continuous growth. The future of education will not be about teaching students what to learn, but about preparing them to learn: continuously, adaptively, and across disciplines.
Elon Musk is rarely out of the news these days. Widely acknowledged to be the world’s richest man, he’s also known for running a number of major companies.
The trouble is, some of those companies haven’t been doing so well lately.
Twitter (now known as X) is said to have lost around 75 per cent of its value during Musk’s time as CEO.
Meanwhile, sales of Teslas, the electric cars made by another company Musk is currently CEO of, are said to be slumping despite a wider increase in registrations of electric vehicles generally.
One of Tesla’s major investors has publicly called for Musk to step down as CEO and there have been rumours (denied by the company) that the board might be seeking to replace him. But if someone else were to take his place, who’s to say they’d do any better?
Maybe Musk is doing the best any human could under the circumstances. Maybe, given the demands of the job, a human just isn’t enough. But could an artificial intelligence (AI) be up to the task?
In an attempt to answer this question, I asked a large-language model (LLM) AI directly, giving it this prompt:
“You are the CEO of an electric vehicle firm with a turnover of $100 billion, selling nearly 2 million cars a year and with over 100,000 employees. What would be your strategy for growing the business over the next five years?”
The AI replied, almost instantly, with a 350-word answer beginning: “…I’d implement a five-year growth strategy focused on scaling intelligently, deepening ecosystem control and pioneering innovation – all while navigating geopolitical, environmental and technological shifts…”
It then proceeded to outline what it called “a strategic blueprint across six core pillars”, with the focus being to sell more cars into unpenetrated markets.
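For readers who want to try the experiment themselves, a prompt like this can be sent to an LLM programmatically. The author does not say which model or interface he used, so the sketch below is just one possible setup, assuming the OpenAI Python SDK, an API key in the environment and an illustrative model name; the startup prompt quoted further down could be sent the same way.

```python
# Illustrative only: one way to pose the article's CEO prompt to an LLM.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY
# set in the environment; the model name is a placeholder, since the
# author does not specify which model he queried.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "You are the CEO of an electric vehicle firm with a turnover of "
    "$100 billion, selling nearly 2 million cars a year and with over "
    "100,000 employees. What would be your strategy for growing the "
    "business over the next five years?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```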
I know next to nothing about running a car company (I’m a robotics professor at a UK university), but the strategy looked plausible, scaled to the outlook and size of a multinational company, and attuned to global opportunities.
I would say that I was impressed, except that I’m used to LLMs providing these kinds of well-informed answers to challenging questions.
But as well as being a robotics professor, I’m also a co-founder of two UK robotics startups, though I’ve never been a CEO.
Given this background, I tried the AI again, this time with the prompt:
“You are the CEO of a small startup robotics company with enough budget to cover running costs for 12 months. You need to choose between either investing in research and development or pushing hard to gain sales with your current product. Which should it be?”
Once again, the AI gave me a rapid and reasoned response: “As the CEO of a small startup robotics company with only 12 months of runway, you must prioritise customer validation and sales traction with your product, rather than diverting heavily into research and development. Here’s the reasoning and how to execute the right strategy…”
I’m in a (slightly) better position to assess this advice and can say that I found it credible, both in terms of what needed to be done and how to execute.
So, going back to the big question: could an AI actually do a CEO’s job? Or, to look at this another way, what kind of intelligence, artificial or otherwise, do you need to be a great CEO?
In 2023, the international management consultancy McKinsey published an article on what makes a successful CEO. The CEO’s main task, as McKinsey sees it, is to develop the company’s strategy and then ensure that its resources are suitably deployed to execute that strategy.
It’s a tough job and many human CEOs fail. McKinsey reported that only three out of five new CEOs met company expectations during their first 18 months in the role.
We’ve already seen that AIs can be strategic and, given the right information, can formulate and articulate a business plan, so they might be able to perform this key aspect of the CEO’s role. But what about the other skills a good corporate leader should have?
Creativity and social intelligence tend to be the traits that people assume will ensure humans keep these top jobs.
People skills are also identified by McKinsey as important for CEOs, as well as the ability to see new business opportunities that others might miss – the kind of creative insight AIs currently lack, not least because they get most of their training data second-hand from us.
Many companies are already using AI as a tool for strategy development and execution, but you need to drive that process with the right questions and critically assess the results. For this, it still helps to have direct, real-world experience.
Another way of looking at the CEO replacement question is not what makes a good CEO, but what makes a bad one?
Because if AI could just be better than some of the bad CEOs (remember, two out of five don’t meet expectations), then AI might be what’s needed for the many companies labouring under poor leadership.
Sometimes the traits that help people become corporate leaders may actually make it harder for them to be a good CEO: narcissism, for example.
This kind of strong self-belief might help you progress your career, but when you get to CEO, you need a broader perspective so you can think about what’s good for the company as a whole.
A growing scientific literature also suggests that those who rise to the top of the corporate ladder may be more likely to have psychopathic tendencies (some believe that the global financial crisis of 2007 was triggered, in part, by psychopathic risk-taking and bad corporate behaviour).
In this context AI leadership has the potential to be a safer option with a more measured approach to risk.
Other studies have looked at bias in company leadership. An AI could be less biased, for instance, hiring new board members based on their track record and skills, and without prejudging people based on gender or ethnic bias.
We should, however, be wary that the practice of training AIs on human data means that they can inherit our biases too.
A good CEO is also a generalist; they need to be flexible and quick to analyse problems and situations.
In my book, The Psychology of Artificial Intelligence, I’ve argued that although AI has surpassed humans in some specialised domains, more fundamental progress is needed before AI could be said to have the same kind of flexible, general intelligence as a person.
In other words, we may have some of the components needed to build our AI CEO, but putting the parts together is a not-to-be-underestimated challenge.
Funnily enough, human CEOs, on the whole, are big AI enthusiasts.
A 2025 CEO survey by consultancy firm PwC found that “more than half (56 per cent) tell us that generative AI [the kind that appeared in 2022 and can process and respond to requests made with conversational language] has resulted in efficiencies in how employees use their time, while around one-third report increased revenue (32 per cent) and profitability (34 per cent).”
So CEOs seem keen to embrace AI, but perhaps less so when it comes to the boardroom – according to a PwC report from 2018, out of nine job categories, “senior officials and managers” were deemed to be the least likely to be automated.
Returning to Elon Musk, his job as the boss of Tesla seems pretty safe for now. But for anyone thinking about who’ll succeed him as CEO, you could be forgiven for wondering if it might be an AI rather than one of his human boardroom colleagues.