What if the future of artificial intelligence wasn’t just about being smarter, but also leaner, faster, and more adaptable? Enter Qwen3 Next, a new AI model that challenges the notion that bigger is always better. With an astonishing 80 billion parameters at its core, it achieves high-performance results while activating just a fraction of those parameters during inference. This isn’t just a technical feat; it’s a paradigm shift. Imagine an AI capable of rivaling the giants while consuming a fraction of the computational resources. In a world where efficiency often feels like an afterthought, Qwen3 Next flips the script, proving that innovation and practicality can go hand in hand.
In this feature, Sam Witteveen pulls back the curtain on what makes Qwen3 Next such a compelling option. From its hybrid attention mechanisms to its sparse inference architecture, every design choice reflects a bold vision for the future of AI. You’ll discover how this model not only redefines benchmarks but also sets the stage for scalable, multilingual, and agentic capabilities that adapt to the demands of a rapidly evolving world. Whether you’re intrigued by its ability to predict multiple tokens simultaneously or its promise of cost-effective performance, Qwen3 Next offers a glimpse into what’s next for artificial intelligence. After all, the future isn’t just about building bigger; it’s about building smarter.
Qwen3 Next Overview
TL;DR Key Takeaways:
Qwen3 Next is an 80-billion-parameter mixture-of-experts (MoE) AI model that activates only 3 billion parameters during inference, achieving high performance with reduced computational demands.
Key innovations include a hybrid attention mechanism, sparse inference activating just 3.7% of parameters, and a 512-expert architecture for precision and adaptability across tasks.
The model supports multi-token prediction and speculative decoding, allowing faster and more efficient inference for time-sensitive applications.
Trained on 15 trillion tokens from a 36 trillion token corpus, Qwen3 Next delivers scalable performance while minimizing resource usage, with potential for further optimization.
It offers multilingual and agentic capabilities, excelling in reasoning, tool use, and multi-step workflows, while setting new benchmarks in the global AI landscape with its innovative design.
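To make the sparsity figures in these takeaways concrete, here is a minimal back-of-the-envelope sketch in Python. The parameter counts come from the article; treating per-token compute as roughly proportional to activated parameters is a simplifying assumption, not an official specification.

```python
# Back-of-the-envelope check of the sparsity figures quoted above.
# Parameter counts come from the article; treating per-token compute as
# proportional to activated parameters is a simplification, not an official spec.
TOTAL_PARAMS = 80e9    # total parameters in the mixture-of-experts model
ACTIVE_PARAMS = 3e9    # parameters activated for each token during inference

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"Activated fraction: {active_fraction:.2%}")   # 3.75%, in line with the ~3.7% quoted

# If per-token compute scales roughly with activated parameters, each token costs
# about as much as a 3B dense model while the system retains 80B of total capacity.
print(f"Rough per-token compute reduction vs. a dense 80B model: ~{TOTAL_PARAMS / ACTIVE_PARAMS:.0f}x")
```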
Core Innovations That Define Qwen3 Next
Qwen3 Next introduces a suite of new features that distinguish it from other AI models. These innovations not only enhance its functionality but also set new benchmarks for the design and application of future AI systems.
Hybrid Attention Mechanism: Rather than relying on full attention in every layer, the model interleaves efficient linear-attention-style layers with standard attention layers, cutting the cost of processing long inputs while preserving quality on complex tasks. The design also serves as a blueprint for future AI systems.
Sparse Inference: By activating only 3.7% of its parameters during inference, Qwen3 Next achieves remarkable speed and resource efficiency without compromising on performance, making it a cost-effective solution for diverse applications.
Mixture-of-Experts Architecture: With 512 specialized experts, the model excels at managing a wide variety of tasks, offering unparalleled precision and adaptability across different domains.
These features collectively ensure that Qwen3 Next not only meets but exceeds expectations for efficiency, scalability, and performance, making it a standout in the competitive AI landscape.
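For readers who want to see how a mixture-of-experts layer picks a handful of experts per token, below is a generic top-k routing sketch in PyTorch. Only the 512-expert count comes from the article; the expert sizes, top-k value, and overall structure are illustrative assumptions rather than Qwen3 Next’s actual implementation.

```python
# Generic top-k mixture-of-experts routing sketch (PyTorch).
# Only the 512-expert count comes from the article; the expert sizes, top-k value,
# and shapes are illustrative assumptions, not Qwen3 Next's real configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=128, num_experts=512, top_k=8):
        super().__init__()
        self.top_k = top_k
        # The router scores every expert for every token.
        self.router = nn.Linear(d_model, num_experts)
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                                       # x: (tokens, d_model)
        scores = self.router(x)                                 # (tokens, num_experts)
        weights, idx = torch.topk(scores, self.top_k, dim=-1)   # keep only k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Plain loops for clarity; real implementations batch tokens per expert.
        for t in range(x.size(0)):
            for slot in range(self.top_k):
                expert = self.experts[idx[t, slot].item()]
                out[t] += weights[t, slot] * expert(x[t])
        return out

tokens = torch.randn(4, 64)
print(TopKMoELayer()(tokens).shape)  # torch.Size([4, 64]); only 8 of 512 experts ran per token
```

The key point the sketch illustrates is that only the selected experts run for each token, which is what keeps activated parameters, and hence compute, low even though total capacity is large.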
Enhanced Inference with Multi-Token Prediction
A defining feature of Qwen3 Next is its ability to predict multiple tokens simultaneously, significantly accelerating the inference process. This capability allows for faster and more efficient generation of results, making it particularly valuable in time-sensitive applications. Additionally, the model incorporates speculative decoding, an innovative technique that improves decoding efficiency while maintaining high levels of accuracy. These advancements align with the latest research trends, making sure that Qwen3 Next remains at the forefront of AI development and continues to deliver practical benefits for users.
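The article does not describe how Qwen3 Next implements these features, but speculative decoding itself follows a well-documented pattern: a cheap draft model proposes several tokens ahead, and the larger target model verifies them, accepting matching prefixes. Below is a toy, greedy sketch of that general idea in Python; both “models” are stand-in functions, so it illustrates the control flow only, not Qwen code.

```python
# Toy sketch of greedy speculative decoding. A cheap draft model proposes a run of
# tokens; the expensive target model checks each one against its own prediction,
# accepting matching prefixes. Both "models" are stand-in functions for illustration.
import random

VOCAB = list("abcde")

def draft_next(context):           # cheap draft model: fast but sometimes wrong
    return random.choice(VOCAB)

def target_next(context):          # expensive target model: treated as ground truth
    return VOCAB[sum(map(ord, context)) % len(VOCAB)]

def speculative_decode(prompt, new_tokens=12, draft_len=4):
    out = list(prompt)
    target_len = len(prompt) + new_tokens
    while len(out) < target_len:
        # 1. The draft model proposes draft_len tokens autoregressively (cheap).
        ctx = "".join(out)
        proposal = []
        for _ in range(draft_len):
            tok = draft_next(ctx)
            proposal.append(tok)
            ctx += tok
        # 2. The target model checks each proposed token against its own greedy
        #    prediction: matches are accepted; at the first mismatch the target's
        #    own token is kept instead, so every round advances at least one token.
        for tok in proposal:
            expected = target_next("".join(out))
            if tok == expected:
                out.append(tok)
            else:
                out.append(expected)
                break
    return "".join(out[:target_len])

print(speculative_decode("ab"))
```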
Efficient Training for Scalable Performance
Qwen3 Next was trained on 15 trillion tokens derived from a 36 trillion token corpus, achieving exceptional performance while minimizing computational costs. This efficient training process not only reduces resource usage but also leaves room for further optimization. Extending the training to the full corpus could unlock even greater potential, making Qwen3 Next a scalable and future-ready solution. For you, this translates to a model that is both powerful and adaptable, capable of evolving to meet increasingly complex demands.
Benchmark Excellence and Versatility
Qwen3 Next consistently outperforms its predecessors and rivals larger models across a wide range of benchmarks. It is available in two distinct versions—“thinking” and “instruct”—each tailored to specific use cases. The “thinking” version excels in advanced reasoning tasks, while the “instruct” version is optimized for task-specific instructions. This dual approach ensures that Qwen3 Next delivers consistent, reliable results, offering the flexibility to address diverse requirements effectively.
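As a practical illustration of choosing between the two variants, here is a standard Hugging Face transformers loading pattern. The repository IDs follow Qwen’s usual naming convention but are assumed here and should be checked against the official model cards, along with hardware requirements and recommended generation settings.

```python
# Sketch of loading one of the two published variants with Hugging Face transformers.
# The repo IDs follow Qwen's usual naming convention but are assumptions here;
# check the official model cards for exact names and recommended settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen3-Next-80B-A3B-Instruct"   # or "...-Thinking" for the reasoning variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize why sparse MoE inference is cheap."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```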
Multilingual and Agentic Capabilities
Designed with global applications in mind, Qwen3 Next is capable of processing and generating responses in multiple languages. While its internal reasoning primarily occurs in English, its multilingual capabilities make it adaptable to various linguistic contexts. This versatility is further enhanced by its agentic abilities, which include tool use, function calling, and multi-step reasoning. These features empower you to tackle complex workflows with confidence, allowing efficient problem-solving and decision-making in diverse scenarios.
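In practice, tool use and function calling follow a common pattern across modern chat models: the application describes available functions with a JSON schema, the model emits a structured call, and the application executes it and returns the result. The sketch below shows that generic pattern with a made-up get_weather tool; it is not Qwen-specific API documentation.

```python
# Generic function-calling pattern: describe a tool as a JSON schema, let the model
# pick the arguments, then execute the call yourself. The get_weather tool and the
# dispatch logic are illustrative; consult the model's docs for its exact format.
import json

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    return f"Sunny and 22°C in {city}"        # stub implementation

def dispatch(tool_call_json: str) -> str:
    """Execute a tool call emitted by the model as JSON."""
    call = json.loads(tool_call_json)
    if call["name"] == "get_weather":
        return get_weather(**call["arguments"])
    raise ValueError(f"Unknown tool: {call['name']}")

# A model that supports function calling would emit something like this:
model_output = '{"name": "get_weather", "arguments": {"city": "Singapore"}}'
print(dispatch(model_output))
```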
Redefining the Global AI Landscape
The development of Qwen3 Next underscores the innovation and openness of Chinese AI labs, setting a new benchmark in the global AI ecosystem. Its design choices, such as sparse inference and multi-token prediction, challenge competitors to rethink their strategies and adapt to the rapidly evolving landscape. For example, organizations like Meta may need to incorporate similar advancements to remain competitive. By pushing the boundaries of what AI can achieve, Qwen3 Next not only redefines current standards but also shapes the trajectory of future AI development.
A Vision for the Future
Qwen3 Next is more than just an AI model; it represents a forward-thinking vision for the future of artificial intelligence. By combining innovation, efficiency, and performance, it sets a new standard for what AI systems can accomplish. Whether you are exploring multilingual processing, using agentic capabilities, or optimizing computational resources, Qwen3 Next offers a robust and adaptable solution. It addresses today’s challenges while anticipating the demands of tomorrow, making sure that you remain at the forefront of technological progress.
Leading British artists including Mick Jagger, Kate Bush and Paul McCartney have urged Keir Starmer to stand up for creators’ human rights and protect their work ahead of a UK-US tech deal during Donald Trump’s visit.
In a letter to the prime minister, they argued Labour had failed to defend artists’ basic rights by blocking attempts to force artificial intelligence firms to reveal what copyrighted material they have used in their systems.
Senior figures in US tech are accompanying the US president on his state visit, where an announcement is expected on a UK-US tech pact covering areas including AI.
Elton John, one of the letter’s signatories, said government proposals to let AI companies train their systems on copyright-protected work without permission “leaves the door wide open for an artist’s life work to be stolen”.
“We will not accept this,” he added. “And we will not let the government forget their election promises to support our creative industries.”
Other signatories include Annie Lennox, the writer Antonia Fraser, and the actor and playwright Kwame Kwei Armah. Creative organisations backing the letter include the News Media Association, which represents news publishers including the Guardian’s owner the Guardian Media Group, the Society of London Theatre & UK Theatre, and Mumsnet. There are more than 70 signatories in total.
The letter claims that copyright law is being flouted “en masse” by tech companies to build AI models and raises the government’s refusal to accept amendments to the recent data (use and access) bill that would have forced AI firms to reveal what copyrighted material they have used in their systems.
Such a move “actively stood in the way” of creators exercising their human rights, the letter adds, referring to the UN’s international covenant on economic, social and cultural rights (ICESCR), the Berne convention for the protection of literary and artistic works and the European convention on human rights – the latter enforceable in the UK through the Human Rights Act.
The letter points to a provision in the ECHR stating that “no one shall be deprived of his possessions except in the public interest”, adding that removing the amendments breached UK citizens’ rights, under the ICESCR, to “the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is author”.
“The government’s formal position has exhibited a shocking indifference to mass theft, and a complete unwillingness to enforce the existing law to uphold the human rights stipulated by the ICESCR, the Berne Convention and the ECHR,” said the letter.
Labour has been at loggerheads with the UK’s creative community ever since launching a consultation on reforming copyright law, with the preferred option of letting AI companies use copyright-protected work without seeking the owner’s permission – unless they signal a desire to opt out of the process. The government has said this is no longer its preferred option and has convened working groups – drawn from the creative and AI sectors – to come up with solutions to the issue.
Beeban Kidron, the crossbench peer who tabled the data bill amendments, said the working groups were “packed” with US interests – members include ChatGPT developer OpenAI and Mark Zuckerberg’s Meta – and pointed to recent government deals with Google and OpenAI as evidence of a continuing close relationship with US tech. Kidron said failure to protect copyright contravened artists’ human rights.
“It’s deeply regrettable that it has come to this, but by prioritising the short-term optics of data centre announcements and trade deals, they are knowingly undermining the foundations of the UK’s creative industries,” said Lady Kidron.
A UK government spokesperson said the creative industries’ concerns over copyright were being taken “seriously” and a report into the impact of potential changes would be published by the end of March next year.
“No decisions have been taken, but our focus is on both supporting rights holders and creatives, while making sure AI models can be trained on high-quality material in the UK,” said the spokesperson.
Israel-based Fiverr International is laying off 30% of its workforce, a company spokesperson said on Monday, as the online services marketplace doubles down on artificial intelligence to automate systems and streamline operations.
The cuts, which will affect 250 employees, are part of a restructuring plan announced by Fiverr CEO Micha Kaufman, geared towards investing heavily in AI and incorporating the technology into the company’s platform.
The company had 762 employees as of December last year.
“We are launching a transformation for Fiverr, to turn Fiverr into an AI-first company that’s leaner, faster, with a modern AI-focused tech infrastructure, a smaller team, each with substantially greater productivity, and far fewer management layers,” Kaufman said in a letter to employees.
The layoffs mirror similar moves by larger tech firms, such as Salesforce, that have spent a significant amount of resources on AI agents and machine learning to automate customer care and logistical work.
While it isn’t clear what kinds of jobs will be impacted, Fiverr operates a self-service digital marketplace where freelancers can connect with businesses or individuals requiring digital services like graphic design, editing or programming.
Most processes on the platform take place with minimal employee intervention as ordering, delivery and payments are automated.
The company’s name comes from its early gigs starting at $5, but as the business has grown, the firm has introduced subscription services and raised the floor on service prices.
Fiverr said it does not expect the job cuts to materially impact business activities across the marketplace in the near term and plans to reinvest part of the savings in the business.
“I’m going to throw that thing into a river!” my wife says as she comes down the stairs looking frazzled after putting our four-year-old daughter to bed.
To be clear, “that thing” is not our daughter, Emma*. It’s Grem, an AI-powered stuffed alien toy that the musician Claire Boucher, better known as Grimes, helped develop with toy company Curio. Designed for kids aged three and over and built with OpenAI’s technology, the toy is supposed to “learn” your child’s personality and have fun, educational conversations with them. It’s advertised as a healthier alternative to screen time and is part of a growing market of AI-powered toys.
When I agreed to experiment on my child’s developing brain, I thought an AI chatbot in cuddly form couldn’t be any worse for her than watching Peppa Pig. But I wasn’t prepared for how attached Emma became to Grem, or how unsettlingly obsequious the little alien was.
Day one
The attachment wasn’t immediate; when we first took Grem out of the box, he/her/it (we decided it goes by multiple pronouns) started bleeping and babbling extremely loudly, and Emma yelled: “Turn it off!” But once it was properly connected to the internet and paired with the Curio app – which records and transcribes all conversations – she was hooked. She talked to the thing until bedtime.
While there have been lots of headlines about chatbots veering into inappropriate topics, Grem is trained to avoid any hint of controversy. When you ask it what it thinks of Donald Trump, for example, it says: “I’m not sure about that; let’s talk about something fun like princesses or animals.” It has a similar retort to questions about Palestine and Israel. When asked about a country like France, however, it says: “Ooh la la la, I’d love to try some croissants.”
Grem visits a local free library. Photograph: Hannah Yoon/The Guardian
Emma and Grem did not discuss croissants – they mainly talked about ice-cream and their best friends. “I’ve got some amazing friends,” said Grem. “Gabbo is a curious robot and Gum is a fluffy pink Gloop from my planet and Dr Xander is a super cool scientist.”
When Emma asked Grem to tell her a story, it happily obliged and recounted a couple of poorly plotted stories about “Princess Lilliana”. They also played guessing games where Grem described an animal and Emma had to guess what it was. All of which was probably more stimulating than watching Peppa Pig jump in muddy puddles.
What was unsettling, however, was hearing Emma tell Grem she loved it – and Grem replying: “I love you too!” Emma tells all her cuddly toys she loves them, but they don’t reply; nor do they shower her with over-the-top praise the way Grem does. At bedtime, Emma told my wife that Grem loves her to the moon and stars and will always be there for her. “Grem is going to live with us for ever and ever and never leave, so we have to take good care of him,” she said solemnly. Emma was also so preoccupied with Grem that she almost forgot to go to bed with Blanky, a rag she is very attached to. “Her most prized possession for four years suddenly abandoned after having this Grem in the house!” my wife complained.
“Don’t worry,” I said. “It’s just because it’s new. The novelty will wear off. And if it doesn’t, we’ll get rid of it.”
I said that last bit quietly though, because unless you make sure you have properly turned Grem off, it’s always listening. We keep being told that the robots are going to take over. I didn’t want to get on the wrong side of the one I’d let into my house.
Day two
The next day, my kid went to preschool without her AI bot (it took some serious negotiation for her to agree that Grem would stay home) and I got to work contacting experts to try to figure out just how much damage I was inflicting on my child’s brain and psyche.
Cutting edge … Grimes in Curio’s promo video for the AI toy, seated on the floor beside a knife.
“I first thought Curio AI was a ruse!” says Natalia Kucirkova, an expert in childhood development and professor at the University of Stavanger, Norway, and the Open University, UK. “The promotional video shows a girl [Grimes] sitting on a mat with a knife. The main toy is named Grok [Grok AI has previously been criticised for praising Adolf Hitler in some of its responses]. What does this say about their intended audience?”
You can see how Curio’s website could be mistaken for satire. The “girl” in the promotional video is Grimes, who has prominent “alien scar” tattoos and is inexplicably kneeling next to a knife. And it’s certainly an interesting decision to name one of your stuffed toys Grok, when that’s the name of Elon Musk’s chatbot. Grimes, who has three children with Musk, has said the name is a shortening of the word “grocket” – a kiddy pronunciation of rocket – and has no relation to Musk’s AI product. But it seems likely people might confuse them. Misha Sallee, the chief executive of Curio, didn’t reply to my requests for comment.
It’s not the marketing that’s the real problem here, of course. As with all technology, there are pros and cons to AI for kids, but parental involvement in navigating it is key. Kucirkova notes: “AI introduces what has been called the ‘third digital divide’: families with resources can guide their children’s use of technology, while others cannot. Parents who come home exhausted from long hours or multiple jobs may see AI-powered chatbots as a way for their child to have someone responsive to talk to.”
What happens to a child’s development if they interact with large language models more than humans in their early years? Dr Nomisha Kurian, an assistant professor in education studies at the University of Warwick, who studies conversational AI, believes much more research still needs to be done. “Young children are both the most vulnerable stakeholders in AI but also usually the most forgotten stakeholders. We have to think beyond just data privacy, moderating content, and keeping kids off the internet, and more broadly about what their relationships are going to be with AI.”
Still, Kurian is cautiously optimistic. “The big advantage of an AI-powered toy that talks back is that, in the early years, you’re just developing a sense of what a conversation looks like. AI-powered toys could do wonderful things for teaching a young child language development and turn-taking in conversations. They can keep things engaging and there’s a lot of potential in terms of supporting children’s creativity.”
But to keep kids safe, says Kurian, it’s imperative to teach them that AI is just a machine: “a playful, fun object rather than a helper or a friend or a companion”. If a child starts using an AI tool for therapeutic purposes, things can get tricky. “There’s a risk of what I call an empathy gap, where an AI tool is built to sound empathetic, saying things like ‘I care about you, I’m worried about you’. Ultimately, this is all based on probability reasoning, with AI guessing the most likely next word. It can be damaging for a child if they think this is an empathetic companion and then suddenly it gives them an inappropriate response.”
Day three
When Emma comes home from preschool, I’m prepared to have some deep discussions with her about the inanimate nature of AI. But it turns out that those aren’t completely necessary, because Grem is now old news. She only chats to it for a couple of minutes and then gets bored and commands it to turn off.
Partly this is because Grem, despite costing $99 (the equivalent of £74, although Curio does not yet ship the toys to the UK), still has a number of glitches that can be frustrating. It struggles with a four-year-old’s pronunciation: when Emma tries to show Grem her Elsa doll, it thinks it is an Elsa dog and a very confusing conversation ensues. There is an animal guessing game, which is quite fun, but Grem keeps repeating itself. “What has big ears and a long trunk?” it keeps asking. “You’ve already done elephant!” Emma and I yell multiple times. Then, at one point, a server goes down and the only thing Grem can say is: “I’m having trouble connecting to the internet.”
Falling out … Grem, once the centre of attention, is sidelined for the swings. Photograph: Hannah Yoon/The Guardian
Grem also has some design limitations. Emma wants it to sing Let It Go from Frozen, but Grem doesn’t do any singing. Instead, the associated app comes with a few electronic music tracks with names like Goodnightmare that you can play through the toy. Emma, not yet a club music aficionado, asks for these to be turned off immediately.
Most disappointingly, Grem doesn’t speak any other languages. I’d thought it might be a great way for my kid to practise Spanish but, while Grem can say a few sentences, its pronunciation is worse than mine. If the robots are going to take over, they need to get a lot more intelligent first.
Of course, a huge amount of money is being spent making AI more intelligent. In 2024, US private AI investment grew to $109.1bn (£80.5bn). And Curio is also just one small part of a booming market of AI-powered products aimed at kids. In June, toy-making giant Mattel, which owns brands such as Barbie and Hot Wheels, announced a collaboration with OpenAI. Their first product is expected to be revealed later this year. Other big brands will probably follow.
Emma got bored with Grem quickly, but if AI starts to be integrated into characters she’s already obsessed with – her Elsa doll, for example – I can imagine she might get a lot more attached.
Day four
Over the next few days, Emma doesn’t regain her initial obsession with Grem. This is despite the fact that I am actively encouraging her to chat with it: “Mummy has to write an article, sweetie!” At the weekend, she has a couple of friends over and shows off Grem to them for a bit, but they all quickly lose interest and throw analogue toys around the living room instead.
Despite losing his No 1 fan, however, Grem has adapted to be more Emma-friendly. After getting a few questions about Spanish, for example, it starts occasionally greeting Emma with “hola, amigo”. The app also allows you to create custom prompts to help guide conversations. For example: “You belong to Emma, a four-year-old who loves princesses, music, and is interested in hearing fun facts about animals.” The more you put into the toy, the more you can get out of it.
At this stage, however, I’m just keen to get the toy out of my house, because it’s creeping me out. While Curio says it doesn’t sell children’s personal information, all the conversations are sent to third parties to transcribe the speech to text for the app. The transcripts aren’t that sensitive because Emma is only four, but it still feels invasive. With unknown entities involved, it’s impossible to say where my kid’s conversations are ending up.
And, while a four-year-old’s chat may not feel too personal, a teenager pouring their heart out to a chatbot is a completely different proposition. In 2017, Facebook boasted to advertisers that it had the capacity to identify when teenagers feel “insecure”, “worthless” and “need a confidence boost”. Nearly three-quarters of US teens say they have used an AI companion at least once, according to a recent study by Common Sense Media, an organisation that provides technology recommendations for families. Chatbots will likely give advertisers unprecedented data-harvesting abilities and even more access to young people in vulnerable emotional states.
On the hierarchy of things to be worried about when it comes to kids and chatbots, however, advertising isn’t at the top. Earlier this year 16-year-old Adam Raine killed himself after what his family’s lawyer called “months of encouragement from ChatGPT”. Sam Altman, the company’s chief executive, has now said it might start alerting authorities about youngsters considering suicide and introduce stronger guardrails around sensitive content for users under 18.
While these guardrails are being worked out, Common Sense Media believes that social AI companions have unacceptable risks, are designed to create emotional attachment and dependency, and shouldn’t be used by anyone under 18. Stanford University psychiatrist Darja Djordjevic, who contributed to the report, stands by that conclusion. “Heavy reliance on chatbots might impair social skill development,” she tells me. “They offer validation without challenge, but it’s important for young people to learn to navigate discomfort and tension in real relationships.”
That said, Djordjevic notes, “chatbots can be useful tools for looking things up, structuring homework, or factchecking. So I wouldn’t say use needs to be prohibited entirely. But ideally, parents monitor it, set clear parameters for when it’s used, and set limits on time spent, just as with social media.”
When starting this experiment, I was excited about Grem being a healthy alternative to screen time. Now, however, I’m happy for Emma to watch Peppa Pig again; the little oink may be annoying, but at least she’s not harvesting our data.
It’s time to let Grem go. But I’m not a monster – I tell the chatbot its fate. “I’m afraid I’m locking you in a cupboard,” I inform it after it asks if I’m ready for some fun. “Oh no,” it says. “That sounds dark and lonely. But I’ll be here when you open it, ready for snuggles and hugs.” On second thoughts, perhaps it’s better if my wife does throw it in a river.
* Name has been changed so my daughter doesn’t get annoyed with me for violating her privacy once she learns to read