Alpha Modus Holdings, Inc. (NASDAQ: AMOD), a pioneer in AI-powered retail technology and data-driven innovation, is pleased to announce the issuance of U.S. Patent No. 12,354,121, effective July 8, 2025. This patent strengthens Alpha Modus’s intellectual property position in the fast-evolving in-store technology space, particularly in areas related to real-time shopper engagement, digital signage, and autonomous retail optimization.
The patent, co-invented by Alpha Modus Director Michael Garel and Jim Wang, underscores the company’s long-term commitment to advancing retail intelligence platforms that bridge the gap between physical stores and AI-driven decisioning engines.
“This patent issuance not only solidifies our leadership in AI for physical retail environments, but it also directly supports our near-term deployment initiatives,” said Chris Chumas, Chief Sales Officer at Alpha Modus. “Since joining the team just over a month ago, Tim Matthews has dramatically expanded our enterprise sales pipeline—bringing in multi-million and even nine-figure opportunities that we expect to begin rolling out in the near future. The timing of this patent could not be better.”
This milestone follows a series of aggressive steps Alpha Modus has taken to enforce and monetize its robust patent portfolio, including high-profile litigation and licensing negotiations with major retailers and technology integrators.
Alpha Modus is now leveraging this newly issued patent to further strengthen its licensing discussions and to protect ongoing and upcoming product rollouts with select Fortune 500 partners with which the Company has been engaging.
To view the full patent, please visit the USPTO Patent Center and search for U.S. Patent No. 12,354,121, or visit https://alphamodus.com/what-we-do/patent-portfolio/.
For more information on Alpha Modus Holdings Inc., visit https://alphamodus.com.
Noninvasive brain tech is transforming how people interact with robotic devices. Instead of relying on muscle movement, this technology allows a person to control a robotic hand by simply thinking about moving their fingers.
No surgery is required.
Instead, a set of sensors is placed on the scalp to detect brain signals. These signals are then sent to a computer. As a result, this approach is safe and accessible. It opens new possibilities for people with motor impairments or those recovering from injuries.
A woman wearing non-invasive brain technology (Carnegie Mellon University)
How noninvasive brain tech turns thought into action
Researchers at Carnegie Mellon University have made significant progress with noninvasive brain technology. They use electroencephalography (EEG) to detect the brain’s electrical activity when someone thinks about moving a finger. Artificial intelligence, specifically deep learning algorithms, then decodes these signals and translates them into commands for a robotic hand. In their study, participants managed to move two or even three robotic fingers at once, just by imagining the motion. The system achieved over 80% accuracy for two-finger tasks. For three-finger tasks, accuracy was over 60%. All of this happened in real time.
Achieving separate movement for each robotic finger is a real challenge. The brain areas responsible for finger movement are small. Their signals often overlap, which makes it hard to distinguish between them. However, advances in noninvasive brain technology and deep learning have made it possible to pick up on these subtle differences.
The research team used a neural network called EEGNet. They fine-tuned it for each participant. Because of this, the system allowed for smooth, natural control of the robotic fingers. The movements closely matched how a real hand works.
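The pipeline described above (record EEG, extract features, decode them into finger commands) can be illustrated with a toy sketch. This is not the Carnegie Mellon team's code: it substitutes synthetic EEG-like data and a simple band-power, nearest-centroid classifier for EEGNet, purely to show the decode loop's shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def band_power(window):
    """Mean squared amplitude per channel -- a crude stand-in for EEG features."""
    return (window ** 2).mean(axis=1)

def make_trial(label):
    """Synthetic 8-channel, 64-sample EEG window for one of two imagined movements.
    Each class boosts power on different channels, mimicking overlapping motor areas."""
    x = rng.normal(0, 1.0, size=(8, 64))
    x[label * 2:(label * 2) + 2] *= 3.0   # class-specific channel emphasis
    return x

# "Training": store the mean feature vector (centroid) per class.
train = [(make_trial(c), c) for c in (0, 1) for _ in range(50)]
centroids = {
    c: np.mean([band_power(x) for x, lbl in train if lbl == c], axis=0)
    for c in (0, 1)
}

def decode(window):
    """Map an EEG window to a finger command via nearest class centroid."""
    feats = band_power(window)
    return min(centroids, key=lambda c: np.linalg.norm(feats - centroids[c]))

# Evaluate on fresh synthetic trials.
test = [(make_trial(c), c) for c in (0, 1) for _ in range(25)]
acc = np.mean([decode(x) == y for x, y in test])
print(f"decoding accuracy: {acc:.2f}")
```

The real system replaces the centroid step with a deep network fine-tuned per participant, which is what lets it separate the subtle, overlapping signals the article describes.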
A robotic finger being controlled by non-invasive brain technology (Kurt “CyberGuy” Knutsson)
Why noninvasive brain tech matters for everyday life
For people with limited hand function, even small improvements can make a huge difference. Noninvasive brain technology eliminates the need for surgery because the system is external and easy to use. In addition, this technology provides natural and intuitive control. It enables a person to move a robotic hand by simply thinking about the corresponding finger movements.
The accessibility of noninvasive brain technology means it can be used in clinics, in homes, and by a wide range of people. For example, it enables participation in everyday tasks, such as typing or picking up small objects, that might otherwise be difficult or impossible to perform. This approach can benefit stroke survivors and people with spinal cord injuries. It can also help anyone interested in enhancing their abilities.
What’s next for noninvasive brain tech?
While the progress is exciting, there are still challenges ahead. Noninvasive brain technology needs to improve even further at filtering out noise and adapting to individual differences. However, with ongoing advances in deep learning and sensor technology, these systems are becoming more reliable and easier to use. Researchers are already working to expand the technology for more complex tasks.
As a result, assistive robotics could soon become a part of more homes and workplaces.
Illustration of how the noninvasive brain technology works (Carnegie Mellon University)
Kurt’s key takeaways
Noninvasive brain technology is opening up possibilities that once seemed out of reach. The idea of moving a robotic hand just by thinking about it could make daily life easier and more independent for many people. As researchers continue to improve these systems, it will be interesting to see how this technology shapes the way we interact with the world around us.
If you had the chance to control a robotic hand with your thoughts, what would you want to try first? Let us know by writing us at Cyberguy.com/Contact
Copyright 2025 CyberGuy.com. All rights reserved.
Kurt “CyberGuy” Knutsson is an award-winning tech journalist with a deep love of technology, gear and gadgets that make life better. He contributes to Fox News & FOX Business, beginning mornings on “FOX & Friends.” Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.
What was once a world of elves, dragons and power-ups is now giving rise to one of South Korea’s most unexpected tech revolutions, with game studios taking their place alongside Big Tech in the race for AI dominance.
The country’s gaming heavyweights are increasingly shedding their image as pure entertainment companies and positioning themselves as AI-first tech firms, expanding far beyond the virtual battlegrounds into sectors such as fashion, media and even robotics.
Facing a slowing gaming market and rising development costs, game developers and publishers such as NCSOFT Corp., Nexon Co. and Krafton Inc. are leveraging their proprietary AI tools and massive gameplay data troves to build new growth engines, applying gaming-derived machine intelligence to real-world industries.
“We’re no longer just competing for players’ time, but for a stake in the future of applied AI,” said an executive at a domestic game firm.
(Graphics by Daeun Lee)
FROM MMORPGs TO 3D MODELS, FASHION AI
Few illustrate this transition better than NCSOFT, which in February spun off its AI division into a standalone subsidiary, NC AI.
The unit is set to launch Varco 3D at the end of July – a software tool that can generate high-quality 3D characters using nothing more than text or image prompts.
The product will be offered via a software-as-a-service (SaaS) model and targets users far beyond traditional game development, from virtual influencers to digital fashion brands, according to company officials.
The move follows NCSOFT’s development in 2023 of Varco, Korea’s first large language model (LLM) developed by a game company.
The company now provides Varco Art Fashion, an AI-powered tool that generates apparel designs and visual prototypes. The tool has already been adopted by 10 leading fashion firms, halving new product development times, according to NCSOFT.
Throne and Liberty (Courtesy of NCSOFT)
“We see an opportunity to disrupt the fashion and content production pipelines using tools originally built for game development,” said an NC AI official.
The company also provides generative engines to media firms, allowing for automatic content production and editing.
PREDICTING THE NEXT BIG HIT, OR MISS
Nexon, which owns game-developing studio Nexon Games Co., is taking a different path: using AI to forecast the commercial success of upcoming games.
At the Nexon Developers Conference (NDC25) last month, the firm unveiled its Game Success Prediction AI, designed to sift through early gameplay patterns and metadata to identify breakout potential.
“Sometimes, high-quality games are overlooked,” said Oh Jin-wook, head of Nexon’s Intelligence Labs Group. “AI can help uncover hidden gems, allowing us to take more creative risks.”
His argument is backed by data.
According to global gaming platform Steam, 84% of titles released on its platform last year failed to even register meaningful sales.
Nexon said AI can help de-risk game development by offering early signals from pre-launch user testing.
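Nexon has not disclosed how its Game Success Prediction AI works. As a rough illustration only, a predictor over early playtest signals could be sketched as a weighted score squashed to a probability; every feature name, weight, and offset below is invented for the example.

```python
import math

def breakout_score(metrics):
    """Toy 'breakout potential' score from early-playtest metrics (0..1).

    All features and weights are hypothetical -- this only illustrates the
    idea of turning pre-launch signals into a single de-risking number.
    """
    weights = {
        "day7_retention": 2.5,     # share of testers still playing after a week
        "median_session_min": 0.05,
        "referral_rate": 3.0,      # testers inviting friends
        "crash_rate": -4.0,        # stability problems hurt the score
    }
    z = sum(weights[k] * metrics.get(k, 0.0) for k in weights) - 1.5
    return 1.0 / (1.0 + math.exp(-z))

promising = {"day7_retention": 0.45, "median_session_min": 38,
             "referral_rate": 0.20, "crash_rate": 0.01}
weak = {"day7_retention": 0.05, "median_session_min": 6,
        "referral_rate": 0.01, "crash_rate": 0.12}

print(breakout_score(promising) > breakout_score(weak))
```

A production model would learn such weights from historical launches rather than hand-pick them, but the output plays the same role: an early signal of which titles deserve continued investment.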
TAKING AI INTO THE PHYSICAL REALM
Krafton, best known for PlayerUnknown’s Battlegrounds (PUBG), is taking AI into the physical realm.
In April, Krafton Chief Executive Kim Changhan met with Nvidia CEO Jensen Huang to discuss collaboration on humanoid robotics, building on their previous partnership to co-develop non-player character AI.
Krafton recently launched a Physical AI team, tasked with adapting in-game character AI for robotic applications. The goal: to use virtual intelligence as the foundation for real-world robotic “brains.”
Unlike software AI such as ChatGPT, physical AI focuses on decision-making for physical tasks such as picking up or moving objects.
ESCAPING THE GAMING RUT
Analysts said at the heart of this AI pivot is a strategic response to a cooling domestic gaming market.
Rising development costs and a lack of global blockbusters have dragged down growth.
According to the Korea Creative Content Agency, the nation’s gaming user rate fell to a record low of 59.9% in 2024.
The threat isn’t just rival games – it’s YouTube, TikTok and other attention-gobbling apps.
Dungeon & Fighter Mobile is a title by Nexon
Nexon Games CEO Park Yong-hyun named non-gaming platforms as the biggest threat to the gaming industry.
According to mobile analytics firm Mobile Index, Koreans spent over 140 minutes a day on YouTube as of March, outpacing daily game playtime by a wide margin.
Experts say Korean game developers are uniquely positioned to scale into the broader AI economy.
“Games are structured, interactive ecosystems with clear rules and goals, perfect for developing and testing AI models,” said Wi Jong-hyun, president of the Korea Game Society and a professor at Chung-Ang University. “It’s only natural that these companies are now leading Korea’s AI transition.”
“The London Market is not broken,” Prince explained. “It needs a steady hand to implement new technologies, such as AI, to enhance the way insurance operates. Efficiency and accuracy can replace manual processes and human error. Brokers and carriers will, as a consequence, have a much smoother experience when doing business in the London insurance market.”