
Tools & Platforms

Alpha Modus Awarded New U.S. Patent Strengthening Its AI

CORNELIUS, N.C., July 09, 2025 (GLOBE NEWSWIRE) — Alpha Modus Holdings, Inc. (NASDAQ: AMOD), a pioneer in AI-powered retail technology and data-driven innovation, is pleased to announce the issuance of U.S. Patent No. 12,354,121, effective July 8, 2025. This patent strengthens Alpha Modus’s intellectual property position in the fast-evolving in-store technology space, particularly in areas related to real-time shopper engagement, digital signage, and autonomous retail optimization.

AI Patent Granted

The patent, co-invented by Alpha Modus Director Michael Garel and Jim Wang, underscores the company’s long-term commitment to advancing retail intelligence platforms that bridge the gap between physical stores and AI-driven decisioning engines.

“This patent issuance not only solidifies our leadership in AI for physical retail environments, but it also directly supports our near-term deployment initiatives,” said Chris Chumas, Chief Sales Officer at Alpha Modus. “Since joining the team just over a month ago, Tim Matthews has dramatically expanded our enterprise sales pipeline—bringing in multi-million and even nine-figure opportunities that we expect to begin rolling out in the near future. The timing of this patent could not be better.”

This milestone follows a series of aggressive steps Alpha Modus has taken to enforce and monetize its robust patent portfolio, including high-profile litigation and licensing negotiations with major retailers and technology integrators.

Alpha Modus is now leveraging this newly issued patent to further strengthen its licensing discussions and to protect ongoing and upcoming product rollouts with the select Fortune 500 partners with which the Company has been engaging.

To view the full patent, please visit the USPTO Patent Center and search for U.S. Patent No. 12,354,121, or visit https://alphamodus.com/what-we-do/patent-portfolio/.

For more information on Alpha Modus Holdings Inc., visit https://alphamodus.com.


About Alpha Modus Holdings Inc.
Alpha Modus Holdings Inc. (NASDAQ: AMOD) is redefining how retailers connect with customers through its AI-powered platform that transforms in-store environments into intelligent, responsive experiences. With a strong patent portfolio and rapidly expanding enterprise pipeline, Alpha Modus is positioned to lead the next generation of physical retail innovation.

For more information and to access Alpha Modus’ press room, visit: https://alphamodus.com/press-room/

Forward-Looking Statements
This press release includes “forward-looking statements” within the meaning of the “safe harbor” provisions of the United States Private Securities Litigation Reform Act of 1995. Alpha Modus’s actual results may differ from their expectations, estimates, and projections, and, consequently, you should not rely on these forward-looking statements as predictions of future events. Words such as “expect,” “estimate,” “project,” “budget,” “forecast,” “anticipate,” “intend,” “plan,” “may,” “will,” “could,” “should,” “believes,” “predicts,” “potential,” “continue,” and similar expressions (or the negative versions of such words or expressions) are intended to identify such forward-looking statements, but are not the exclusive means of identifying these statements. These forward-looking statements include, without limitation, Alpha Modus’s expectations with respect to future performance.

Alpha Modus cautions readers not to place undue reliance upon any forward-looking statements, which speak only as of the date made. Alpha Modus does not undertake or accept any obligation or undertaking to release publicly any updates or revisions to any forward-looking statements to reflect any change in its expectations or any change in events, conditions, or circumstances on which any such statement is based.

Contact Information

Investor Relations
Alpha Modus Holdings, Inc.
Email: ir@alphamodus.com
Website: www.alphamodus.com


A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/e06a61ec-fac1-4fea-b6b4-0da1c1d593f1.




Empowering, not replacing: A positive vision for AI in executive recruiting

Published

on


Image courtesy of Terri Davis

Tamara is a thought leader in Digital Journal’s Insight Forum.


“So, the biggest long‑term danger is that, once these artificial intelligences get smarter than we are, they will take control — they’ll make us irrelevant.” — Geoffrey Hinton, Godfather of AI

Modern AI often feels like a threat, especially when the warnings come from the very people building it. Sam Altman, the salesman behind ChatGPT (not an engineer, but the face of OpenAI and someone known for convincing investors), has said with offhand certainty, as casually as ordering toast or predicting the sun will rise, that entire categories of jobs will be taken over by AI. That includes roles in health, education, law, finance, and HR.

Some companies now won’t hire people unless AI fails at the given task, even though these models hallucinate, invent facts, and make critical errors. They’re replacing people with a tool we barely understand.

Even leaders in the field admit they don’t fully understand how AI works. In May 2025, Dario Amodei, CEO of Anthropic, said the quiet part out loud:

“People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned. This lack of understanding is essentially unprecedented in the history of technology.”

In short, no one is fully in control of AI. A handful of Silicon Valley technocrats have appointed themselves arbiters of the direction of AI, and they work more or less in secret. There is no real government oversight. They are developing without any legal guardrails. And those guardrails may not arrive for years, by which time they may be too late to have any effect on what’s already been let out of Pandora’s Box. 

So we asked ourselves: Using the tools available to us today, why not model something right now that can in some way shape the discussion around how AI is used? In our case, this is in the HR space. 

What if AI didn’t replace people, but instead helped companies discover them?

Picture a CEO in a post-merger fog. She needs clarity, not another résumé pile. Why not introduce her to the precise leader she didn’t know she needed, using AI? 

Instead of turning warm-blooded professionals into collateral damage, why not use AI, thoughtfully, ethically, and practically, to solve the problems that now exist across HR, recruitment, and employment?

An empathic role for AI

Most job platforms still rely on keyword-stuffed resumés and keyword-matching algorithms. As a result, excellent candidates often get filtered out simply for using the “wrong” terms. That’s not just inefficient; it’s malpractice, hurting companies and candidates alike. It’s an example of technology poorly applied, yet it is the norm today.
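To make the failure mode concrete, here is a purely illustrative sketch (not any real platform's algorithm) of how exact-keyword filtering rejects a qualified candidate, and how even a crude concept-level match avoids that. The synonym table and all names here are invented for illustration.

```python
# Illustrative only: exact-keyword screening vs. a crude concept-level match.
# The posting asks for "customer success"; the candidate wrote "client
# retention" -- the same competency, expressed in different words.

def keyword_match(resume: str, required_terms: list[str]) -> bool:
    """Pass only if every required term appears verbatim in the resume."""
    text = resume.lower()
    return all(term.lower() in text for term in required_terms)

# A toy synonym table standing in for any richer semantic representation.
SYNONYMS = {
    "customer success": {"customer success", "client retention"},
    "churn reduction": {"churn reduction", "reduced churn", "cut churn"},
}

def concept_match(resume: str, required_terms: list[str]) -> bool:
    """Pass if each required concept is expressed in ANY known phrasing."""
    text = resume.lower()
    return all(
        any(alias in text for alias in SYNONYMS.get(term, {term}))
        for term in required_terms
    )

resume = "Led client retention programs that cut churn 30% across 4 regions."
required = ["customer success", "churn reduction"]

print(keyword_match(resume, required))  # False: filtered out despite a strong fit
print(concept_match(resume, required))  # True: the same resume passes
```

The point isn't the synonym table (real systems would use embeddings or richer profiles); it's that the screening criterion, not the candidate, determines the outcome.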

Imagine instead a platform that isn’t keyword-driven, one that guides candidates through discovery to create richer, more dimensional profiles showcasing the unique strengths, instincts, and character that shape real-world impact. This would go beyond skillsets and job titles to the deeper personal qualities that differentiate equally experienced candidates, producing a better fit between leadership candidate and role.

One leader, as an example, may bring calm decisiveness in chaos. Another may excel at building unity across silos. Another might be relentless at rooting out operational bloat and uncovering savings others missed.

A system like this, one that helps uncover those traits, guides candidates to articulate them clearly, and discreetly learns about each candidate to offer thoughtful, evolving insights, would cast AI as an advocate, not a gatekeeping nemesis.

For companies, this application would reframe job descriptions around outcomes, not tasks. Instead of listing qualifications, the tool helps hiring teams articulate what they’re trying to achieve, whether growth, turnaround, post-M&A integration, or cost efficiency, and then finds the most suitable candidate match.

Fairness by design

Bias is endemic in HR today: ageism, sexism, and discrimination based on disability and race. Imagine a platform that actively discourages bias. Gender, race, age, and even profile photos are optional. Unlike most recruiting platforms, the system doesn’t reward those who include a photo, and it doesn’t penalize those who don’t know how to game a résumé.

Success then becomes about alignment. Deep expertise. Purposeful outcomes.

This design gives companies what they want: competence. And gives candidates what they want: a fair chance.

This is more than an innovative way to use current AI technology. It’s a value statement about prioritizing people.

Why now

We’re at an inflection point.

Researchers like Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean forecast in AI 2027 that superhuman AI (AGI, then superintelligence) will bring changes in the next decade more disruptive than the Industrial Revolution.

If they’re even a little right, then the decisions being made today by a small circle in Silicon Valley will affect lives everywhere.

It’s important to step into the conversation now to help shape AI’s real-world role. The more human-centred, altruistic, practical uses of AI we build and model now, the more likely these values will help shape laws, norms, and infrastructure to come.

This is a historic moment. How we use AI now will shape the future. 

People-first design

Every technology revolution sparks fear, but this one is unique: it’s the first since the Industrial Revolution in which machines are being designed with the explicit goal of replacing people. Entire roles and careers may vanish.

But that isn’t inevitable either. It’s a choice. 

AI can be built to assist, not erase. It can guide a leader to their next opportunity. It can help a CEO find a partner who unlocks transformation. It can put people out front, not overshadow them. 

We invite others in talent tech and AI to take a similar stance. Let’s build tools for people. Let’s avoid displacement and instead elevate talent. Let’s embed honesty, fairness, clarity, and alignment in everything we make. 

We don’t control the base models. But we do control how we use them. And how we build with them.

AI should amplify human potential, not replace it. That’s the choice I’m standing behind. 




ABA ethics opinion addresses jury selection discrimination from consultants and AI technology



When using peremptory challenges, lawyers should not strike jurors based on discrimination, according to an ethics opinion by the ABA’s Standing Committee on Ethics and Professional Responsibility.

That also applies to client directives, as well as guidance from jury consultants or AI software, according to Formal Opinion 517, published Wednesday.

Such conduct violates Model Rule 8.4(g), which prohibits harassment and discrimination in the practice of law based on “race, sex, religion, national origin, ethnicity, disability, age, sexual orientation, gender identity, marital status or socioeconomic status.”

A lawyer does not violate Rule 8.4(g) by exercising peremptory challenges on a discriminatory basis where not forbidden by other law, according to the opinion.

The U.S. Supreme Court explained that such conduct violates the Equal Protection Clause of the 14th Amendment in Batson v. Kentucky (1986) and J.E.B. v. Alabama ex rel. T.B. (1994). In Batson, a lawyer struck a series of Black jurors in a criminal trial. In J.E.B., a lawyer struck a series of male jurors in a paternity and child support action.

The ethics opinion addresses when a Batson-type violation also constitutes professional misconduct under Rule 8.4(g).

Seemingly, if a lawyer commits such a violation, the lawyer also runs afoul of Rule 8.4(g). After all, in both settings the lawyer has engaged in a form of racial discrimination.

“Striking prospective jurors on discriminatory bases in violation of substantive law governing juror selection is not legitimate advocacy. Conduct that has been declared illegal by the courts or a legislature cannot constitute ‘legitimate advocacy,’” the ethics opinion states.

However, Comment [5] to the model rule provides that a trial judge’s finding of a Batson violation does not, by itself, establish a violation of Rule 8.4(g).

The comment, according to the ethics opinion, gives “guidance on the evidentiary burden in a disciplinary proceeding.”

For example, in a disciplinary hearing a lawyer may be able to offer “a more fulsome explanation” for why they struck certain jurors. Furthermore, there is a “higher burden of proof” in lawyer discipline proceedings.

The ethics opinion also explains that a lawyer violates Rule 8.4(g) only if they know or reasonably should have known that the exercise of the peremptory challenges was unlawful. The lawyer may genuinely believe they had legitimate, nondiscriminatory reasons for striking certain jurors, such as their age or whether they paid attention during the jury selection process.

According to the opinion, the question then centers on “whether ‘a lawyer of reasonable prudence and competence’ would have known that the challenges were impermissible.”

The opinion also addresses the difficult question of what happens when a client or jury consultant offers nondiscriminatory reasons for striking certain jurors and the lawyer follows that advice. Here, a reasonably competent and prudent lawyer should know whether the client’s or jury consultant’s reasons were pretextual or legitimate.

Additionally, the opinion addresses a scenario in which an AI program ranks prospective jurors and, unknown to the lawyer, applies those rankings in a discriminatory manner. Lawyers should use “due diligence to acquire a general understanding of the methodology employed by the juror selection program,” the opinion states.

A July 9 ABA press release is here.






Big Tech, NYC teachers union join forces in new AI initiative that’s drawing concerns


A new partnership between New York City’s teachers union and Big Tech companies has some educators wondering whether they’re at the forefront of improving instruction through artificial intelligence or welcoming a Trojan horse that threatens learning.

The American Federation of Teachers, the umbrella organization for the local United Federation of Teachers union, announced Tuesday it’s teaming up with Microsoft, OpenAI and Anthropic on a $23 million initiative to offer free AI training and software to AFT members. The investment, which is being covered by the companies, includes creating a new training space dubbed the “National Center for AI” on a floor of the UFT headquarters in Lower Manhattan.

UFT President Michael Mulgrew said at a press conference that some of his union’s educators started trainings this month, adding that the initiative will expand nationally over the next year. The initiative is aimed at K-12 teachers, is voluntary and focuses on tasks like lesson planning, according to the union and companies. AI can summarize texts and create worksheets and assessments.

“This tool could truly be a great gift to the children of this country and to education overall,” Mulgrew said. “But we’re not going to get there unless it’s driven by the people doing the work in the most important place in education, which is the classroom.”

Some teachers said they are skeptical about the initiative. Jia Lee, a special education teacher at the Earth School in the East Village, likened the arrangement to “letting the fox in the henhouse” and said she was “horrified” to see the union linking arms with the tech companies.

“I think a lot of educators would say we’re not anti-AI, we just have concerns about a lot of things that have not been explained or researched yet,” Lee said.

City education officials have sent mixed signals about integrating AI in classrooms. The local education department initially blocked OpenAI tool ChatGPT in schools in 2023, then lifted the ban. Schools spokesperson Nicole Brownstein said the agency is working on a “framework” for AI use, but declined to comment on the union’s new initiative.

Gerry Petrella, Microsoft’s general manager for U.S. policy, said the partnership would help the company figure out how to integrate AI into education “in a responsible and safe way.” He said he hoped AI tools would save teachers time so they could focus more on students and their individual needs.

National surveys show the technology is already creeping into students’ and teachers’ lives. A Harvard University survey last fall found half of high-school and college students use AI for some schoolwork, while a new Gallup poll found 60% of teachers reported using AI at some point over the past school year.

Annie Read Boyle, a fourth-grade teacher at P.S. 276 in Battery Park, said she hasn’t used AI much but is impressed with what she’s seen so far. Last year, she used a product called Diffit when she was teaching about the American Revolution.

“I said, ‘I want an article that’s fourth-grade level,’ and in 10 seconds [it] spit out this beautiful worksheet that would’ve taken me hours to create,” she said. “I was like, ‘Wow, this is really impressive and it just saved me so much time.’”

Boyle said she could imagine similar tools differentiating assignments based on students’ learning styles, abilities or language. Still, she cited concerns about data privacy, copyright infringement in materials and encouraging students to take shortcuts instead of developing critical-thinking skills.

“It’s such an important tool for teachers to know how to use so that we can teach the kids, but it could really hurt the development process for kids,” she said, adding that she is also concerned about AI’s environmental impact and potential to drive job loss.

AFT President Randi Weingarten said Tuesday she hoped to learn from past mistakes involving technology, including social media’s harms on young people’s mental health. She said the union’s partnership with tech companies is a way to influence how AI is used with children.

“We can also make sure we have the guardrails we need to protect the safety and security of kids,” said Weingarten, whose union includes 1.8 million members nationwide. “That is now becoming our job. … We have to have a phone line back to [tech hub] Seattle.”


