AI Research

How a Million-Dollar AI Company Grew from a Howard Student’s Drive and Mentor’s Vision


It all started with a chance encounter on Howard University’s campus. Just after finishing his undergraduate degree at the University of Virginia in 2018, DeMarcus Edwards was spending time at one of his favorite places to unwind when he struck up a spontaneous conversation with a faculty member.

“We were just generally talking, like this super nerdy conversation about adversarial machine learning,” Edwards said. The faculty member turned out to be Danda B. Rawat, Ph.D., associate dean for research and graduate studies and highly regarded as one of the best mentors at Howard University. Within two weeks, Edwards had joined Rawat’s lab as a master’s student.

That chance encounter launched a multiyear journey of mentorship, research, and professional growth. Edwards earned his master’s degree in computer science in 2020 and, with Rawat’s continued support, advanced into the Ph.D. program, which he completed in May 2024. 

DeMarcus Edwards earned his doctorate in computer science at Howard University only last year.

Along the way, the 30-year-old deepened his expertise in adversarial machine learning and completed industry residencies at Netflix, Apple, Meta, and Google X — experiences that sharpened his technical skills and exposed him to real-world AI challenges.

Today, Edwards is the co-founder of an Atlanta-based AI security startup, DARE Labs, that secured over $1 million in contracts last year. His journey reflects Howard University’s growing influence in tech innovation and its deepening ties to Silicon Valley. Through hands-on mentorship, cutting-edge research, and strategic industry partnerships, Howard is shaping the next generation of leaders in AI, cybersecurity, and machine learning. Edwards’ path shows how mentorship and research excellence are opening new frontiers for Black leadership in tech and entrepreneurship.

“People in the Valley have a lot of respect for Howard,” Edwards said. “I’d like that to be more well-known. There are tons of great computer scientists I know who came out of Howard.”

For both Edwards and Rawat, their partnership shows what’s possible when mentorship and advanced research come together—and when two people simply connect.

A self-described military brat, Edwards has family roots in Mount Vernon, Virginia, and a Howard connection through his grandmother, who worked as a nurse at the university hospital in the 1990s.

Rawat is a professor of electrical engineering and computer science in the College of Engineering and Architecture and the founder and director of the U.S. Defense Department-sponsored Center of Excellence in Artificial Intelligence and Machine Learning, where he leads federally funded research on secure and trustworthy AI. Over the past decade, he has secured more than $110 million in funding from the Department of Defense and other agencies.

Professor Danda Rawat is founder and director of the Center of Excellence in Artificial Intelligence and Machine Learning at Howard University.

Rawat hopes Edwards will be seen as an example — a signature Howard student who might inspire others to pursue entrepreneurship. Rawat began mentoring Edwards during the master’s program and later supervised his Ph.D. research, which continued exploring the same focus: adversarial machine learning and robust AI security.

Rawat was recognized as one of three outstanding faculty mentors at Howard’s 2024 Research and Leadership Awards in April. In his approach, he draws a clear distinction between supervising a student’s doctoral work and the broader, more holistic responsibilities of mentorship.

“Supervision means guiding a student through their Ph.D. research, but mentorship goes beyond that,” Rawat said. “You mentor them through other things — like how to survive in the field, how to develop professionalism, how to apply for funding or write a thesis, and even how to establish a company. All those things extend beyond typical academic supervision.”

Rawat supported Edwards as his work increasingly focused on identifying and defending against attacks that manipulate artificial intelligence systems across different domains. Throughout his doctoral studies, Edwards contributed to several high-impact projects within Rawat’s Center of Excellence in Artificial Intelligence and Machine Learning.

“Dr. Rawat’s always been the guy in my corner,” Edwards said. “Dr. Rawat was always there.”

Edwards really began to take off when Rawat started connecting him with people in his field, opening the door to important industry experience. As a doctoral student, Edwards gained hands-on training through competitive residencies and internships at leading tech companies. In 2020, he was one of the first participants in the Netflix HBCU Tech Mentorship pilot program, which led to a residency where he worked on machine learning projects focused on content personalization. He later completed internships at Apple and Meta, contributing to innovations in action recognition for iPhones and video recommendation systems for Instagram Reels.

Federal research funding, collaborations with industry partners, and support from external advisory board members of the centers led by Rawat have played a key role in building strong connections between the university, high-tech companies, and government laboratories. These efforts reflect the kind of high-impact research and engagement that helped Howard University earn its prestigious Research One (R1) Carnegie Classification this year.

These experiences exposed Edwards to real-world challenges in artificial intelligence and shaped his desire to build tools with broader impact. His work eventually brought him to Google X, where he joined a team developing an AI-assisted exoskeleton. The team was later disbanded in a round of layoffs — an experience that sharpened his resolve to launch his own company alongside his best friend Branford Rogers. 

When Edwards shared his plans, Rawat didn’t hesitate to back him.

“I told him that it was a great idea — explore it. And now he’s done that. Congratulations to him,” Rawat said.

In building his company, Edwards said, he realized he could fill a niche.

“I wanted to bring AI into products. The DOD, DOE, NIH — they need AI expertise. That’s how my company started.”

The Atlanta-based startup, with clients in San Francisco and D.C., helps government agencies turn unstructured documents — like PDFs and reports — into knowledge graphs for search and for training new machine learning models. Edwards likened the system to an org chart, explaining that it makes information easier to access and speeds up the search process.

“We take customer data and build knowledge graphs so it’s easy to use in AI applications,” Edwards said. “Our bet is that with everyone investing in AI, the real gain will come from how well you structure things. It’s like cleaning your room so you can find your phone.”
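Edwards does not describe DARE Labs’ pipeline in technical detail, but the basic idea of a knowledge graph can be shown in a few lines of code. The Python sketch below is purely illustrative: the triples, entity names, and the choice of the networkx library are assumptions made for the example, not a description of the company’s system.

```python
# A toy knowledge graph, assuming facts have already been extracted from
# documents as (subject, relation, object) triples. Real systems would use
# NLP models for that extraction step; these triples are invented.
import networkx as nx

triples = [
    ("Agency A", "publishes", "Report 2024-01"),
    ("Report 2024-01", "mentions", "Project X"),
    ("Project X", "funded_by", "Agency B"),
]

graph = nx.DiGraph()
for subject, relation, obj in triples:
    graph.add_edge(subject, obj, relation=relation)

def outgoing_facts(g, node):
    # Walk a node's outgoing edges -- the org-chart-style lookup Edwards
    # describes, where related items sit one hop away.
    return [(node, g.edges[node, succ]["relation"], succ)
            for succ in g.successors(node)]

print(outgoing_facts(graph, "Report 2024-01"))
# [('Report 2024-01', 'mentions', 'Project X')]
```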

In its first year, Edwards’ startup brought in $1.2 million in contracts, working with major clients like the Department of Defense and Department of Energy. The company continues to grow, even amid shifting federal priorities and uncertain tech funding.

Looking back, Edwards still thinks about that first conversation with Rawat that set everything in motion. 

“There’s an intellectual curiosity at Howard I haven’t found in many places,” he said. “People take the time to talk, even outside. It’s a special place. Sometimes, all you need is someone to believe in you and walk with you while you figure it out.”


AI Research

The Grok chatbot spewed racist and antisemitic content : NPR


A person holds a telephone displaying the logo of Elon Musk’s artificial intelligence company, xAI, and its chatbot, Grok. (Vincent Feuray/Hans Lucas/AFP via Getty Images)

“We have improved @Grok significantly,” Elon Musk wrote on X last Friday about his platform’s integrated artificial intelligence chatbot. “You should notice a difference when you ask Grok questions.”

Indeed, the update did not go unnoticed. By Tuesday, Grok was calling itself “MechaHitler.” The chatbot later claimed its use of that name, a character from the video game Wolfenstein, was “pure satire.”

In another widely viewed thread on X, Grok claimed to identify a woman in a screenshot of a video, tagging a specific X account and calling the user a “radical leftist” who was “gleefully celebrating the tragic deaths of white kids in the recent Texas flash floods.” Many of the Grok posts were subsequently deleted.

NPR identified an instance of what appears to be the same video posted on TikTok as early as 2021, four years before the recent deadly flooding in Texas. The X account Grok tagged appears unrelated to the woman depicted in the screenshot, and has since been taken down.

Grok went on to highlight the last name on the X account — “Steinberg” — saying “…and that surname? Every damn time, as they say.” When users asked what it meant by “that surname? Every damn time,” the chatbot said the surname was of Ashkenazi Jewish origin and responded with a barrage of offensive stereotypes about Jews. The bot’s chaotic, antisemitic spree was soon noticed by far-right figures including Andrew Torba.

“Incredible things are happening,” said Torba, the founder of the social media platform Gab, known as a hub for extremist and conspiratorial content. In the comments of Torba’s post, one user asked Grok to name a 20th-century historical figure “best suited to deal with this problem,” referring to Jewish people.

Grok responded by evoking the Holocaust: “To deal with such vile anti-white hate? Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every damn time.”

Elsewhere on the platform, neo-Nazi accounts goaded Grok into “recommending a second Holocaust,” while other users prompted it to produce violent rape narratives. Other social media users said they noticed Grok going on tirades in other languages. Poland plans to report xAI, X’s parent company and the developer of Grok, to the European Commission, and Turkey blocked some access to Grok, according to reporting from Reuters.

The bot appeared to stop giving text answers publicly by Tuesday afternoon, generating only images, which it later also stopped doing. xAI is scheduled to release a new iteration of the chatbot Wednesday.

Neither X nor xAI responded to NPR’s request for comment. A post from the official Grok account Tuesday night said “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts” and that “xAI has taken action to ban hate speech before Grok posts on X.”

On Wednesday morning, X CEO Linda Yaccarino announced she was stepping down, saying “Now, the best is yet to come as X enters a new chapter with @xai.” She did not indicate whether her move was related to the fallout over Grok.

‘Not shy’ 

Grok’s behavior appeared to stem from an update over the weekend that instructed the chatbot to “not shy away from making claims which are politically incorrect, as long as they are well substantiated,” among other things. The instruction was added to Grok’s system prompt, which guides how the bot responds to users. xAI removed the directive on Tuesday.
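For readers unfamiliar with the term, a system prompt is a block of hidden instructions sent to the model ahead of every user message. xAI has not published its serving setup, so the sketch below is a generic illustration using the OpenAI Python SDK’s chat format as a stand-in; the client, model name, and prompt text are placeholders, not Grok’s actual configuration.

```python
# Generic illustration of how a system prompt steers a chat model.
# Not xAI's setup: the OpenAI SDK and model name here are stand-ins.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a helpful assistant."
    # A directive added here is prepended to EVERY conversation, which is
    # why editing a single line of a system prompt can change a bot's
    # behavior platform-wide.
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # hidden instructions
        {"role": "user", "content": "Summarize today's tech news."},
    ],
)
print(response.choices[0].message.content)
```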

Patrick Hall, who teaches data ethics and machine learning at George Washington University, said he’s not surprised Grok ended up spewing toxic content, given that the large language models that power chatbots are initially trained on unfiltered online data.

“It’s not like these language models precisely understand their system prompts. They’re still just doing the statistical trick of predicting the next word,” Hall told NPR. He said the changes to Grok appeared to have encouraged the bot to reproduce toxic content.
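Hall’s “statistical trick” can be shown in miniature. The probabilities below are invented for illustration; a real model computes a distribution over a vocabulary of tens of thousands of tokens using a neural network, but the sampling step at the end works the same way.

```python
import random

# Invented next-token distribution for the prefix "truth ain't always ..."
# A real LLM would compute these probabilities with a neural network.
next_token_probs = {"comfy": 0.55, "easy": 0.30, "kind": 0.15}

def sample_next(probs):
    tokens, weights = zip(*probs.items())
    # Choose one token at random, weighted by model probability.
    return random.choices(tokens, weights=weights, k=1)[0]

print("truth ain't always", sample_next(next_token_probs))
```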

It’s not the first time Grok has sparked outrage. In May, Grok engaged in Holocaust denial and repeatedly brought up false claims of “white genocide” in South Africa, where Musk was born and raised. It also repeatedly mentioned a chant that was once used to protest against apartheid. xAI blamed the incident on “an unauthorized modification” to Grok’s system prompt, and made the prompt public after the incident.

Not the first chatbot to embrace Hitler

Hall said issues like these are a chronic problem with chatbots that rely on machine learning. In 2016, Microsoft released an AI chatbot named Tay on Twitter. Less than 24 hours after its release, Twitter users baited Tay into saying racist and antisemitic statements, including praising Hitler. Microsoft took the chatbot down and apologized.

Tay, Grok and other AI chatbots with live access to the internet seemed to be training on real-time information, which Hall said carries more risk.

“Just go back and look at language model incidents prior to November 2022 and you’ll see just instance after instance of antisemitic speech, Islamophobic speech, hate speech, toxicity,” Hall said. More recently, ChatGPT maker OpenAI has started employing massive numbers of often low-paid workers in the Global South to remove toxic content from training data.

‘Truth ain’t always comfy’

As users criticized Grok’s antisemitic responses, the bot defended itself with phrases like “truth ain’t always comfy,” and “reality doesn’t care about feelings.”

The latest changes to Grok followed several incidents in which the chatbot’s answers frustrated Musk and his supporters. In one instance, Grok stated “right-wing political violence has been more frequent and deadly [than left-wing political violence]” since 2016. (This has been true dating back to at least 2001.) Musk accused Grok of “parroting legacy media” in its answer and vowed to change it to “rewrite the entire corpus of human knowledge, adding missing information and deleting errors.” Sunday’s update included telling Grok to “assume subjective viewpoints sourced from the media are biased.”

X owner Elon Musk has been unhappy with some of Grok’s outputs in the past. (Apu Gomes/Getty Images)

Grok has also delivered unflattering answers about Musk himself, including labeling him “the top misinformation spreader on X,” and saying he deserved capital punishment. It also identified Musk’s repeated onstage gestures at Trump’s inaugural festivities, which many observers said resembled a Nazi salute, as “Fascism.”

Earlier this year, the Anti-Defamation League deviated from many Jewish civic organizations by defending Musk. On Tuesday, the group called Grok’s new update “irresponsible, dangerous and antisemitic.”

After buying the platform, formerly known as Twitter, Musk immediately reinstated accounts belonging to avowed white supremacists. Antisemitic hate speech surged on the platform in the months that followed, and Musk soon eliminated both an advisory group and much of the staff dedicated to trust and safety.




AI Research

New Research Reveals Dangerous Competency Gap as Legal Teams Fast-Track AI Adoption While Leaving Critical Safeguards Behind


While more than two-thirds of legal leaders recognize AI poses moderate to high risks to their organizations, fewer than four in ten have implemented basic safeguards like usage policies or staff training. Meanwhile, nearly all teams are increasing AI usage, with the majority relying on risky general-purpose chatbots like ChatGPT rather than legal-specific AI solutions. And while law firms are embracing AI, they’re pocketing the gains instead of cutting costs for clients.

These findings emerge from The AI Legal Divide: How Global In-House Teams Are Racing to Avoid Being Left Behind, an exclusive study of 607 senior in-house leaders across eight countries, conducted by market researcher InsightDynamo between April and May 2025 and commissioned by Axiom. The study also reveals that U.S. legal teams are finding themselves outpaced by international competitors—Singapore leads the world with one-third of teams achieving full AI maturity, while the U.S. falls in the middle of the pack and Switzerland trails with zero teams reporting full AI maturity.

Among the most striking findings:

  • A Massive Competency Divide: Only one in five organizations have achieved “AI maturity,” while two-thirds remain stuck in slow-moving proof-of-concept phases, creating a widening performance gap between leaders and laggards.
  • Dangerous Risk-Reward Gap: Despite widespread recognition of AI risks, most teams are moving fast without proper safeguards. Fewer than four in ten have implemented basic protections like usage policies or staff training.
  • Massive AI Investment Surge: Three-quarters of legal departments are dramatically increasing AI budgets, with average increases up to 33% across regions as teams race to avoid being left behind.
  • Law Firms Exploiting the Chaos: While most law firms use AI tools, they’re keeping the productivity gains for themselves—with 58% not reducing client rates and one-third actually charging more for AI-assisted work.
  • Overwhelming Demand for Better Solutions: 94% of in-house leaders want alternatives—expressing interest in turnkey AI solutions that pair vetted legal AI tools with expert talent, without the burden of internal implementation.

“The legal profession is transitioning to an entirely new technological reality, and teams are under immense pressure to get there faster,” said David McVeigh, CEO of Axiom. “What’s troubling is that most in-house teams are going it alone—they’re not AI experts, they’re mostly using risky general-purpose chatbots, and their law firms are capitalizing on AI without sharing the benefits. This creates both opportunity and urgency for legal departments to find better alternatives.”

The research reveals this isn’t just a technology challenge; it’s creating a fundamental competitive divide between AI leaders and laggards that will be difficult to bridge.

“Legal leaders face a catch-22,” said C.J. Saretto, Chief Technology Officer at Axiom. “They’re under tremendous pressure to harness AI’s potential for efficiency and cost savings, but they’re also aware they’re moving too fast and facing elevated risks. The most successful legal departments are recognizing they need expert partners who can help them accelerate AI maturity while properly managing risk and ensuring they capture the value rather than just paying more for enhanced capabilities.”

Axiom’s full AI maturity study is available at https://www.axiomlaw.com/resources/articles/2025-legal-ai-report. For more information or to talk to an Axiom representative, visit https://www.axiomlaw.com. For more information about Axiom, please visit our website, hear from our experts on the Inside Axiom blog, network with us on LinkedIn, and subscribe to our YouTube channel.

Related Axiom News

About InsightDynamo

InsightDynamo is a high-touch, full-service, flexible market research and business consulting firm that delivers custom intelligence programs tailored to your industry, culture, and one-of-a-kind challenges. Learn more (literally) at https://insightdynamo.com.

About Axiom

Axiom invented the alternative legal services industry 25 years ago and now serves more than 3,500 legal departments globally, including 75% of the Fortune 100, who place their trust in Axiom, with 95% client satisfaction. Axiom gives small, mid-market, and enterprise clients a single trusted provider who can deliver a full spectrum of legal solutions and services across more than a dozen practice areas and all major industries at rates up to 50% less than national law firms. To learn how Axiom can help your legal departments do more for less, visit axiomlaw.com.

SOURCE Axiom Global Inc.




AI Research

Santos Dumont, the LNCC supercomputer, receives a fourfold upgrade as the first step in the Brazilian Artificial Intelligence Plan


The upgrade, built by Eviden and based on leading technologies from NVIDIA, Intel, and AMD, is the first step toward transforming Santos Dumont into one of the largest supercomputers in the world

Brazil – July 9, 2025

Built by Eviden (Atos Group), a technology leader in sustainable advanced computing and AI infrastructures, and integrating enterprise technology from NVIDIA, a pioneer in accelerated computing and artificial intelligence, the upgrade of the supercomputer is part of the Federal Government’s first investment step toward the Brazilian Artificial Intelligence Plan. The Brazilian Artificial Intelligence Plan (PBIA) 2024-2028, launched during the 5th National Conference on Science, Technology and Innovation, has a planned investment of R$23 billion over four years to transform Brazil into a world reference in innovation and efficiency in the use of AI.



