AI Research
How a Million-Dollar AI Company Grew from a Howard Student’s Drive and a Mentor’s Vision
It all started with a chance encounter on Howard University’s campus. Just after finishing his undergraduate degree at the University of Virginia in 2018, DeMarcus Edwards was spending time at one of his favorite places to unwind when he struck up a spontaneous conversation with a faculty member.
“We were just generally talking, like this super nerdy conversation about adversarial machine learning,” Edwards said. The faculty member turned out to be Danda B. Rawat, Ph.D., associate dean for research and graduate studies and highly regarded as one of the best mentors at Howard University. Within two weeks, Edwards had joined Rawat’s lab as a master’s student.
That chance encounter launched a multiyear journey of mentorship, research, and professional growth. Edwards earned his master’s degree in computer science in 2020 and, with Rawat’s continued support, advanced into the Ph.D. program, which he completed in May 2024.
Along the way, the 30-year-old deepened his expertise in adversarial machine learning and completed industry residencies at Netflix, Apple, Meta, and Google X — experiences that sharpened his technical skills and exposed him to real-world AI challenges.
Today, Edwards is the co-founder of an Atlanta-based AI security startup, DARE Labs, that secured over $1 million in contracts last year. His journey reflects Howard University’s growing influence in tech innovation and its deepening ties to Silicon Valley. Through hands-on mentorship, cutting-edge research, and strategic industry partnerships, Howard is shaping the next generation of leaders in AI, cybersecurity, and machine learning. Edwards’ path shows how mentorship and research excellence are opening new frontiers for Black leadership in tech and entrepreneurship.
“People in the Valley have a lot of respect for Howard,” Edwards said. “I’d like that to be more well-known. There are tons of great computer scientists I know who came out of Howard.”
For both Edwards and Rawat, their partnership shows what’s possible when mentorship and advanced research come together—and when two people simply connect.
A self-described military brat, Edwards has family roots in Mount Vernon, Virginia, and a Howard connection through his grandmother, who worked as a nurse at the university hospital in the 1990s. Rawat is a professor of electrical engineering and computer science in the College of Engineering and Architecture and the founder and director of the U.S. Defense Department-sponsored Center of Excellence in Artificial Intelligence and Machine Learning, where he leads federally funded research on secure and trustworthy AI. Over the past decade, he has secured more than $110 million in funding from the Department of Defense and other agencies.
Rawat hopes Edwards will be seen as an example — a signature Howard student who might inspire others to pursue entrepreneurship. Rawat began mentoring Edwards during the master’s program and later supervised his Ph.D. research, which continued exploring the same focus: adversarial machine learning and robust AI security.
Rawat was recognized as one of three outstanding faculty mentors at Howard’s 2024 Research and Leadership Awards in April. In his approach, he draws a clear distinction between supervising a student’s doctoral work and the broader, more holistic responsibilities of mentorship.
“Supervision means guiding a student through their Ph.D. research, but mentorship goes beyond that,” Rawat said. “You mentor them through other things — like how to survive in the field, how to develop professionalism, how to apply for funding or write a thesis, and even how to establish a company. All those things extend beyond typical academic supervision.”
Rawat supported Edwards as his work increasingly focused on identifying and defending against attacks that manipulate artificial intelligence systems across different domains. Throughout his doctoral studies, Edwards contributed to several high-impact projects within Rawat’s Center of Excellence in Artificial Intelligence and Machine Learning.
“Dr. Rawat’s always been the guy in my corner,” Edwards said. “Dr. Rawat was always there.”
Edwards really began to take off when Rawat started connecting him with people in his field, giving him valuable industry experience. As a doctoral student, Edwards gained hands-on training through competitive residencies and internships at leading tech companies. In 2020, he was one of the first participants in the Netflix HBCU Tech Mentorship pilot program, which led to a residency where he worked on machine learning projects focused on content personalization. He later completed internships at Apple and Meta, contributing to innovations in action recognition for iPhones and video recommendation systems for Instagram Reels.
Federal research funding, collaborations with industry partners, and support from external advisory board members of the centers led by Rawat have played a key role in building strong connections between the university, high-tech companies, and government laboratories. These efforts reflect the kind of high-impact research and engagement that helped Howard University earn its prestigious Research One (R1) Carnegie Classification this year.
These experiences exposed Edwards to real-world challenges in artificial intelligence and shaped his desire to build tools with broader impact. His work eventually brought him to Google X, where he joined a team developing an AI-assisted exoskeleton. The team was later disbanded in a round of layoffs — an experience that sharpened his resolve to launch his own company alongside his best friend Branford Rogers.
When Edwards shared his plans, Rawat didn’t hesitate to back him.
“I told him, ‘It’s a great idea — explore it.’ And now he’s done that. Congratulations to him,” Rawat said.
Edwards said that in building a company, he realized he could fill a niche.
“I wanted to bring AI into products. The DOD, DOE, NIH — they need AI expertise. That’s how my company started.”
The Atlanta-based startup, with clients in San Francisco and D.C., helps government agencies turn unstructured documents — like PDFs and reports — into knowledge graphs for search and for training new machine learning models. Edwards likened the system to an org chart, explaining that it makes information easier to access and speeds up the search process.
“We take customer data and build knowledge graphs so it’s easy to use in AI applications,” Edwards said. “Our bet is that with everyone investing in AI, the real gain will come from how well you structure things. It’s like cleaning your room so you can find your phone.”
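DARE Labs has not published its implementation, but the general idea Edwards describes — extracting entities and relationships from documents into a graph that can be queried directly — can be illustrated with a toy sketch. Everything here (the `extract_triples` pattern matcher, the `KnowledgeGraph` class, the sample text) is hypothetical; real systems use ML-based extractors and graph databases rather than a hand-rolled pattern match.

```python
# Toy illustration of the knowledge-graph idea: turn free text into
# (subject, relation, object) triples, then answer "org chart"-style
# lookups about any subject. Extraction here is a naive keyword match.
from collections import defaultdict


def extract_triples(text):
    """Toy extractor: finds 'X funds Y' / 'X oversees Y' style statements."""
    triples = []
    for sentence in text.split("."):
        words = sentence.strip().split()
        for i, word in enumerate(words):
            if word in {"funds", "oversees"} and 0 < i < len(words) - 1:
                subj = " ".join(words[:i])
                obj = " ".join(words[i + 1:])
                triples.append((subj, word, obj))
    return triples


class KnowledgeGraph:
    def __init__(self):
        # subject -> list of (relation, object) edges
        self.edges = defaultdict(list)

    def add(self, subj, rel, obj):
        self.edges[subj].append((rel, obj))

    def query(self, subj):
        """Return everything known about a subject in one lookup."""
        return self.edges.get(subj, [])


doc = "DOE funds the grid project. DOD oversees the pilot."
kg = KnowledgeGraph()
for subj, rel, obj in extract_triples(doc):
    kg.add(subj, rel, obj)
```

Once the graph is built, `kg.query("DOE")` returns every relationship involving that agency without rescanning the source documents — the "cleaning your room so you can find your phone" payoff Edwards describes.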
In its first year, Edwards’ startup brought in $1.2 million in contracts, working with major clients like the Department of Defense and Department of Energy. The company continues to grow, even amid shifting federal priorities and uncertain tech funding.
Looking back, Edwards still thinks about that first conversation with Rawat that set everything in motion.
“There’s an intellectual curiosity at Howard I haven’t found in many places,” he said. “People take the time to talk, even outside. It’s a special place. Sometimes, all you need is someone to believe in you and walk with you while you figure it out.”
AI Research
Artificial Intelligence in Cataract Surgery and Optometry at Large with Harvey Richman, OD, and Rebecca Wartman, OD
At the 2025 American Optometric Association Conference in Minneapolis, MN, Harvey Richman, OD, of Shore Family Eyecare, and Rebecca Wartman, OD, chair of the AOA Coding and Reimbursement Committee, presented their lecture on the implementation of artificial intelligence (AI) devices in cataract surgery and optometry at large.1
AI has been implemented in a variety of ophthalmology fields already, from analyzing and interpreting ocular imaging to determining the presence of diseases or disorders of the retina or macula. Recent studies have tested AI algorithms in analyzing fundus fluorescein angiography, finding the programs extremely effective at enhancing clinical efficiency.2
However, there are concerns as to the efficacy and reliability of AI programs, given their propensity for hallucination and misinterpretation. To that end, Drs. Richman and Wartman presented a study highlighting the present and future possibilities of AI in cataract surgery, extrapolating its usability to optometry as a whole.
Richman spoke to the importance of research in navigating the learning curve of AI technology. With advancements coming rapidly and implementation moving at a breakneck pace, he noted how easily an individual can fall behind on the latest developments and technologies available to them.
“The problem is that the technology is advancing much quicker than the people are able to adapt to it,” Richman told HCPLive. “There’s been research done on AI for years and years; unfortunately, the implementation just hasn’t been as effective.”
Wartman warned against the potential for AI to take too much control in a clinical setting. She cautioned that clinicians should be wary of letting algorithms make all of the treatment decisions, and should have a method of undoing those decisions.
“I think they need to be very well aware of what algorithms the AI is using to get to its interpretations and be a little cautious when the AI does all of the decision making,” Wartman said. “Make sure you know how to override that decision making.”
Richman went on to discuss the three major levels of AI: assistive technology, augmented technology, and autonomous intelligence.
“Some of those are just bringing out data, some of them bring data and make recommendations for treatment protocol, and the third one can actually make the diagnosis and treatment protocol and implement it without a physician even involved,” Richman said. “In fact, the first artificial intelligence code that was approved by CPT had to do with diabetic retina screening, and it is autonomous. There is no physician work involved in that.”
Wartman also informed HCPLive that a significant amount of surgical technology is already using artificial intelligence, mainly in the form of pattern recognition software and predictive devices.
“A lot of our equipment is already using some form of artificial intelligence, or at least algorithms to give you patterns and tell you whether it’s inside or outside the norm,” Wartman said.
References
1. Richman H, Wartman R. A.I. in Cataract Surgery. Presented at: 2025 American Optometric Association Conference; June 25-28, 2025; Minneapolis, MN.
2. Shao A, Liu X, Shen W, et al. Generative artificial intelligence for fundus fluorescein angiography interpretation and human expert evaluation. NPJ Digit Med. 2025;8(1):396. doi:10.1038/s41746-025-01759-z
AI Research
Northumbria to roll out new AI platform for staff and students
Northumbria University is to provide its students and staff with access to Claude for Education – a leading AI platform specifically tailored for higher education.
Northumbria will become only the second university in the UK, after the London School of Economics, to join leading international institutions in offering Claude for Education as a tool to its university community.
With artificial intelligence rapidly transforming many aspects of our lives, Northumbria’s students and staff will now be provided with free access to many of the tools and skills they will need to succeed in the new global AI environment.
Claude for Education is a next-generation AI assistant built by Anthropic and trained to be safe, accurate and secure. It provides universities with ethical and transparent access to AI that ensures data security and copyright compliance and acts as a 24/7 study partner for students, designed to guide learning and develop critical thinking rather than providing direct answers.
Known as a UK leader in responsible AI-based research and education, Northumbria University recently launched its Centre for Responsible AI and is leading a multi-million-pound UKRI AI Centre for Doctoral Training in Citizen-Centred Artificial Intelligence to train the next generation of leaders in AI development.
Professor Graham Wynn explained: “Today’s students are digitally native and recent data show many use AI routinely. They expect their universities to provide a modern, technology-enhanced education, providing access to AI tools along with clear guidance on the responsible use of AI.
“We know that the availability of secure and ethical AI tools is a significant consideration for our applicants and our investment in Claude for Education will position Northumbria as a forward-thinking leader in ethical AI innovation.
“Empowering students and staff, providing cutting-edge learning opportunities, driving social mobility and powering an inclusive economy are at the heart of everything we do. We know how important it is to eliminate digital poverty and provide equitable access to the most powerful AI tools, so our students and graduates are AI literate with the skills they need for the workplaces of the future.
“The introduction of Claude for Education will provide our students and staff with free universal access to cutting-edge AI technology, regardless of their financial circumstances.”
The University is now working with Anthropic to establish the technical infrastructure and training to roll out Claude for Education in autumn 2025.
AI Research
Wiley Partners with Anthropic to accelerate responsible AI integration
Wiley has announced plans for a strategic partnership with Anthropic, an artificial intelligence research and development company with an emphasis on responsible AI.
Wiley is adopting the Model Context Protocol (MCP), an open standard created by Anthropic, which aims to enable seamless integration between authoritative, peer-reviewed content and AI tools across multiple platforms. Beginning with a pilot program, and subject to definitive agreement, Wiley and Anthropic will work to ensure university partners have streamlined, enhanced access to their Wiley research content.
Another key focus of the partnership is to establish standards for how AI tools properly integrate scientific journal content into results while providing appropriate context for users, including author attribution and citations.
“The future of research lies in ensuring that high-quality, peer-reviewed content remains central to AI-powered discovery,” said Josh Jarrett, Senior Vice President of AI Growth at Wiley. “Through this partnership, Wiley is not only setting the standard for how academic publishers integrate trusted scientific content with AI platforms but is also creating a scalable solution that other institutions and publishers can adopt. By adopting MCP, we’re demonstrating our commitment to interoperability and helping to ensure authoritative, peer-reviewed research will be discoverable in an increasingly AI-driven landscape.”
The announcement coincides with Anthropic’s broader Claude for Education initiative, which highlights new partnerships and tools designed to amplify teaching, learning, administration and research in higher education.
“We’re excited to partner with Wiley to explore how AI can accelerate and enhance access to scientific research,” said Lauren Collett, who leads Higher Education partnerships at Anthropic. “This collaboration demonstrates our commitment to building AI that amplifies human thinking—enabling students to access peer-reviewed content with Claude, enhancing learning and discovery while maintaining proper citation standards and academic integrity.”