Tamara is a thought leader in Digital Journal’s Insight Forum.
“So, the biggest long‑term danger is that, once these artificial intelligences get smarter than we are, they will take control — they’ll make us irrelevant.” — Geoffrey Hinton, Godfather of AI
Modern AI often feels like a threat, especially when the warnings come from the very people building it. Sam Altman, the salesman behind ChatGPT (not an engineer, but the face of OpenAI and a man known for winning over investors), has said with offhand certainty, as casually as ordering toast or predicting the sunrise, that entire categories of jobs will be taken over by AI, including roles in health, education, law, finance, and HR.
Some companies now won’t hire a person unless AI has first failed at the task, even though these models hallucinate, invent facts, and make critical errors. They’re replacing people with a tool we barely understand.
Even leaders in the field admit they don’t fully understand how AI works. In May 2025, Dario Amodei, CEO of Anthropic, said the quiet part out loud:
“People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned. This lack of understanding is essentially unprecedented in the history of technology.”
In short, no one is fully in control of AI. A handful of Silicon Valley technocrats have appointed themselves arbiters of its direction, and they work more or less in secret. There is no real government oversight, and development proceeds without legal guardrails. Those guardrails may not arrive for years, by which time it may be too late to rein in what has already escaped Pandora’s box.
So we asked ourselves: why not use the tools available today to build something, right now, that helps shape the discussion around how AI is used? In our case, that means the HR space.
What if AI didn’t replace people, but instead helped companies discover them?
Picture a CEO in a post-merger fog. She needs clarity, not another résumé pile. Why not introduce her to the precise leader she didn’t know she needed, using AI?
Instead of turning warm-blooded professionals into collateral damage, why not use AI to solve, thoughtfully, ethically, and practically, the problems that now exist across HR, recruitment, and employment?
An empathic role for AI
Most job platforms still rely on keyword-stuffed résumés and keyword-matching algorithms. As a result, excellent candidates are often filtered out simply for using the “wrong” terms. That isn’t just inefficient; it’s malpractice. It hurts companies and candidates alike. It’s technology poorly applied, and yet it’s the norm today.
Imagine instead a platform that isn’t keyword-driven, one that guides candidates through a process of discovery to create richer, more dimensional profiles showcasing the unique strengths, instincts, and character that shape real-world impact. It would go beyond skill sets and job titles to the deeper personal qualities that differentiate equally experienced candidates, producing a better fit between leadership candidates and any given role.
One leader, as an example, may bring calm decisiveness in chaos. Another may excel at building unity across silos. Another might be relentless at rooting out operational bloat and uncovering savings others missed.
A system that helps uncover those traits, guides candidates to articulate them clearly, and discreetly learns about each candidate in order to offer thoughtful, evolving insights would cast AI as an advocate, not a gatekeeping nemesis.
For companies, this application would reframe job descriptions around outcomes, not tasks. Instead of listing qualifications, the tool would help hiring teams articulate what they’re trying to achieve, whether that’s growth, turnaround, post-M&A integration, or cost efficiency, and then find the most suitable candidate.
Fairness by design
Bias is endemic in HR today: ageism, sexism, ableism, racism. Imagine a platform that actively discourages bias. Gender, race, age, and even profile photos are optional. Unlike most recruiting platforms, the system doesn’t reward those who include a photo, and it doesn’t penalize those who don’t know how to game a résumé.
Success then becomes about alignment. Deep expertise. Purposeful outcomes.
This design gives companies what they want: competence. And gives candidates what they want: a fair chance.
This is more than an innovative way to use current AI technology. It’s a value statement about prioritizing people.
Why now
We’re at an inflection point.
Researchers like Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean forecast in AI 2027 that superhuman AI (AGI, then superintelligence) will bring changes in the next decade more disruptive than the Industrial Revolution.
If they’re even a little right, then the decisions being made today by a small circle in Silicon Valley will affect lives everywhere.
It’s important to step into the conversation now to help shape AI’s real-world role. The more human-centred, altruistic, practical uses of AI we build and model now, the more likely these values will help shape laws, norms, and infrastructure to come.
This is a historic moment. How we use AI now will shape the future.
People-first design
Every technology revolution sparks fear, but this one is unique. It’s the first since the Industrial Revolution in which replacing people is an explicit design goal of the machines. Entire roles and careers may vanish.
But that isn’t inevitable either. It’s a choice.
AI can be built to assist, not erase. It can guide a leader to their next opportunity. It can help a CEO find a partner who unlocks transformation. It can put people out front, not overshadow them.
We invite others in talent tech and AI to take a similar stance. Let’s build tools for people. Let’s avoid displacement and instead elevate talent. Let’s embed honesty, fairness, clarity, and alignment in everything we make.
We don’t control the base models. But we do control how we use them. And how we build with them.
AI should amplify human potential, not replace it. That’s the choice I’m standing behind.