Tools & Platforms
Empowering, not replacing: A positive vision for AI in executive recruiting

Image courtesy of Terri Davis
Tamara is a thought leader in Digital Journal’s Insight Forum.
“So, the biggest long‑term danger is that, once these artificial intelligences get smarter than we are, they will take control — they’ll make us irrelevant.” — Geoffrey Hinton, Godfather of AI
Modern AI often feels like a threat, especially when the warnings come from the very people building it. Sam Altman, the salesman behind ChatGPT (not an engineer, but the face of OpenAI, known for convincing investors), has said with offhand certainty, as casually as predicting that the sun will rise, that entire categories of jobs will be taken over by AI, including roles in health, education, law, finance, and HR.
Some companies now won’t hire people unless AI fails at the given task, even though these models hallucinate, invent facts, and make critical errors. They’re replacing people with a tool we barely understand.
Even leaders in the field admit they don’t fully understand how AI works. In May 2025, Dario Amodei, CEO of Anthropic, said the quiet part out loud:
“People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned. This lack of understanding is essentially unprecedented in the history of technology.”
In short, no one is fully in control of AI. A handful of Silicon Valley technocrats have appointed themselves arbiters of its direction, and they work more or less in secret. There is no real government oversight, and development is proceeding without legal guardrails. Those guardrails may not arrive for years, by which time it may be too late to rein in what has already been let out of Pandora’s box.
So we asked ourselves: why not use the tools available today to model something, right now, that helps shape the discussion around how AI is used? In our case, that means the HR space.
What if AI didn’t replace people, but instead helped companies discover them?
Picture a CEO in a post-merger fog. She needs clarity, not another résumé pile. Why not introduce her to the precise leader she didn’t know she needed, using AI?
Instead of turning warm-blooded professionals into collateral damage, why not use AI to thoughtfully, ethically, and practically solve the problems that now exist across HR, recruitment, and employment?
An empathic role for AI
Most job platforms still rely on keyword-stuffed résumés and keyword-matching algorithms. As a result, excellent candidates often get filtered out simply for using the “wrong” terms. That’s not just inefficient; it’s malpractice. It hurts companies and candidates alike. It’s an example of technology poorly applied, yet it is the norm today.
Imagine instead a platform that isn’t keyword driven, one that guides candidates through discovery to create richer, more dimensional profiles showcasing the unique strengths, instincts, and character that shape real-world impact. This would go beyond skill sets and job titles to the deeper personal qualities that differentiate equally experienced candidates, resulting in a better fit between leadership candidates and any given role.
One leader, as an example, may bring calm decisiveness in chaos. Another may excel at building unity across silos. Another might be relentless at rooting out operational bloat and uncovering savings others missed.
A system that helps uncover those traits, guides candidates to articulate them clearly, and discreetly learns about each candidate to offer thoughtful, evolving insights would cast AI as an advocate, not a gatekeeping nemesis.
For companies, this application would reframe job descriptions around outcomes, not tasks. Instead of listing qualifications, the tool helps hiring teams articulate what they’re trying to achieve, whether that’s growth, turnaround, post-M&A integration, or cost efficiency, and then finds the most suitable candidates.
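To make the contrast concrete, here is a minimal, purely illustrative sketch (hypothetical trait names and scores, not any vendor’s actual system) of the difference between literal keyword matching and scoring candidates on how well they align with the outcomes a role is meant to deliver:

```python
# Minimal sketch with made-up data: naive keyword matching vs. outcome/trait alignment.
from math import sqrt

def keyword_score(resume_text: str, job_text: str) -> float:
    """Fraction of job keywords that literally appear in the resume text."""
    job_terms = set(job_text.lower().split())
    resume_terms = set(resume_text.lower().split())
    return len(job_terms & resume_terms) / max(len(job_terms), 1)

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse trait/outcome vectors."""
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical profile built through guided discovery, not keyword stuffing.
candidate = {"calm_in_crisis": 0.9, "cross_silo_unity": 0.4, "cost_discipline": 0.7}
# A role framed around outcomes (post-M&A integration, cost efficiency).
role = {"post_merger_integration": 0.8, "cost_discipline": 0.9, "calm_in_crisis": 0.6}

print(keyword_score("led ERP migration and vendor consolidation",
                    "post-merger integration cost efficiency leader"))  # ~0.0: no shared terms
print(cosine(candidate, role))  # ~0.7: rewards alignment on shared outcome traits
```

The point of the sketch is only that a candidate with no overlapping vocabulary can still score highly once the comparison happens at the level of outcomes and traits rather than literal terms.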
Fairness by design
Bias is endemic in HR today: ageism, sexism, ableism, racism. Imagine a platform that actively discourages bias. Gender, race, age, and even profile photos are optional. Unlike most recruiting platforms, the system doesn’t reward those who include a photo. It doesn’t penalize those who don’t know how to game a résumé.
Success then becomes about alignment. Deep expertise. Purposeful outcomes.
This design gives companies what they want: competence. And gives candidates what they want: a fair chance.
This is more than an innovative way to use current AI technology. It’s a value statement about prioritizing people.
Why now
We’re at an inflection point.
Researchers like Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean forecast in AI 2027 that superhuman AI (AGI, then superintelligence) will bring changes in the next decade more disruptive than the Industrial Revolution.
If they’re even a little right, then the decisions being made today by a small circle in Silicon Valley will affect lives everywhere.
It’s important to step into the conversation now to help shape AI’s real-world role. The more human-centred, altruistic, practical uses of AI we build and model now, the more likely these values will help shape laws, norms, and infrastructure to come.
This is a historic moment. How we use AI now will shape the future.
People-first design
Every technology revolution sparks fear. But this one is unique: it’s the first since the Industrial Revolution in which machines are being designed with the explicit goal of replacing people. Entire roles and careers may vanish.
But that isn’t inevitable either. It’s a choice.
AI can be built to assist, not erase. It can guide a leader to their next opportunity. It can help a CEO find a partner who unlocks transformation. It can put people out front, not overshadow them.
We invite others in talent tech and AI to take a similar stance. Let’s build tools for people. Let’s avoid displacement and instead elevate talent. Let’s embed honesty, fairness, clarity, and alignment in everything we make.
We don’t control the base models. But we do control how we use them. And how we build with them.
AI should amplify human potential, not replace it. That’s the choice I’m standing behind.
Tools & Platforms
Scale AI is suing a former employee and rival Mercor, alleging they tried to steal its biggest customers

Scale AI, which helps tech companies prepare data to train their AI models, filed a lawsuit against one of its former sales employees and its rival Mercor on Wednesday. The suit claims the employee, who was hired by Mercor, “stole more than 100 confidential documents concerning Scale’s customer strategies and other proprietary information,” according to a copy seen by TechCrunch.
Scale is suing Mercor for misappropriation of trade secrets and is suing the former employee, Eugene Ling, for breach of contract. The suit also claims the employee was trying to pitch Mercor to one of Scale’s largest customers before he officially left his former job. The suit calls this company “Customer A.”
Mercor co-founder Surya Midha denies that his company used any data from Scale, although he admits that Ling may have been in possession of some.
“While Mercor has hired many people who departed Scale, we have no interest in any of Scale’s trade secrets and in fact are intentionally running our business in a different way. Eugene informed us that he had old documents in a personal Google Drive, which we have never accessed and are now investigating,” Midha told TechCrunch in an emailed statement.
“We reached out to Scale six days ago offering to have Eugene destroy the files or reach a different resolution, and we are now awaiting their response,” Midha said.
Scale alleges that these documents contained the specific data that would allow Mercor to serve Customer A, as well as several other of Scale’s most important clients.
Scale wanted Mercor to give it a full list of the files in the drive, and to prevent Ling from working with Customer A. It alleges in the suit that Mercor refused. Ling did not immediately respond to TechCrunch’s request for comment.
There are scant clues in the suit about the identity of Customer A. The suit does say that if Scale’s rival did win this customer away, it would be a contract “worth millions of dollars to Mercor.”
Whatever the details of this suit, it does show one thing: Scale is clearly concerned enough about the threat of Mercor to pursue legal action. As TechCrunch previously reported, even with Meta’s multibillion-dollar investment into Scale, TBD Labs — the core unit within Meta tasked with building AI superintelligence — is still using Mercor and other LLM data training service providers.
Mercor is rising in the LLM training arena because it is known for hiring subject-matter specialists, often PhDs, to produce LLM training data in their areas of expertise.
In June, Scale announced that Meta was investing $14.3 billion for a 49% stake in Scale and was hiring away its founder. Shortly after that, several of Scale AI’s largest data customers, which compete with Meta’s AI efforts, reportedly cut ties with it.
Tools & Platforms
CoreWeave Merges AI Cloud with Self-Learning Tech for Smarter Systems

CoreWeave, Inc. (NASDAQ: CRWV) has announced the acquisition of OpenPipe Inc., a leader in reinforcement learning (RL) platforms for training AI agents, marking a strategic move to strengthen its AI cloud capabilities. OpenPipe’s technology is designed to enable developers to train agents using advanced machine learning techniques, allowing the agents to learn from experience and improve over time in accuracy, performance, and reliability. The platform includes Agent Reinforcement Trainer (ART), one of the most widely used open-source RL toolkits for training agents [5].
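For readers unfamiliar with the technique, the sketch below shows the general idea behind reinforcement-learning-trained agents: an agent improves its choices from reward feedback over repeated episodes. It is a toy epsilon-greedy example with made-up actions and success rates, and it does not use or represent OpenPipe’s ART API.

```python
# Toy illustration of learning from experience via reward feedback (NOT OpenPipe's ART API):
# an epsilon-greedy agent learns which hypothetical action succeeds most often.
import random

ACTIONS = ["search_docs", "call_api", "ask_user"]                      # hypothetical agent actions
TRUE_SUCCESS = {"search_docs": 0.3, "call_api": 0.8, "ask_user": 0.5}  # unknown to the agent

value = {a: 0.0 for a in ACTIONS}   # learned estimate of each action's expected reward
counts = {a: 0 for a in ACTIONS}
epsilon = 0.1                       # exploration rate

for episode in range(2000):
    # Explore occasionally, otherwise exploit the best-known action.
    action = random.choice(ACTIONS) if random.random() < epsilon \
             else max(value, key=value.get)
    reward = 1.0 if random.random() < TRUE_SUCCESS[action] else 0.0
    counts[action] += 1
    # Incremental average: the agent refines its estimate one episode at a time.
    value[action] += (reward - value[action]) / counts[action]

print(value)  # the estimate for "call_api" should approach roughly 0.8
```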
Brian Venturo, Co-founder and Chief Strategy Officer at CoreWeave, highlighted the importance of reinforcement learning in enhancing model performance for agentic and reasoning tasks. The acquisition integrates OpenPipe’s self-learning tools with CoreWeave’s high-performance AI cloud, creating a more comprehensive platform for developers to build scalable intelligent systems. Kyle Corbitt, Co-founder and CEO of OpenPipe, added that the partnership with CoreWeave enables the expansion of their vision to accelerate the development of reliable, high-performing, and cost-effective AI systems [5].
The acquisition builds upon CoreWeave’s recent acquisition of Weights & Biases, a move that aligns with the company’s strategy to deepen its vertical integration across its technology stack. By incorporating new reinforcement learning and fine-tuning capabilities, CoreWeave is offering customers greater flexibility to train, adapt, and optimize their AI models. This expansion also supports AI labs and enterprises in solving complex problems autonomously, as reported by industry experts in the field [5].
CoreWeave’s AI cloud platform is purpose-built for the scale, performance, and expertise required to power AI innovation. The company operates a growing network of data centers across the U.S. and Europe, and it has been recognized as one of the TIME100 most influential companies and featured on Forbes Cloud 100 in 2024. The acquisition of OpenPipe further positions CoreWeave to meet the growing demand for AI infrastructure, particularly in the realm of autonomous decision-making and learning systems [5].
In addition to this strategic acquisition, CoreWeave recently participated in the Goldman Sachs Communacopia + Technology Conference, where CEO Michael Intrator and Chief Development Officer Brannin McBee provided insights into the company’s vision and roadmap for the future [6]. This engagement with major financial institutions underscores CoreWeave’s ongoing efforts to enhance transparency and communication with its investors and the broader market.
The market has seen mixed reactions to CoreWeave’s recent developments. While the stock initially saw gains following NVIDIA’s strong AI GPU earnings, recent insider selling by top executives and major shareholders like Magnetar Financial triggered a decline in its share price [2]. Despite these short-term fluctuations, CoreWeave remains a key player in the AI infrastructure sector, with strategic partnerships such as the recent expansion with Applied Digital (APLD) further solidifying its position in the market [4].
Source:
[1] CoreWeave, Inc. (CRWV) Is One Of The Biggest Beneficiaries Of NVIDIA’s Booming AI GPU Demand, Says Jim Cramer (https://finance.yahoo.com/news/coreweave-inc-crwv-one-biggest-192935144.html)
[2] CoreWeave’s Stock Slides as Insider Selling Sparks Investor Concerns (https://www.marketwatch.com/story/coreweaves-stock-slides-as-insider-selling-sparks-investor-concerns-fef032fe)
[3] How Does Reinforcement Learning Power Agentic AI Systems (https://www.getmonetizely.com/articles/how-does-reinforcement-learning-power-agentic-ai-systems)
[4] CoreWeave Just Gave This Data Center Stock a Big Boost (https://finance.yahoo.com/news/coreweave-just-gave-data-center-162455061.html)
[5] CoreWeave to Acquire OpenPipe, Leader in Reinforcement Learning (https://www.businesswire.com/news/home/20250903667712/en/CoreWeave-to-Acquire-OpenPipe-Leader-in-Reinforcement-Learning)
[6] CoreWeave to Participate in the Goldman Sachs Communacopia + Technology Conference (https://investors.coreweave.com/news/news-details/2025/CoreWeave-to-Participate-in-the-Goldman-Sachs-Communacopia–Technology-Conference/default.aspx)
Tools & Platforms
New AI Vaccine Research Program Launched With Ellison Institute Of Technology

The University of Oxford has recently announced an ambitious new project aimed at revolutionizing vaccine development. This initiative is supported by significant research funding of £118 million, which the university has secured through its strategic partnership with the Ellison Institute of Technology (EIT). The goal of this project is to develop innovative solutions to some of the most challenging infectious diseases that continue to threaten public health worldwide.
At the core of this new program is the Oxford Vaccine Group, a highly experienced and respected team within the university’s Department of Paediatrics. The project, named CoI-AI (Correlates of Immunity-Artificial Intelligence), aims to leverage modern technology and scientific expertise to enhance understanding of how the human immune system responds to various pathogens.
This initiative seeks to combine Oxford’s extensive knowledge in human challenge studies, immune science, and vaccine development with EIT’s cutting-edge artificial intelligence (AI) technology. This collaboration promises to pave the way for more effective and targeted approaches to preventing and controlling infectious diseases.
The research will focus on understanding how immune defenses react to some of the most problematic germs that cause severe infections and contribute to the rise of antibiotic resistance. These include bacteria such as Streptococcus pneumoniae, Staphylococcus aureus, and Escherichia coli, which are responsible for widespread illnesses and have become resistant to traditional vaccine strategies.
To achieve this, scientists will use human challenge models, where volunteers are carefully and safely exposed to bacteria within controlled environments. This approach allows researchers to observe immune responses directly and in detail. By integrating modern immunology techniques with advanced AI tools, the team hopes to identify the immune responses that are most predictive of protection, thereby informing the development of more effective vaccines.
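As a rough illustration of what “identifying the immune responses most predictive of protection” can look like computationally, here is a short sketch on synthetic data with made-up marker names; it does not reflect CoI-AI’s actual models, data, or methods.

```python
# Illustrative only: rank hypothetical immune markers by how predictive they are of
# protection after a controlled challenge, using synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
markers = ["IgG_titre", "opsonophagocytic_activity", "Th17_response", "IgA_mucosal"]

n = 300
X = rng.normal(size=(n, len(markers)))
# Assume (for illustration) that protection depends mostly on two of the markers.
logit = 2.0 * X[:, 0] + 1.2 * X[:, 3] + rng.normal(scale=0.5, size=n)
y = (logit > 0).astype(int)  # 1 = protected, 0 = infected

model = LogisticRegression().fit(X, y)
for name, coef in sorted(zip(markers, model.coef_[0]), key=lambda p: -abs(p[1])):
    print(f"{name}: {coef:+.2f}")  # larger |coefficient| = more predictive of protection
```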
In December 2024, Oxford University and EIT announced a strategic alliance aimed at fostering long-term collaboration. This partnership aims not only to develop innovative solutions for pressing health challenges but also to cultivate the next generation of scientific leaders. EIT’s unique approach combines advanced research capabilities with a strong commercial focus, striving to generate sustainable and ethically grounded scientific breakthroughs. The alliance brings together diverse talents and expertise across multiple fields, including generative biology, clinical medicine, plant science, sustainable energy, and public policy.
Supporting these endeavors is a robust computing infrastructure facilitated by Oracle, along with a world-class artificial intelligence team. Additionally, the partnership includes a Scholars program dedicated to nurturing the next wave of the world’s top scientists. Through these combined efforts, Oxford and EIT are working together to address some of the most enduring and complex health challenges of our time, all with the goal of improving global health outcomes.
KEY QUOTES:
“This programme addresses one of the most urgent problems in infectious disease by helping us to understand immunity more deeply to develop innovative vaccines against deadly diseases that have so far evaded our attempts at prevention. By combining advanced immunology with artificial intelligence, and using human challenge models to study diseases, CoI-AI will provide the tools we need to tackle serious infections and reduce the growing threat of antibiotic resistance. This is a new frontier in vaccine science.”
Professor Andrew Pollard, Director of the Oxford Vaccine Group
“This programme will give us completely new tools to study how vaccines work at both a cellular and system-wide level, by studying infections in real time, in people, and using smart immunology tools and data to find the answers. This will open up whole new avenues to vaccine design as we improve our understanding of infection and immunity.”
Professor Daniela Ferreira, Deputy Director of the Oxford Vaccine Group
“Researchers in the CoI-AI programme will use Artificial Intelligence models developed at EIT to identify and better understand the immune responses that predict protection. This vaccine development programme combines Oxford’s leadership in immunology and human challenge models with cutting-edge AI, laying the groundwork for a new era of vaccine discovery – one that is faster, smarter, and better able to respond to infectious disease outbreaks throughout the world.”
Larry Ellison, Chairman of the Ellison Institute of Technology
“This is a major step forward in our strategic alliance with the Ellison Institute. Together, we’re combining Oxford’s strengths in vaccine science with EIT’s bold vision to tackle some of the toughest problems in global health. This is about drawing more talent and capacity to the Oxford ecosystem to turn scientific challenges into real solutions for the world.”
Professor Irene Tracey, Vice-Chancellor of the University of Oxford