Leadership: Artificial Intelligence in Decision-Making
Despite the recent announcement from the Department of Defense (DoD), I posit that Artificial Intelligence (AI) cannot replace the critical human factor in leadership decision-making. The Hill recently published an article outlining the formation of a new cell, the Artificial Intelligence Rapid Capabilities Cell (AI RCC), whose name unsurprisingly reveals its purpose.1 The AI RCC is charged with improving the speed at which the military implements AI technology, with a focus on generative AI. What I found alarming was how this new office intends to utilize AI: “command and control, autonomous drones, intelligence, weapons testing, and even for enterprise management like financial systems and human resources.”
To frame my argument, it is important to define some terms and put them into context. My former boss, Lt. Gen. Stanton, routinely and with much fervor repeated, “you cannot, as a professional in this field (Cyber Corps), use the terms AI or machine learning (ML) without putting them into context.” So, what is AI? Many people conjure up ideas from the Hollywood big screen, such as robots taking over the world or the AI “Skynet” deciding that humanity is a threat and must be eradicated. In fact, AI is loosely defined as the ability of machines (computers) to perform tasks that humans do with their brains.2
There is also a subset of AI known as Artificial General Intelligence (AGI), which has been slow to develop as it seeks to provide machines with human-comparable intelligence, able to perform any intellectual task that humans can.3 Machine learning is a subset of AI that, if set up properly, helps make predictions and reduces the mistakes that arise from mere guessing.4 Generative AI is a sub-field of machine learning capable of producing content such as text, visual depictions, audio, code, and synthetic datasets.5 Since this is a military-focused article, I would be remiss not to mention CamoGPT, which incorporates data from joint and Army doctrine, lessons learned, best practices [and] Training and Doctrine Command content, among other sources.6 To understand these tools better, it must be noted that today's generative AI is made possible by large language models.
So, what is a large language model (LLM)? LLMs are a category of foundation model trained on immense amounts of data. These models can have billions of parameters, enabling them to understand and generate content for a wide range of tasks. While many are familiar with OpenAI’s GPT-3 and GPT-4, other popular LLMs include Google’s LaMDA and PaLM (the basis for Bard), Hugging Face’s BLOOM and XLM-RoBERTa, Nvidia’s NeMo, XLNet, Cohere’s models, and GLM-130B.
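To make this concrete, the following is a minimal sketch of querying an open LLM for text generation. It assumes the Hugging Face transformers library, and the small open model distilgpt2 stands in for the far larger models named above:

# Minimal sketch: text generation with an open LLM.
# Assumes the Hugging Face "transformers" package; "distilgpt2" is a
# small open model chosen purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
prompt = "Intelligence drives operations because"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])

Even this toy example shows the basic contract: the model continues a prompt using patterns learned from its training data, with no guarantee of factual accuracy.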
To further scope my position, this article focuses on two of the AI RCC’s priorities for implementing AI technology: the warfighting functions of intelligence and command and control. Army Doctrine Publication (ADP) 3-0, Operations, defines a warfighting function as “a group of tasks and systems united by a common purpose that commanders use to accomplish missions and training objectives.”7 Human factors are prevalent in every element of operational planning. From the intelligence officer assessing enemy courses of action (COAs) to the operations officer creating friendly COAs and the leader selecting the best one, the human element cannot be overlooked.
An example of how the DoD is using AI is Project Maven, an endeavor started in 2017 and transitioned to the National Geospatial-Intelligence Agency in 2022.8 Specifically, the project established the “Algorithmic Warfare Cross-Functional Team (AWCFT) to accelerate DoD’s integration of [AI]…to turn the enormous volume of data available to DoD into actionable intelligence and insights at speed.”9 The project successfully analyzed massive amounts of data collected from unmanned aerial systems (UAS). The DoD used UAS to capture video feeds of the battlefields in Iraq and Syria in the fight against the Islamic State; however, it lacked the capacity to process, exploit, and disseminate (PED) those feeds in a timely manner, rendering the data useless. The AWCFT created algorithms to review the full-motion video (FMV) in near-real time, classifying objects and alerting analysts to irregularities.
As a former intelligence officer, I heard the maxim “intelligence drives operations (and operations drives intelligence)” repeated often in professional military education and at my assigned units. ADP 2-0, Intelligence, defines the intelligence warfighting function as the related tasks and systems that facilitate understanding the enemy, terrain, weather, civil considerations, and other significant aspects of the operational environment.10 Intelligence enables command and control, facilitates initiative, and allows commanders to develop situational understanding and take decisive action amid the complex issues leaders face on today’s multidomain battlefield. While intelligence can help lift “the fog of war,” which Clausewitz aptly described as the realm of unknown factors, it is the leader who is charged with shaping the situation and making decisions to seize the initiative from the adversary.11
ADP 3-0 defines the command-and-control warfighting function as the related tasks and a system that enable commanders to synchronize and converge all elements of combat power. Its main purpose is to assist commanders in integrating the other elements of combat power (leadership, information, movement and maneuver, intelligence, fires, sustainment, and protection) to achieve objectives and accomplish missions.12 It is easy to grasp why this warfighting function is so critical: it establishes the process that drives operations across all military functions.
If intelligence enables command and control, what happens if the data that drives the intelligence, or the data that feeds all the warfighting functions, becomes corrupted? I agree with Deputy Defense Secretary Hicks that the main reason for integrating AI into military operations is straightforward: it improves decision advantage.13 However, only one year has passed since the Pentagon unveiled the Data, Analytics and Artificial Intelligence Strategy, and AI development in the United States has not advanced to the point where it should transition from improving military leaders’ decision-making to making decisions itself, especially in the warfighting functions assigned to the AI RCC charter. In my opinion, these are the most critical of the six warfighting functions, and while technology should assist military commanders, it should not supplant their decision-making. There should always be a human-in-the-loop element in these types of decisions; if not in-the-loop, then at a minimum humans-on-the-loop should be maintained wherever AI enters the decision-making process.
The reason a human must remain in the decision-making cycle is simple: AI can produce false and misleading information, and like any other technology, it can be “hacked.” No matter how good a program purportedly is, technology is riddled with security issues; hence the need for routine updates (patches, protocol changes, and the like). Recall from earlier in the article that LLMs are trained on immense data sets with billions of parameters in order to generate useful information. Not only can these data sets be biased, they can also be unreliable, incomplete, or otherwise undesirable, producing bizarre outputs called hallucinations, some of which amount to outright false information. Furthermore, humans build the software that drives these AI technologies, and humans are imperfect; they make mistakes. These mistakes create attack surfaces: opportunities for hackers to exploit flaws for their own benefit.14
While different motivations drive hackers, this article focuses on nation states whose cyber operations ultimately serve to help their country dominate and win its wars. An adversarial cyber operator could exploit programming mistakes to purposefully change the parameters an AI technology uses. Recall the great work done by Project Maven: what if an adversary replaced the parameters set by the DoD with its own? The UAS data might, for example, no longer identify structures, buildings, personnel, weapons, or equipment as intended once the AI technology had been corrupted.
Research has already shown that ML models are vulnerable to malicious inputs that produce erroneous outputs while appearing unmodified to human observers. Researchers successfully attacked a deep neural network (DNN) hosted by MetaMind and found that it misclassified 84.24% of the adversarial examples crafted with their substitute model. The same attack against models hosted by Amazon and Google yielded adversarial examples misclassified at rates of 96.19% and 88.94%, respectively. The study also showed that the approach could evade defense strategies previously found to make adversarial example crafting harder.15
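The cited study worked by training a local substitute model and attacking it, but a simpler white-box relative, the fast gradient sign method (FGSM), conveys the flavor of adversarial examples. Below is a minimal sketch in PyTorch; the toy classifier and random input are placeholders, not the models from the study:

# Minimal FGSM sketch: nudge an input in the direction that increases the
# classifier's loss, keeping the change small enough to look unmodified.
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # step that increases the loss
    return x_adv.clamp(0.0, 1.0).detach()  # stay within valid pixel range

# Hypothetical usage with a toy classifier and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
image = torch.rand(1, 1, 28, 28)  # placeholder input
label = torch.tensor([3])         # placeholder true class
adversarial = fgsm_attack(model, image, label)
print((adversarial - image).abs().max())  # perturbation never exceeds epsilon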
Although humans are imperfect beings, that imperfection is why humans remain superior to machines: they are not constrained by programming and can adapt to unforeseen changes. The same holds for our military: despite being transparent and publishing our tactics, techniques, and procedures (TTPs), our enemies have been baffled when we do not follow those TTPs on the battlefield. That is because TTPs are merely guidelines, and commanders use mission command to delegate authority to subordinate leaders, empowering them to accomplish tasks with the given resources and to determine the best course of action to meet mission requirements. U.S. history is rich with battles in which the initiative was seized by creative leaders at all echelons.
What makes a good leader? Since football terms (offense, defense) are often used to explain cyber operations, I will borrow a quote from National Football League (NFL) Hall of Fame coach Vince Lombardi: “Leaders aren’t born, they are made and they are made just like anything else, through hard work.”16 Before the NFL, Lombardi was an offensive line coach at West Point, where he likely learned the foundations of good leadership. ADP 6-22, Army Leadership and the Profession, highlights the characteristics of a good leader. Yet while one can read about leadership, leaders develop through experience, both success and failure, just as Lombardi stated. It takes effort to learn TTPs, conduct battle drills, care for your people, disagree with superiors, and even admit when you are wrong. These are the qualities leaders acquire and sharpen through experience, and they are what enable leaders to make decisions.
While AI/ML technologies will certainly continue to assist our military, there will always be a human factor that cannot be overlooked. Experience, gut feeling, and leadership are all human factors. Lastly, DoD leaders have routinely stated that the secret to the department's success, time and time again, boils down to leadership, the ingenuity of our NCO corps, and the ability of leaders at every echelon to make decisions.
Even our adversary Russia is the subject of a U.S. movie based on the true story of a Soviet military officer who prevented World War Three during the Cold War by refusing to trust radars that falsely indicated the U.S. had launched numerous ballistic missiles at his country.17 To continue our military prowess, Artificial Intelligence should never replace the critical human element in leadership decision-making. There must always be a human-in-the-loop.
References
1. Dress, Brad. “Pentagon Announces New AI Office as It Looks to Deploy Autonomous Weapons.” The Hill. December 11, 2024. https://thehill.com/policy/defense/5034805.
2. Potember, Richard. “Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD.” Federation of American Scientists. January 1, 2017. https://irp.fas.org/agency/dod/jason/ai-dod.pdf.
3. Ibid.
4. IBM Data and AI Team. “AI Vs. Machine Learning Vs. Deep Learning Vs. Neural Networks: What’s the Difference?” IBM. July 6, 2023. https://www.ibm.com/think/topics/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks.
5. “What Is Generative AI?” Innodata. December 14, 2024. https://innodata.com/what-is-generative-ai/.
6. South, Todd. “From CamoGPT to Life Skills, the Army Is Changing How It Trains Troops.” Defense News. October 16, 2024. https://www.defensenews.com/land/2024/10/16/from-camogpt-to-life-skills-the-army-is-changing-how-it-trains-troops/.
7. U.S. Department of Defense. “ADP 3-0 Operations.” Army Publishing Directorate. July 31, 2019. https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN43323-ADP_3-0-000-WEB-1.pdf.
8. Hitchens, Theresa. “Pentagon’s Flagship AI Effort, Project Maven, Moves to NGA.” Breaking Defense. April 27, 2022. https://breakingdefense.com/2022/04/pentagons-flagship-ai-effort-project-maven-moves-to-nga/.
9. U.S. Department of Defense. “Establishment of an Algorithmic Warfare Cross-Functional Team (Project Maven).” National Security Archive. The George Washington University, April 26, 2017. https://nsarchive.gwu.edu/document/18583-national-security-archive-department-defense.
10. U.S. Department of Defense. “ADP 2-0 Intelligence.” Army Publishing Directorate. July 31, 2019. https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN18009-ADP_2-0-000-WEB-2.pdf.
11. Clausewitz, Carl von. On War. Translated by Michael Howard and Peter Paret. Princeton: Princeton University Press, 1976. https://www.usmcu.edu/Portals/218/EWS%20On%20War%20Reading%20Book%201%20Ch%201%20Ch%202.pdf.
12. U.S. Department of Defense. “ADP 3-0 Operations.” Army Publishing Directorate. July 31, 2019. https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN43323-ADP_3-0-000-WEB-1.pdf.
13. Clark, Joseph. “DOD Releases AI Adoption Strategy.” Defense. U.S. Department of Defense, November 2, 2023. https://www.defense.gov/News/News-Stories/Article/Article/3578219/dod-releases-ai-adoption-strategy/.
14. Metz, Cade. “How To Fool AI Into Seeing Something That Isn’t There.” Wired. July 29, 2016. https://www.wired.com/2016/07/fool-ai-seeing-something-isnt/.
15. Papernot, Nicolas, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. “Practical Black-Box Attacks against Machine Learning.” arXiv. March 19, 2017. https://arxiv.org/pdf/1602.02697.
16. “Vince Lombardi.” The Official Website of Vince Lombardi. Accessed December 15, 2024. https://vincelombardi.com/.
17. Harvey, Ian. “The Man Who Saved the World – The Russian Who Avoided WWIII.” War History Online. October 1, 2015. https://www.warhistoryonline.com/featured/real-life-man-saved-world-russian.html.
Space technology: Lithuania’s promising space start-ups
I’m led through a series of concrete corridors at Vilnius University, Lithuania; the murals give a Soviet-era vibe, and it seems an unlikely location for a high-tech lab working on a laser communication system.
But that’s where you’ll find the headquarters of Astrolight, a six-year-old Lithuanian space-tech start-up that has just raised €2.8m ($2.3m; £2.4m) to build what it calls an “optical data highway”.
You could think of the tech as invisible internet cables, designed to link up satellites with Earth.
With 70,000 satellites expected to launch in the next five years, it’s a market with a lot of potential.
The company hopes to be part of a shift from traditional radio frequency-based communication, to faster, more secure and higher-bandwidth laser technology.
Astrolight’s space laser technology could have defence applications as well, which is timely given Russia’s current aggressive attitude towards its neighbours.
Astrolight is already part of Nato’s Diana project (Defence Innovation Accelerator for the North Atlantic), an incubator set up in 2023 to apply civilian technology to defence challenges.
In Astrolight’s case, Nato is keen to leverage its fast, hack-proof laser communications to transmit crucial intelligence in defence operations – something the Lithuanian Navy is already doing.
It approached Astrolight three years ago looking for a laser that would allow ships to communicate during radio silence.
“So we said, ‘all right – we know how to do it for space. It looks like we can do it also for terrestrial applications’,” recalls Astrolight co-founder and CEO Laurynas Maciulis, who’s based in Lithuania’s capital, Vilnius.
For the military, his company’s tech is attractive, as the laser system is difficult to intercept or jam.
It’s also about “low detectability”, Mr Maciulis adds:
“If you turn on your radio transmitter in Ukraine, you’re immediately becoming a target, because it’s easy to track. So with this technology, because the information travels in a very narrow laser beam, it’s very difficult to detect.”
Worth about £2.5bn, Lithuania’s defence budget is small compared with those of larger countries like the UK, which spends around £54bn a year.
But if you look at defence spending as a percentage of GDP, then Lithuania is spending more than many bigger countries.
Around 3% of its GDP is spent on defence, and that’s set to rise to 5.5%. By comparison, UK defence spending is worth 2.5% of GDP.
Lithuania is recognised for its strength in niche technologies like Astrolight’s lasers, and 30% of its space projects have received EU funding, compared with an EU national average of 17%.
“Space technology is rapidly becoming an increasingly integrated element of Lithuania’s broader defence and resilience strategy,” says Šarūnas Genys, head of the manufacturing sector and defence sector expert at Invest Lithuania.
Space tech can often have civilian and military uses.
Mr Genys gives the example of Lithuanian life sciences firm Delta Biosciences, which is preparing a mission to the International Space Station to test radiation-resistant medical compounds.
“While developed for spaceflight, these innovations could also support special operations forces operating in high-radiation environments,” he says.
He adds that Vilnius-based Kongsberg NanoAvionics has secured a major contract to manufacture hundreds of satellites.
“While primarily commercial, such infrastructure has inherent dual-use potential supporting encrypted communications and real-time intelligence, surveillance, and reconnaissance across NATO’s eastern flank,” says Mr Genys.
Going hand in hand with Astrolight’s laser technology is the autonomous satellite navigation system developed by fellow Lithuanian space-tech start-up Blackswan Space.
Blackswan Space’s “vision based navigation system” allows satellites to be programmed and repositioned independently of a human based at a ground control centre who, its founders say, won’t be able to keep up with the sheer volume of satellites launching in the coming years.
In a defence environment, the same technology can be used to remotely destroy an enemy satellite, as well as to train soldiers by creating battle simulations.
But the sales pitch to the Lithuanian military hasn’t necessarily been straightforward, acknowledges Tomas Malinauskas, Blackswan Space’s chief commercial officer.
He’s also concerned that government funding for the sector isn’t matching the level of innovation coming out of it.
He points out that instead of spending $300m on a US-made drone, the government could invest in a constellation of small satellites.
“Build your own capability for communication and intelligence gathering of enemy countries, rather than a drone that is going to be shot down in the first two hours of a conflict,” argues Mr Malinauskas, also based in Vilnius.
“It would be a big boost for our small space community, but as well, it would be a long-term, sustainable value-add for the future of the Lithuanian military.”
Eglė Elena Šataitė is the head of Space Hub LT, a Vilnius-based agency supporting space companies as part of Lithuania’s government-funded Innovation Agency.
“Our government is, of course, aware of the reality of where we live, and that we have to invest more in security and defence – and we have to admit that space technologies are the ones that are enabling defence technologies,” says Ms Šataitė.
The country’s Minister for Economy and Innovation, Lukas Savickas, says he understands Mr Malinauskas’ concern and is looking at government spending on developing space tech.
“Space technology is one of the highest added-value creating sectors, as it is known for its horizontality; many space-based solutions go in line with biotech, AI, new materials, optics, ICT and other fields of innovation,” says Mr Savickas.
Whatever happens with government funding, the Lithuanian appetite for innovation remains strong.
“We always have to prove to others that we belong on the global stage,” says Dominykas Milasius, co-founder of Delta Biosciences.
“And everything we do is also geopolitical… we have to build up critical value offerings, sciences and other critical technologies, to make our allies understand that it’s probably good to protect Lithuania.”
How Is AI Changing The Way Students Learn At Business School?
Artificial intelligence is the skill set that employers increasingly want from future hires. Find out how b-schools are equipping students to use AI
Business students are already seeing AI’s value. More than three-quarters of business schools have already integrated AI into their curricula—from essay writing to personal tutoring, career guidance to soft-skill development.
BusinessBecause hears from current business students about how AI is reshaping the business school learning experience.
The benefits and drawbacks of using AI for essay writing
Many business school students are gaining firsthand experience of using AI to assist their academic work. At Rotterdam School of Management, Erasmus University in the Netherlands, students are required to use AI tools when submitting essays, alongside a log of their interactions.
“I was quite surprised when we were explicitly instructed to use AI for an assignment,” said Lara Harfner, who is studying International Business Administration (IBA) at RSM. “I liked the idea. But at the same time, I wondered what we would be graded on, since it was technically the AI generating the essay.”
Lara decided to approach this task as if she were writing the essay herself. She began by prompting the AI to brainstorm around the topic, research areas using academic studies and build an outline, before asking it to write a full draft.
However, during this process Lara encountered several problems. The AI-generated sources were either non-existent or inappropriate, and the tool had to be explicitly instructed on which concepts to focus on. It tended to be too broad, touching on many ideas without thoroughly analyzing any of them.
“In the end, I felt noticeably less connected to the content,” Lara says. “It didn’t feel like I was the actual author, which made me feel less responsible for the essay, even though it was still my name on the assignment.”
Despite the result sounding more polished, Lara thought she could have produced a better essay on her own with minimal AI support. What’s more, the grades she received on the AI-related assignments were below her usual average. “To me, that shows that AI is a great support tool, but it can’t produce high-quality academic work on its own.”
AI-concerned employers who took part in the Corporate Recruiters Survey echo this finding, stating that they would rather GME graduates use AI as a strategic partner in learning and strategy than as a source for more and faster content.
How business students use AI as a personal tutor
Daniel Carvalho, a Global Online MBA student, also frequently uses AI in his academic assignments, something encouraged by his professors at Porto Business School (PBS).
However, Daniel treats AI as a personal tutor, asking it to explain complex topics in simple terms and deepen the explanation. On top of this, he uses it for brainstorming ideas, summarizing case studies, drafting presentations and exploring different points of view.
“My MBA experience has shown me how AI, when used thoughtfully, can significantly boost productivity and effectiveness,” he says.
Perhaps one of the most interesting ways Daniel uses AI is by turning course material into a personal podcast. “I convert text-based materials into audio using text-to-speech tools, and create podcast-style recaps to review content in a more conversational and engaging way. This allows me to listen to the materials on the go—in the car or at the gym.”
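As a rough sketch of that workflow, a few lines of Python can turn text notes into an audio file; this assumes the gTTS text-to-speech package, and the file names are hypothetical:

# Minimal sketch: convert course notes to an MP3 with text-to-speech.
# Assumes the gTTS package; "notes.txt" and "recap.mp3" are made-up names.
from gtts import gTTS

with open("notes.txt", encoding="utf-8") as f:
    text = f.read()

gTTS(text=text, lang="en").save("recap.mp3")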
While studying his financial management course, Daniel even built a custom GPT using course materials. Much like a personal tutor, it would ask him questions about the material, validate his understanding, and explain any questions he got wrong. “This helped reinforce my knowledge so effectively that I was able to correctly answer all multiple-choice questions in the final exam,” he explains.
Similarly, at Villanova School of Business in the US, Master of Science in Business Analytics and AI (MSBAi) students are building personalized AI bots with distinct personalities. Students embed reference materials into the bot which then shape how the bot responds to questions.
“The focus of the program is to apply these analytics and AI skills to improve business results and career outcomes,” says Nathan Coates, MSBAi faculty director at the school. “Employers are increasingly looking for knowledge and skills for leveraging GenAI within business processes. Students in our program learn how AI systems work, what their limitations are, and what they can do better than existing solutions.”
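The article does not detail how these bots are built, but a common pattern is to embed the reference materials and retrieve the passage closest to each question before answering. A minimal sketch, assuming the sentence-transformers package and two invented course snippets:

# Minimal sketch: ground a bot's answers in reference material by
# retrieving the most similar embedded snippet. Snippets are invented.
from sentence_transformers import SentenceTransformer, util

snippets = [
    "Net present value discounts future cash flows back to today.",
    "The payback period measures how quickly an investment is recouped.",
]
model = SentenceTransformer("all-MiniLM-L6-v2")
snippet_vecs = model.encode(snippets, convert_to_tensor=True)

question = "How do I compare cash flows received at different times?"
q_vec = model.encode(question, convert_to_tensor=True)
best = util.cos_sim(q_vec, snippet_vecs).argmax().item()
print("Most relevant reference:", snippets[best])

The retrieved snippet would then be supplied to the underlying model as context, which is what makes such a bot's responses reflect the embedded materials rather than generic training data.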
The common limitations of using AI for academic work
Kristiina Esop, who is studying for a doctorate in Business Administration and Management at Estonian Business School, agrees that AI in education must always be used critically and with intention. She warns that students should always be aware of AI’s limitations.
Kristiina currently uses AI tools to explore different scenarios, synthesize large volumes of information, and detect emerging debates—all of which are essential for her work both academically and professionally.
However, she cautions that AI tools are not 100% accurate. Kristiina once asked ChatGPT to map actors in circular economy governance, and it returned a neat, simplified diagram that ignored important aspects. “That felt like a red flag,” she says. “It reminded me that complexity can’t always be flattened into clean logic. If something feels too easy, too certain—that’s when it is probably time to ask better questions.”
To avoid this problem, Kristiina combines the tools with critical thinking and contextual reading, and connects the findings back to the core questions in her research. “I assess the relevance and depth of the sources carefully,” she says. “AI can widen the lens, but I still need to focus it myself.”
She believes such critical thinking when using AI is essential. “Knowing when to question AI-generated outputs, when to dig deeper, and when to disregard a suggestion entirely is what builds intellectual maturity and decision-making capacity,” she says.
This is also the view of Wharton management professor Ethan Mollick, author of Co-Intelligence: Living and Working with AI and co-director of the Generative AI Lab. He says the best way to work with [generative AI] is to treat it like a person. “So you’re in this interesting trap,” he says. “Treat it like a person and you’re 90% of the way there. At the same time, you have to remember you are dealing with a software process.”
Hult International Business School, too, expects its students to use AI in a balanced way, encouraging them to think critically about when and how to use it. For example, Rafael Martínez Quiles, a Master’s in Business Analytics student at Hult, uses AI as a second set of eyes to review his thinking.
“I develop my logic from scratch, then use AI to catch potential issues or suggest improvements,” he explains. “This controlled, feedback-oriented approach strengthens both the final product and my own learning.”
At Hult, students engage with AI to solve complex, real-world challenges as part of the curriculum. “Practical business projects at Hult showed me that AI is only powerful when used with real understanding,” says Rafael. “It doesn’t replace creativity or business acumen, it supports it.”
As vice president of Hult’s AI Society, N-AIble, Rafael has seen this mindset in action. The society’s members explore AI ethically, using it to augment their work, not automate it. “These experiences have made me even more confident and excited about applying AI in the real world,” he says.
The AI learning tools students are using to improve understanding
In other business schools, AI is being used to offer faculty a second pair of hands. Nazarbayev University Graduate School of Business has recently introduced an ‘AI Jockey’. Appearing live on a second screen next to the lecturer’s slides, this AI tool acts as a second teacher, providing real-time clarifications, offering alternate examples, challenging assumptions, and deepening explanations.
“Students gain access to instant, tailored explanations that complement the lecture, enhancing understanding and engagement,” says Dr Tom Vinaimont, assistant professor of finance, Nazarbayev University Graduate School of Business, who uses the AI jockey in his teaching.
Rather than replacing the instructor, the AI enhances the learning experience by adding an interactive, AI-driven layer to traditional teaching, transforming learning into a more dynamic, responsive experience.
“The AI Jockey model encourages students to think critically about information, question the validity of AI outputs, and build essential AI literacy. It helps students not only keep pace with technological change but also prepares them to lead in an AI-integrated world by co-creating knowledge in real time,” says Dr Vinaimont.
How AI can be used to encourage critical thinking among students
So, if you’re looking to impress potential employers, learning to work with AI while a student is a good place to start. But simply using AI tools isn’t enough. You must think critically, solve problems creatively and be aware of AI’s limitations.
Most of all, you must be adaptable. GMAC’s new AI-powered tool, Advancery, helps you find graduate business programs tailored to your career goals, with AI-readiness in mind.
After all, working with AI is a skill in itself. And in 2025, it is a valuable one.