Faster, Smarter, Cheaper: AI Is Reinventing Market Research
For decades, companies have poured billions of dollars into market research to better understand their customers, only to be constrained by slow surveys, biased panels, and lagging insights. Despite the $140 billion spent each year on market research, software captures little more than a rounding error of that total. Case in point: traditional, human-driven research and consulting firms Gartner and McKinsey are each valued at around $40 billion, while software platforms Qualtrics and Medallia are worth $12.5 billion and $6.4 billion, respectively. And that only accounts for external spend.
With AI, we’re seeing yet another case of a market ready to shift labor spend into software. Early AI players are already leveraging speech-to-text and text-to-speech models to build AI-native survey platforms that conduct autonomous video interviews with people, then use LLMs to analyze results and create presentations. Those early movers are growing quickly, signing large deals, and co-opting budget that traditionally went to market research and consulting firms.
In doing so, these AI-enabled startups are reshaping how organizations derive insights from customers, make decisions, and execute at scale. However, most of these startups still rely on panel providers to source humans for surveys.
Now we’re seeing a crop of AI research companies replace the expensive human survey and analysis process entirely. Instead of recruiting a panel of people and asking them what they think, these companies can go as far as simulating entire societies of generative AI agents that can be queried, observed, and experimented with, modeling real human behavior. This turns market research from a lagging, one-time input into a continuous, dynamic advantage.
Where market research is today
The field of customer research has slowly incorporated software over time. In the 1990s, research was primarily conducted manually, with pen and paper data collection and analysis. Qualtrics and Medallia, among others, introduced online surveys in the early 2000s, followed by real-time analytics and mobile-based survey collection. Both companies used surveys to build deeper experience management tools around customers and employees. In parallel, the rise of bottom-up, self-serve tools like SurveyMonkey enabled individual teams to run quick, lightweight surveys — broadening access to research, but often resulting in fragmented efforts, inconsistent methodologies, and limited organizational visibility. These tools lacked the governance, scale, and integration required to support enterprise-wide research operations.
Consulting firms, McKinsey included, built entire divisions dedicated to deploying software-based research tools for customer segmentation and consumer insights at scale. These engagements often took months, cost millions, and relied on expensive, biased panels. The research process itself often stretches over weeks: recruiting a panel of participants, running the survey, analyzing the results, and producing a report. The findings are then usually delivered to the buyer in packaged form, with little opportunity to revisit the process or dig deeper.
Most enterprises still rely on quarterly research to guide major launches, but that doesn’t provide the ongoing insights needed for fast, everyday decisions. Because traditional research is expensive, small bets and early ideas often go untested. Even companies eager to modernize find themselves stuck with outdated tools and slow processes.
In the late 2010s, a new wave of UX research tools emerged, built directly for product teams rather than consultants or survey operations. Instead of outsourcing user research, companies began embedding it into their development loops. Through unmoderated usability tests, in-product surveys, and prototype feedback, tools like Sprig, Maze, and Dovetail enabled faster, customer-informed decisions. These tools demonstrated just how important integrated research is to modern businesses. But while they provided real-time value for software-driven teams, they were less oriented toward non-software companies and were primarily optimized for team-level rather than cross-functional use. AI-native research companies build on these advances: insights are immediate and applicable across teams, products, and industries, whether software-native or not.
AI + market research: a natural fit
AI has already increased the pace and decreased the cost of surveying. It makes it easy to generate surveys quickly and adapt questions in real time based on how people respond. Analysis that once took weeks now happens in hours. Insight libraries learn over time, spotting patterns across projects and extrapolating from early signals. This shift doesn’t just make research accessible to smaller companies; it also expands the set of decisions that can be informed by data, from early product concepts to nuanced positioning questions that were previously too expensive to investigate. AI-powered research tools are now used far more broadly, across a company’s marketing, product, sales, and customer success teams, as well as by leadership.
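To make the adaptive-questioning idea concrete, here is a minimal Python sketch of how a follow-up question could be generated from the interview so far. The call_llm helper is a hypothetical placeholder for whichever model API a given platform actually uses, not a real library call.

```python
# Hypothetical sketch of real-time adaptive surveying: after each answer,
# the model proposes the next question based on the research goal and the
# conversation so far. `call_llm` stands in for any chat-style LLM API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model provider here")

def next_question(research_goal: str, history: list[tuple[str, str]]) -> str:
    transcript = "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
    prompt = (
        "You are moderating a customer research interview.\n"
        f"Research goal: {research_goal}\n"
        f"Interview so far:\n{transcript}\n"
        "Write the single best follow-up question to ask next. "
        "Probe anything vague or surprising in the last answer."
    )
    return call_llm(prompt)

# Example: the respondent's last answer steers the next question.
history = [("How do you plan trips today?", "Mostly TikTok, honestly.")]
# next_question("Understand how Gen Z researches travel", history)
```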
These improvements matter. But even AI-powered surveys are still limited by the variability and accessibility of human panels and often depend on third-party recruiting to access respondents, limiting pricing control and differentiation.
Generative agents: Simulated societies move beyond human panels
Enter generative agents, a concept originally introduced in the landmark paper Generative Agents: Interactive Simulacra of Human Behavior. The researchers demonstrated how simulated characters powered by large language models can exhibit increasingly human-like behavior, driven by memory, reflection, and planning. While the idea initially drew interest for its potential in building lifelike, simulated societies, its implications go beyond academic curiosity. One of its most promising commercial applications? Market research.
If this sounds abstract, here’s an example of how it might play out: Ahead of a new skincare launch in France, a beauty company could simulate 10,000 agents modeled on Gen Z and millennial French beauty consumers. Each agent would be seeded with data from customer reviews, CRM histories, social listening insights (e.g., TikTok trends around “skincare routines”), and past purchase behavior. These agents could interact with each other, view simulated influencer content, shop virtual store shelves, and post product opinions in AI-generated social feeds, evolving over time as they absorb new information and reflect on past experiences.
What makes these simulations possible isn’t just off-the-shelf LLMs, but a growing stack of sophisticated techniques. Agents are now anchored in persistent memory architectures, often grounded in rich qualitative data like interviews or behavioral histories, enabling them to evolve over time through accumulated experiences and contextual feedback. In-context prompting supplies them with behavioral histories, environmental cues, and prior decisions, creating more nuanced, lifelike responses. Under the hood, methods like Retrieval-Augmented Generation (RAG) and agent chaining support complex, multi-step decision-making, resulting in simulations that mirror real-world customer journeys. Fine-tuned, multimodal models — trained across text, visuals, and interactions on domain-specific tasks — push agent behavior beyond the limits of text.
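To make that stack less abstract, here is a stripped-down sketch of a single generative agent in Python, loosely following the memory, retrieval, and reflection loop described in the Generative Agents paper. The scoring is deliberately naive (keyword overlap plus a recency bonus), and call_llm is again a hypothetical placeholder rather than any particular vendor’s API.

```python
# Minimal generative-agent sketch: persistent memory, naive retrieval,
# in-context prompting, and periodic reflection. Illustrative only.
import time

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model provider here")

class Agent:
    def __init__(self, persona: str):
        self.persona = persona                      # e.g. seeded from reviews or CRM history
        self.memory: list[tuple[float, str]] = []   # (timestamp, observation)

    def observe(self, observation: str) -> None:
        self.memory.append((time.time(), observation))

    def retrieve(self, query: str, k: int = 5) -> list[str]:
        # Naive relevance score: keyword overlap plus a tiny recency bonus.
        words = set(query.lower().split())
        def score(item: tuple[float, str]) -> float:
            ts, text = item
            return len(words & set(text.lower().split())) + ts * 1e-9
        ranked = sorted(self.memory, key=score, reverse=True)
        return [text for _, text in ranked[:k]]

    def act(self, situation: str) -> str:
        context = "\n".join(self.retrieve(situation))
        prompt = (
            f"Persona: {self.persona}\n"
            f"Relevant memories:\n{context}\n"
            f"Situation: {situation}\n"
            "Respond in character, in one or two sentences."
        )
        response = call_llm(prompt)
        self.observe(f"I said: {response}")  # new experiences accumulate in memory
        return response

    def reflect(self) -> None:
        # Periodically compress raw observations into higher-level takeaways.
        recent = "\n".join(text for _, text in self.memory[-20:])
        self.observe(call_llm("Summarize the key insights from:\n" + recent))
```

In a full simulation, thousands of such agents would share an environment and message one another, with RAG, agent chaining, and fine-tuned multimodal models replacing the naive retrieval and single-prompt action step sketched here.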
Early platforms are already leveraging these approaches. AI-powered simulation startups such as Simile and Aaru (which just announced a partnership with Accenture) hint at what’s coming: dynamic, always-on populations that act like real customers, ready to be queried, observed, and experimented with.
Agentic simulation doesn’t just accelerate workflows that once took weeks; it fundamentally reinvents how research and decision-making happen. It also overcomes many traditional research limitations by creating a research tool that can live inside a workflow. This leap is not just in efficiency. It’s in fidelity.
The playbook: fast distribution, deep integration
If history is any guide, the companies that dominate this AI wave won’t just have the best technology, they’ll master distribution and adoption. Qualtrics and Medallia, for example, won early by prioritizing adoption, familiarity, and loyalty, embedding themselves deeply into universities and key industries.
Accuracy obviously matters — particularly as teams measure AI tools against traditional, human-led research. But in this category, there are no established benchmarks or evaluation frameworks, which makes it difficult to objectively assess how “good” a given model is. Companies experimenting with agent simulation technology often have to define their own metrics.
Crucially, success doesn’t mean achieving 100% accuracy. It’s about hitting a threshold that’s “good enough” for your use case. Many CMOs we’ve spoken with are comfortable with outputs that are at least 70% as accurate as those from traditional consulting firms, especially since the data is cheaper, faster, and updated in real time. In the absence of standardized expectations, this creates a window for startups to move quickly, validate through real-world usage, and become embedded in workflows early. That said, startups must continue to refine the product: benchmarks will emerge, and the more you charge, the more customers will demand.
At this stage, the risk lies less in imperfect outputs than in over-engineering for theoretical accuracy. Startups that prioritize speed, integration, and distribution can define the emerging standard. Those that delay for perfect fidelity may find themselves stuck in endless pilots while others move to production.
AI-native research companies are fundamentally better positioned than traditional firms to redefine expectations for market research. While legacy market research firms may have deep panel data, their business models and workflows are not built for automation. In contrast, AI-native players have already developed purpose-built tooling for AI-moderated research and are structurally incentivized to push the frontier, not protect the past. They’re primed to own both the data layer and the simulation layer. The widely cited Generative Agent Simulations of 1,000 People paper illustrates this convergence: its coauthors relied on real interviews conducted by AI to seed agentic profiles — the same type of pipeline AI-native companies are already running at scale.
To drive impact, insights must be applicable beyond UX and marketing teams to product, strategy, and operations. The challenge: offering just enough service support without recreating the heavy overhead of traditional agencies.
The market research reckoning
The long era of lagging research is ending. AI-driven market research is transforming how we understand customers, whether through simulation, analysis, or insight generation. The companies that adopt AI-powered research tools early will gain faster insights, make better decisions, and unlock a new competitive edge. As shipping products becomes faster and easier, the real advantage lies in knowing what to build.
Building in this space?
Reach out to Zach Cohen (zcohen@a16z.com) and Seema Amble (samble@a16z.com).
Space technology: Lithuania’s promising space start-ups
I’m led through a series of concrete corridors at Vilnius University, Lithuania; the murals give a Soviet-era vibe, and it seems an unlikely location for a high-tech lab working on a laser communication system.
But that’s where you’ll find the headquarters of Astrolight, a six-year-old Lithuanian space-tech start-up that has just raised €2.8m ($2.3m; £2.4m) to build what it calls an “optical data highway”.
You could think of the tech as invisible internet cables, designed to link up satellites with Earth.
With 70,000 satellites expected to launch in the next five years, it’s a market with a lot of potential.
The company hopes to be part of a shift from traditional radio frequency-based communication, to faster, more secure and higher-bandwidth laser technology.
Astrolight’s space laser technology could have defence applications as well, which is timely given Russia’s current aggressive attitude towards its neighbours.
Astrolight is already part of Nato’s Diana project (Defence Innovation Accelerator for the North Atlantic), an incubator set up in 2023 to apply civilian technology to defence challenges.
In Astrolight’s case, Nato is keen to leverage its fast, hack-proof laser communications to transmit crucial intelligence in defence operations – something the Lithuanian Navy is already doing.
It approached Astrolight three years ago looking for a laser that would allow ships to communicate during radio silence.
“So we said, ‘all right – we know how to do it for space. It looks like we can do it also for terrestrial applications’,” recalls Astrolight co-founder and CEO Laurynas Maciulis, who’s based in Lithuania’s capital, Vilnius.
For the military his company’s tech is attractive, as the laser system is difficult to intercept or jam.
It’s also about “low detectability”, Mr Maciulis adds:
“If you turn on your radio transmitter in Ukraine, you’re immediately becoming a target, because it’s easy to track. So with this technology, because the information travels in a very narrow laser beam, it’s very difficult to detect.”
Worth about £2.5bn, Lithuania’s defence budget is small when you compare it to larger countries like the UK, which spends around £54bn a year.
But if you look at defence spending as a percentage of GDP, then Lithuania is spending more than many bigger countries.
Around 3% of its GDP is spent on defence, and that’s set to rise to 5.5%. By comparison, UK defence spending is worth 2.5% of GDP.
Lithuania is recognised for its strength in niche technologies like Astrolight’s lasers, and 30% of its space projects have received EU funding, compared with an EU average of 17%.
“Space technology is rapidly becoming an increasingly integrated element of Lithuania’s broader defence and resilience strategy,” says Invest Lithuania’s Šarūnas Genys, the agency’s head of the manufacturing sector and a defence sector expert.
Space tech can often have civilian and military uses.
Mr Genys gives the example of Lithuanian life sciences firm Delta Biosciences, which is preparing a mission to the International Space Station to test radiation-resistant medical compounds.
“While developed for spaceflight, these innovations could also support special operations forces operating in high-radiation environments,” he says.
He adds that Vilnius-based Kongsberg NanoAvionics has secured a major contract to manufacture hundreds of satellites.
“While primarily commercial, such infrastructure has inherent dual-use potential supporting encrypted communications and real-time intelligence, surveillance, and reconnaissance across NATO’s eastern flank,” says Mr Genys.
Going hand in hand with Astrolight’s laser technology is the autonomous satellite navigation system fellow Lithuanian space-tech start-up Blackswan Space has developed.
Blackswan Space’s “vision based navigation system” allows satellites to be programmed and repositioned independently of a human based at a ground control centre who, its founders say, won’t be able to keep up with the sheer volume of satellites launching in the coming years.
In a defence environment, the same technology can be used to remotely destroy an enemy satellite, as well as to train soldiers by creating battle simulations.
But the sales pitch to the Lithuanian military hasn’t necessarily been straightforward, acknowledges Tomas Malinauskas, Blackswan Space’s chief commercial officer.
He’s also concerned that government funding for the sector isn’t matching the level of innovation coming out of it.
He points out that instead of spending $300m on a US-made drone, the government could invest in a constellation of small satellites.
“Build your own capability for communication and intelligence gathering of enemy countries, rather than a drone that is going to be shot down in the first two hours of a conflict,” argues Mr Malinauskas, also based in Vilnius.
“It would be a big boost for our small space community, but as well, it would be a long-term, sustainable value-add for the future of the Lithuanian military.”
Eglė Elena Šataitė is the head of Space Hub LT, a Vilnius-based agency supporting space companies as part of Lithuania’s government-funded Innovation Agency.
“Our government is, of course, aware of the reality of where we live, and that we have to invest more in security and defence – and we have to admit that space technologies are the ones that are enabling defence technologies,” says Ms Šataitė.
The country’s Minister for Economy and Innovation, Lukas Savickas, says he understands Mr Malinauskas’ concern and is looking at government spending on developing space tech.
“Space technology is one of the highest added-value creating sectors, as it is known for its horizontality; many space-based solutions go in line with biotech, AI, new materials, optics, ICT and other fields of innovation,” says Mr Savickas.
Whatever happens with government funding, the Lithuanian appetite for innovation remains strong.
“We always have to prove to others that we belong on the global stage,” says Dominykas Milasius, co-founder of Delta Biosciences.
“And everything we do is also geopolitical… we have to build up critical value offerings, sciences and other critical technologies, to make our allies understand that it’s probably good to protect Lithuania.”
How Is AI Changing The Way Students Learn At Business School?
Artificial intelligence is the skill set that employers increasingly want from future hires. Find out how b-schools are equipping students to use AI
Business students are already seeing AI’s value. More than three-quarters of business schools have already integrated AI into their curricula—from essay writing to personal tutoring, career guidance to soft-skill development.
BusinessBecause hears from current business students about how AI is reshaping the business school learning experience.
The benefits and drawbacks of using AI for essay writing
Many business school students are gaining firsthand experience of using AI to assist their academic work. At Rotterdam School of Management, Erasmus University in the Netherlands, students are required to use AI tools when submitting essays, alongside a log of their interactions.
“I was quite surprised when we were explicitly instructed to use AI for an assignment,” said Lara Harfner, who is studying International Business Administration (IBA) at RSM. “I liked the idea. But at the same time, I wondered what we would be graded on, since it was technically the AI generating the essay.”
Lara decided to approach this task as if she were writing the essay herself. She began by prompting the AI to brainstorm around the topic, research areas using academic studies and build an outline, before asking it to write a full draft.
However, during this process Lara encountered several problems. The AI-generated sources were either non-existent or inappropriate, and the tool had to be explicitly instructed on which concepts to focus on. It tended to be too broad, touching on many ideas without thoroughly analyzing any of them.
“In the end, I felt noticeably less connected to the content,” Lara says. “It didn’t feel like I was the actual author, which made me feel less responsible for the essay, even though it was still my name on the assignment.”
Despite the result sounding more polished, Lara thought she could have produced a better essay on her own with minimal AI support. What’s more, the grades she received on the AI-related assignments were below her usual average. “To me, that shows that AI is a great support tool, but it can’t produce high-quality academic work on its own.”
Employers concerned about AI who took part in the Corporate Recruiters Survey echo this finding, stating that they would rather graduate management education (GME) graduates use AI as a strategic partner in learning and strategy than as a source of more, faster content.
How business students use AI as a personal tutor
Daniel Carvalho, a Global Online MBA student, also frequently uses AI in his academic assignments, something encouraged by his professors at Porto Business School (PBS).
However, Daniel treats AI as a personal tutor, asking it to explain complex topics in simple terms and deepen the explanation. On top of this, he uses it for brainstorming ideas, summarizing case studies, drafting presentations and exploring different points of view.
“My MBA experience has shown me how AI, when used thoughtfully, can significantly boost productivity and effectiveness,” he says.
Perhaps one of the most interesting ways Daniel uses AI is by turning course material into a personal podcast. “I convert text-based materials into audio using text-to-speech tools, and create podcast-style recaps to review content in a more conversational and engaging way. This allows me to listen to the materials on the go—in the car or at the gym.”
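As a rough illustration of the recap workflow Daniel describes, the sketch below uses the open-source gTTS library for the text-to-speech step. The summarize function and the notes file name are hypothetical placeholders; in practice an LLM would rewrite the notes as a conversational script before conversion.

```python
# Turn written course notes into an audio recap (requires: pip install gTTS).
from gtts import gTTS

def summarize(notes: str) -> str:
    # Placeholder: in practice an LLM would rewrite the notes as a
    # conversational, podcast-style script. Here the notes pass through as-is.
    return notes

with open("course_notes.txt", encoding="utf-8") as f:
    notes = f.read()

script = summarize(notes)
gTTS(text=script, lang="en").save("course_recap.mp3")  # listen in the car or at the gym
```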
While studying his financial management course, Daniel even built a custom GPT using course materials. Much like a personal tutor, it would ask him questions about the material, validate his understanding, and explain any questions he got wrong. “This helped reinforce my knowledge so effectively that I was able to correctly answer all multiple-choice questions in the final exam,” he explains.
Similarly, at Villanova School of Business in the US, Master of Science in Business Analytics and AI (MSBAi) students are building personalized AI bots with distinct personalities. Students embed reference materials into the bot, and those materials then shape how it responds to questions.
“The focus of the program is to apply these analytics and AI skills to improve business results and career outcomes,” says Nathan Coates, MSBAi faculty director at the school. “Employers are increasingly looking for knowledge and skills for leveraging GenAI within business processes. Students in our program learn how AI systems work, what their limitations are, and what they can do better than existing solutions.”
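The Villanova pattern, embedding reference materials so they shape a bot’s answers, is essentially retrieval-grounded prompting. Here is a deliberately simple sketch under that assumption: the chunking and keyword ranking stand in for the embedding search a production system would use, and call_llm is again a placeholder rather than any specific API.

```python
# Minimal retrieval-grounded course bot: answers are constrained to excerpts
# pulled from the embedded reference materials.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model provider here")

def chunk(text: str, size: int = 500) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def answer(question: str, reference_text: str) -> str:
    words = set(question.lower().split())
    # Rank chunks by crude keyword overlap; real systems would use embeddings.
    ranked = sorted(chunk(reference_text),
                    key=lambda c: len(words & set(c.lower().split())),
                    reverse=True)
    prompt = (
        "Answer using only the reference excerpts below. "
        "If they do not cover the question, say so.\n\n"
        + "\n---\n".join(ranked[:3])
        + f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```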
The common limitations of using AI for academic work
Kristiina Esop, who is studying for a doctorate in Business Administration and Management at Estonian Business School, agrees that AI in education must always be used critically and with intention. She warns that students should always be aware of AI’s limitations.
Kristiina currently uses AI tools to explore different scenarios, synthesize large volumes of information, and detect emerging debates—all of which are essential for her work both academically and professionally.
However, she cautions that AI tools are not 100% accurate. Kristiina once asked ChatGPT to map actors in circular economy governance, and it returned a neat, simplified diagram that ignored important aspects. “That felt like a red flag,” she says. “It reminded me that complexity can’t always be flattened into clean logic. If something feels too easy, too certain—that’s when it is probably time to ask better questions.”
To avoid this problem, Kristiina combines the tools with critical thinking and contextual reading, and connects the findings back to the core questions in her research. “I assess the relevance and depth of the sources carefully,” she says. “AI can widen the lens, but I still need to focus it myself.”
She believes such critical thinking when using AI is essential. “Knowing when to question AI-generated outputs, when to dig deeper, and when to disregard a suggestion entirely is what builds intellectual maturity and decision-making capacity,” she says.
This is also what Wharton management professor Ethan Mollick, author of Co-Intelligence: Living and Working with AI and co-director of the Generative AI Lab, believes. He says the best way to work with generative AI is to treat it like a person. “So you’re in this interesting trap,” he says. “Treat it like a person and you’re 90% of the way there. At the same time, you have to remember you are dealing with a software process.”
Hult International Business School, too, expects its students to use AI in a balanced way, encouraging them to think critically about when and how to use it. For example, Rafael Martínez Quiles, a Master’s in Business Analytics student at Hult, uses AI as a second set of eyes to review his thinking.
“I develop my logic from scratch, then use AI to catch potential issues or suggest improvements,” he explains. “This controlled, feedback-oriented approach strengthens both the final product and my own learning.”
At Hult, students engage with AI to solve complex, real-world challenges as part of the curriculum. “Practical business projects at Hult showed me that AI is only powerful when used with real understanding,” says Rafael. “It doesn’t replace creativity or business acumen, it supports it.”
As vice president of Hult’s AI Society, N-AIble, Rafael has seen this mindset in action. The society’s members explore AI ethically, using it to augment their work, not automate it. “These experiences have made me even more confident and excited about applying AI in the real world,” he says.
The AI learning tools students are using to improve understanding
In other business schools, AI is being used to offer faculty a second pair of hands. Nazarbayev University Graduate School of Business has recently introduced an ‘AI Jockey’. Appearing live on a second screen next to the lecturer’s slides, this AI tool acts as a second teacher, providing real-time clarifications, offering alternate examples, challenging assumptions, and deepening explanations.
“Students gain access to instant, tailored explanations that complement the lecture, enhancing understanding and engagement,” says Dr Tom Vinaimont, assistant professor of finance, Nazarbayev University Graduate School of Business, who uses the AI jockey in his teaching.
Rather than replacing the instructor, the AI enhances the learning experience by adding an interactive, AI-driven layer to traditional teaching, transforming learning into a more dynamic, responsive experience.
“The AI Jockey model encourages students to think critically about information, question the validity of AI outputs, and build essential AI literacy. It helps students not only keep pace with technological change but also prepares them to lead in an AI-integrated world by co-creating knowledge in real time,” says Dr Vinaimont.
How AI can be used to encourage critical thinking among students
So, if you’re looking to impress potential employers, learning to work with AI while a student is a good place to start. But simply using AI tools isn’t enough. You must think critically, solve problems creatively and be aware of AI’s limitations.
Most of all, you must be adaptable. GMAC’s new AI-powered tool, Advancery, helps you find graduate business programs tailored to your career goals, with AI-readiness in mind.
After all, working with AI is a skill in itself. And in 2025, it is a valuable one.