
AI Research

The Future of Market Research and Strategy: AI, Big Data & Beyond


In today’s fast-changing business world, accurate market research and sound strategy are essential. Consumer priorities are shifting rapidly, digital transformation is reshaping industries, and competition is intense. Organisations are turning to artificial intelligence (AI), big data, and advanced analytics to understand consumer behaviour, predict market trends, and design future strategies. The future of market research lies in combining technology with human expertise to generate smarter, faster, and more actionable insights.

AI in Market Research

Artificial intelligence is revolutionising the way businesses conduct research. Traditional methods such as surveys and focus groups are now complemented by AI-driven tools. Natural language processing (NLP) and sentiment analysis can scan millions of social media posts, online reviews, and customer feedback items to gauge sentiment in real time.
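
As a rough illustration, sentiment scoring of this kind can be prototyped in a few lines. A minimal sketch, assuming a Python environment with the Hugging Face transformers library installed; the posts are invented placeholders, not real data:

```python
# Minimal sketch: batch sentiment scoring of customer posts.
# Assumes the Hugging Face `transformers` library is installed;
# the posts below are illustrative placeholders.
from transformers import pipeline

# Downloads the library's default sentiment model on first use.
classifier = pipeline("sentiment-analysis")

posts = [
    "Loving the new app update, checkout is so much faster!",
    "Third delayed delivery this month. Considering a switch.",
]

for post, result in zip(posts, classifier(posts)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
    print(f"{result['label']:>8} ({result['score']:.2f})  {post}")
```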

AI-powered chatbots collect qualitative data at scale, while predictive analytics lets organisations anticipate customer needs and preferences. This reduces costs, saves time, and produces more accurate results. Many market research consulting firms already use AI technologies to offer clients deeper insight and a competitive edge in decision-making.

Unlocking Consumer Behaviour

Data is the new currency, and businesses are leveraging big data to make the most of it. From browsing history and purchase records to geolocation and IoT data, companies now have access to unprecedented volumes of information. Big data tools clean, process, and analyse this data to surface patterns and trends that were once hidden.

For example, retailers can estimate regional product demand by combining weather data with purchase history, and streaming services rely on big data to recommend personalised content to users. Used this way, big data ensures that businesses not only understand today’s consumer behaviour but can also predict future behaviour with considerable accuracy.
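
To make the retail example concrete, here is a minimal sketch of joining a purchase log with weather data and fitting a simple trend; all column names and figures are hypothetical:

```python
# Illustrative sketch: relate daily ice-cream sales to temperature by
# joining a purchase log with weather data. Columns and figures are
# hypothetical, not real data.
import pandas as pd
import numpy as np

sales = pd.DataFrame({
    "date": pd.date_range("2025-06-01", periods=5),
    "units_sold": [120, 150, 95, 210, 180],
})
weather = pd.DataFrame({
    "date": pd.date_range("2025-06-01", periods=5),
    "temp_c": [24, 27, 21, 32, 29],
})

df = sales.merge(weather, on="date")

# Simple linear fit: units_sold ≈ slope * temp_c + intercept.
slope, intercept = np.polyfit(df["temp_c"], df["units_sold"], 1)

forecast_temp = 30  # tomorrow's forecast temperature
print(f"Expected demand at {forecast_temp}°C: "
      f"{slope * forecast_temp + intercept:.0f} units")
```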

The Perfect Balance between Human and Machine 

While AI and big data are powerful, the human element remains essential. Machines can surface the “what” and the “how”, but humans supply the “why”. Emotional intelligence, cultural awareness, and ethical considerations all require human interpretation.

The future of market research will rest on a hybrid model in which AI handles large-scale data analysis while researchers and strategists connect those insights to human motivations and values. This balance will help companies craft strategies that are both data-informed and emotionally resonant. Firms that offer strategic consulting services will play an important role in helping organisations blend technical insight with human-centric strategy.

Ethics and Privacy in Data-Driven Research

As companies collect more consumer data, concerns about privacy and ethics become central. Regulations such as the GDPR and CCPA now require strict compliance in how data is managed and used, and consumers expect transparency about how their data is collected and applied.

The future of market research will emphasise responsible practice: transparency, consent, and trust-building will be non-negotiable. Companies that prioritise ethical research will not only comply with legal frameworks but also earn consumer loyalty.

Emerging Technologies on the Horizon

Apart from AI and Big Data, many new technologies will reshape market research:

  • Augmented and Virtual Reality (AR/VR): Simulating product experiences before launch.
  • Blockchain: Providing transparency and authenticity in data collection.
  • IoT (Internet of Things): Streaming continuous real-world data from connected devices.
  • Voice analysis: Extracting insights from voice interactions with smart devices.

Strategy in the Age of Intelligent Insights

Future strategies will go beyond static annual plans. Instead, companies will adopt dynamic strategies shaped by real-time data, and AI-powered scenario modelling will allow organisations to prepare for several potential futures.
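
Scenario modelling of this kind often amounts to simulating many plausible futures rather than producing a single point forecast. A minimal sketch, assuming Python with NumPy; every parameter is invented for illustration:

```python
# Minimal sketch of scenario modelling: simulate many plausible
# demand futures instead of a single point forecast. All parameters
# are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

base_demand = 10_000  # units per quarter
growth = rng.normal(0.03, 0.05, size=10_000)  # uncertain growth rate
# Occasional downside shocks (10% chance of up to -30%).
shock = rng.binomial(1, 0.1, size=10_000) * rng.uniform(-0.3, 0, size=10_000)

scenarios = base_demand * (1 + growth + shock)

print(f"P10 / median / P90 demand: "
      f"{np.percentile(scenarios, 10):.0f} / "
      f"{np.percentile(scenarios, 50):.0f} / "
      f"{np.percentile(scenarios, 90):.0f}")
```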

In addition, personalisation will expand beyond marketing into product design, supply chains, and customer service. Instead of one-size-fits-all approaches, businesses will use adaptive strategies that precisely meet the needs of different customer groups.

Conclusion

The seamless integration of technology and human insight will define the future of market research and strategy. AI and big data will continue to deliver faster, more predictive insights, while newer tools such as AR/VR, IoT, and blockchain enrich the research ecosystem. Yet the human touch, creativity, and ethical judgement remain irreplaceable.

Companies that embrace this hybrid approach will not only understand what consumers want but also anticipate their future needs. By combining technology, data, and human expertise, they can create agile, consumer-centric, and future-proof strategies.



AI Research

Medical Horizons and Bowhead Health Inc. Announce Exclusive Partnership to Bring AI-Powered Clinical Research Solutions to Italy, Turkey, and Cyprus


FLORENCE, Italy, Sept. 2, 2025 /PRNewswire/ — Medical Horizons S.r.l. (medicalhorizons.it), a leading distributor of Artificial Intelligence (AI) solutions for healthcare, today announced an exclusive distribution agreement with Bowhead Health Inc. (bowheadhealth.com), a Canadian innovator in secure health data management and AI-powered clinical trial matching.

Under this agreement, Medical Horizons becomes the exclusive partner for Bowhead Health in Italy, Turkey, and Cyprus, expanding access to advanced technologies that improve clinical trial recruitment, optimize research workflows, and strengthen hospital and research institute capabilities across the region.

Addressing Healthcare’s Urgent Needs
Healthcare systems worldwide face growing challenges from workforce shortages and rising clinical demands. Artificial intelligence is increasingly recognized as a critical tool to help address these pressures, enabling hospitals and researchers to deliver faster, more personalized care.

“Manual clinical trial matching is slow, burdensome, and often misses the genomic details that matter most,” said Francisco Diaz-Mitoma, CEO of Bowhead Health Inc. “Our platform allows hospitals to scan global and local trial databases instantly, helping them connect patients with the right therapies far more efficiently.”

Bowhead’s AI-driven technology reduces time spent on manual searches, simplifies workflows, and provides confidence for both researchers and patients, accelerating progress toward personalized medicine.
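
Bowhead has not published its matching algorithm, but the underlying idea of trial matching can be illustrated with a simple rule-based filter. Everything below (field names, trial names, genomic markers) is hypothetical and is not Bowhead's implementation:

```python
# Hypothetical illustration of clinical trial matching: keep trials
# whose inclusion criteria a patient satisfies. Not Bowhead's
# algorithm; all fields and data are invented for the example.
from dataclasses import dataclass, field

@dataclass
class Trial:
    name: str
    min_age: int
    max_age: int
    condition: str
    required_markers: set[str] = field(default_factory=set)

@dataclass
class Patient:
    age: int
    condition: str
    genomic_markers: set[str]

def matching_trials(patient: Patient, trials: list[Trial]) -> list[Trial]:
    return [
        t for t in trials
        if t.min_age <= patient.age <= t.max_age
        and t.condition == patient.condition
        and t.required_markers <= patient.genomic_markers  # subset test
    ]

trials = [
    Trial("NSCLC-EGFR-01", 18, 75, "NSCLC", {"EGFR+"}),
    Trial("NSCLC-ALK-02", 18, 80, "NSCLC", {"ALK+"}),
]
patient = Patient(age=62, condition="NSCLC", genomic_markers={"EGFR+", "TP53"})

print([t.name for t in matching_trials(patient, trials)])  # ['NSCLC-EGFR-01']
```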

A Strategic Expansion for Medical Horizons
For Medical Horizons, the partnership marks a continuation of its mission to bring best-in-class AI technologies to European healthcare providers.

“This collaboration represents a decisive step in our strategy to deliver practical, high-impact AI solutions,” said Guido Osti, CEO of Medical Horizons. “Bowhead Health has developed a unique platform that combines secure health data management, artificial intelligence, and clinical research. We are proud to guide their expansion in Italy, Turkey, and Cyprus.”

Bowhead Health Inc.
Based in Ottawa, Canada, Bowhead Health has developed a secure digital ecosystem that integrates:

  • An AI-powered trial matching engine for personalized patient recruitment.

  • A de-identified health data platform compliant with GDPR, HIPAA, and global security standards.

  • Collaborative digital flows connecting patients, hospitals, researchers, and pharmaceutical companies.

Bowhead Health is currently validating its technology with leading hospitals in Canada, Europe, India, and the United States, with strong early results.




AI Research

To scale AI and bring Zero Trust security, look to the chips



Enabling secure and scalable artificial intelligence architectures for Defense Department and public sector missions depends on deploying the right compute technologies across the entire AI lifecycle from data sourcing and model training to deployment and real-time inferencing – in other words, drawing conclusions. 

At the same time, the AI pipeline can be secured through hardware-based semiconductor features such as confidential computing, which provide a trusted foundation. This enables Zero Trust principles to be applied across both information technology (IT) and operational technology (OT) environments, with OT having different security needs and constraints compared to traditional enterprise IT systems. Recent DoD guidance on Zero Trust specifically addresses OT systems such as industrial control systems that have become attack vectors for adversaries.
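
As a conceptual illustration of the attestation idea behind confidential computing, a relying party releases secrets only to workloads whose measurement matches a known-good value. The sketch below is a toy; real attestation involves hardware-signed evidence (for example, Intel SGX/TDX quotes), not a bare hash comparison:

```python
# Toy illustration of attestation-gated secret release. Real
# confidential computing verifies hardware-signed evidence; this
# sketch uses a hash comparison only to show the control flow.
import hashlib

KNOWN_GOOD_MEASUREMENT = hashlib.sha256(b"approved-workload-v1.2").hexdigest()

def release_secret(reported_measurement: str) -> str:
    """Release a secret only to a workload with a trusted measurement."""
    if reported_measurement != KNOWN_GOOD_MEASUREMENT:
        raise PermissionError("attestation failed: untrusted workload")
    return "database-credentials"  # placeholder secret

# A workload reports its own measurement; here it matches.
measurement = hashlib.sha256(b"approved-workload-v1.2").hexdigest()
print(release_secret(measurement))
```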

Breaking Defense discussed the diverse roles that chips play across AI and Zero Trust implementations with Steve Orrin, Federal Security Research Director and a Senior Principal Engineer with Intel. 


Breaking Defense: In this conversation we’re going to be talking about chip innovation for public sector mission impact. So what does that mean to you?

Orrin: The way to think about chip innovation for public sector is understanding that public sector writ large is almost a macro of the broader private sector industries. Across the federal government and public sector ecosystem, with some exceptions, you’ll find almost every kind of use case with much of the same usages and requirements that you find across multiple industries in the private sector: logistics and supply chain management, facilities operations and manufacturing, healthcare, and finance. 

When we talk about chip innovation specific for the public sector, it’s this notion of taking private sector technology solutions and capabilities and federalizing them for the specific needs of the US government. There’s a lot of interplay there and, similarly, when we develop technologies for the public sector and for federal missions, oftentimes you find opportunities for commercializing those technologies to address a broader industry requirement. 

With that as the baseline, we look at their requirements and whether there’s scalability of IT systems and infrastructure to support agencies in helping them achieve their goals around enabling the end user to perform their job or mission. In the DoD and specific industries, oftentimes they’ll have a higher security bar, and in the Defense Department there’s an edge component to their mission.

Being able to take enterprise-level capabilities and move them into edge and theater operations where you don’t necessarily have large-scale cloud infrastructure or other network access means you have to be more self-contained, more mobile. It’s about innovations that address specific mission needs. 

One of the benefits of being Intel is that our chips are inside the cloud, the enterprise data center, the client systems, the edge processing nodes. We exist across that entire ecosystem, including network and wireless domains. We can bring the best of what’s being done at those various areas and apply them to specific mission requirements. 

We also see a holistic view of cloud, on-prem, end-user, and edge requirements. We can look at the problem sets that they’re having from a more expansive approach as opposed to a stove pipe that’s looking just at desktop and laptop use cases or just at cloud applications.

This holistic view of the requirements enables us to help the government adopt the right technology for their mission. That comes to the heart of what we do. What the government needs is never one-size-fits-all when it comes to solving public-sector requirements. 

It’s helping them achieve the right size, weight, and power profile, the right security requirements, and the right mission enabling and environmental requirements to meet their mission needs where they are, whether that be cloud utilization or at the pointy edge of the spear.

Zero Trust policies, controls, and technologies should be crafted to meet the requirements of the mission and the enterprise IT and OT technologies involved. (Image courtesy of Intel)

What’s required to enable secure, scalable AI architectures that advance technology solutions for national security?

From an Intel perspective, scalable AI means being able to go both horizontally and vertically to have the right kind of computing architecture for every stage of the lifecycle, from training to tuning, deployment, and inferencing. There are going to be different requirements, both in size, weight, and power (SWaP) and in the horsepower of the actual AI workload that’s running.

Oftentimes you’ll find that the challenge is not the AI training, which everyone focuses on because it feels like the big problem; that’s just the tip of the iceberg. When you look at the challenge, sometimes it’s around data latency or ingestion speeds. How do I get all of this data into the systems?

Maybe it’s doing federated learning because there’s too much data to put it all in one place and it’s all from different sensors. There are actually benefits to pushing that compute closer to where the data is being generated and doing federated learning out at the edge.
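
A minimal sketch of the federated idea (federated averaging over synthetic data, assuming NumPy): each edge site fits a model locally, and only model parameters travel back for aggregation, never the raw sensor data.

```python
# Minimal federated-averaging sketch: each edge site fits a local
# model on its own data; only parameters, never raw data, are sent
# back and averaged. Data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_fit(n: int) -> np.ndarray:
    """One edge site: least-squares fit on locally generated data."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

site_sizes = [200, 500, 300]  # samples per edge site
local_weights = [local_fit(n) for n in site_sizes]

# FedAvg: weight each site's parameters by its sample count.
global_w = np.average(local_weights, axis=0, weights=site_sizes)
print("federated estimate:", global_w.round(3))  # ≈ [2.0, -1.0]
```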

At the heart of why Intel is a key player in this is understanding that it’s not a one size fits all approach from a compute perspective but providing that right compute to the needs of the various places in the horizontal scale. 

At the same time there’s the vertical scale, which is the need to do massive large language model training, or inference across thousands of sensors, and fusion of data across multimodal sensors that are capturing different kinds of data such as video, audio, and RF spectrum in order to get an accurate picture of what’s being seen across logistics and supply chains, for example.

I need to pull in location data of where supplies are across vendor supply chains. I need to be able to pull in information from my project management demand signal to understand what’s needed where, and from mission platforms like planes, vehicles, weapons systems, radar stations, and sensor technologies to know where I’m deploying people. Those are different kinds of data sets and structures that have to be fused together in order to enable supply chain and logistics management at scale. 

Being able to scale up computing power to meet the needs of those various parts is about how we’re providing the right architecture for those different parts of the ecosystem and AI pipeline. 

Intel is helping defense and intelligence agencies adopt AI in ways that are secure, scalable, and aligned with Zero Trust principles, especially in operational technology environments as opposed to IT environments. Explain.

Operational technology has been around for a long time and is distinct from what is known as information technology or enterprise systems, where you have enterprise email and your classic collaboration and document management. 

OT are the things that are not that – everything from fire suppression and alerting systems, HVAC, the robots and machines and error detection technologies that do quality control. Those are the operational technologies that perform the various task-specific functions that support the operations and mission of an organization; they are not your classic IT operations.

One of the interesting transitions over the last many years is that the actual kinds of technology in those OT environments now look and feel a lot like IT. It’s a set of servers or client systems that are performing a fixed function, but the vendors are still your classic laptop and PC OEMs. 

That mixing of IT-style equipment in OT environments has created a tension point over the years when it comes to things like management and how you secure OT systems versus IT, because OT systems are more mission critical. They’re more fixed-function, and they often don’t have the space or the luxury of heaps of security tools monitoring them, because you have real-time reliability requirements like guaranteed uptime.

The DoD is coming out with new Zero Trust guidance specifically for OT, and the reason is because IT Zero Trust principles don’t easily translate to OT environments. There’s different constraints and limitations in OT, as well as some higher-level requirements, so there needs to be an understanding that there is a difference between the two when it comes to applying Zero Trust.

What do you suggest?

One of the first steps that I’ve talked about is getting the right people in the room for those initial phases of policy definition and architectural planning. Oftentimes you’ll find, and we’ve seen this a lot in the private sector, that when they start looking at OT, the IT people come up with security policy and force it on the OT systems. More often than not that fails miserably because OT just isn’t like IT. You don’t have the same flexibility and you have more stringent requirements for the actual operations side of OT. 

That calls for crafting subset policies for that system and then containerizing that from a segmentation or a policy perspective and monitoring against that. The nice thing about OT is you don’t have to worry about every possible scenario. If you take the example of a laptop, users can do almost anything on their laptop. They can browse the Internet, send email, work with documents, collaborate on Teams calls. That means there’s a lot of security I have to worry about across the myriad usages enabled by that PC. 

In an OT environment, you have a much smaller set of what the system is supposed to be doing, which means you can lock down that system and the access to it to just key functions. That gives you a much tighter policy you can apply in OT than you could on the IT side. That way you can craft very specific policies, monitoring, and access controls for that particular OT or mission platform. That is a powerful way of applying it.
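
As a toy illustration of that kind of lockdown, an allowlist policy denies anything not explicitly permitted; the hosts, ports, and process names below are hypothetical:

```python
# Toy allowlist policy for a fixed-function OT node: deny anything
# not explicitly permitted. Hosts, ports, and process names are
# hypothetical.
ALLOWED_FLOWS = {
    ("plc-01", 502),     # Modbus traffic to the controller
    ("historian", 443),  # telemetry upload
}
ALLOWED_PROCESSES = {"sensor_agentd", "watchdogd"}

def flow_permitted(dest_host: str, dest_port: int) -> bool:
    """Default-deny: only enumerated network flows are allowed."""
    return (dest_host, dest_port) in ALLOWED_FLOWS

def process_permitted(name: str) -> bool:
    """Default-deny: only enumerated processes may run."""
    return name in ALLOWED_PROCESSES

print(flow_permitted("plc-01", 502))      # True: a key function
print(flow_permitted("example.com", 80))  # False: not on the allowlist
```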

If you look at some of the guidance that’s coming out, the Navy has just recently published specific OT guidance, and NIST is coming out with OT guidance of its own. It’s about tying the policies to the environment, crafting a subset of security controls specific to the domain, and then leveraging the right technologies you need in order to achieve that goal.

Final thoughts? 

Intel has technology and architectures that provide the right compute at the right place where and when the customer needs it. We understand the vertical and horizontal scale requirements, and provide the security, reliability, and performance for those environments that you need across your mission areas. 

Second, when applying Zero Trust, it’s not one size fits all. You need to craft your Zero Trust policies, controls, and technologies to meet the requirements of your mission and of your enterprise IT and OT technologies.

Then, much of the technology and the security capabilities you need are already built into the system, whether that be network segmentation, secure boot, or confidential computing. The hardware and software that has often already been deployed gives you a lot of those capabilities. You just need to leverage them.

To learn more about Intel and AI visit www.intel.com/usai.




AI Research

Artificial intelligence loses out to humans in credibility during corporate crisis responses


As artificial intelligence tools become increasingly integrated into public relations workflows, many organizations are considering whether these technologies can handle high-stakes communication tasks such as crisis response. A new study published in Corporate Communications: An International Journal provides evidence that, at least for now, human-written crisis messages are perceived as more credible and reputationally beneficial than those authored by AI systems.

The rise of generative artificial intelligence has raised questions about its suitability for replacing human labor in communication roles. In public relations, AI tools are already used for media monitoring, message personalization, and social media management. Some advocates even suggest that AI could eventually author press releases or crisis response messages.

However, prior studies have found that people often view AI-generated messages with suspicion. Despite improvements in the sophistication of these systems, the use of AI can still reduce perceptions of warmth, trustworthiness, and competence. Given the importance of credibility and trust in public relations, especially during crises, the study aimed to evaluate how the perceived source of a message—human or AI—affects how people interpret crisis responses.

The researchers also wanted to assess whether the tone or strategy of a message—whether sympathetic, apologetic, or informational—would influence perceptions. Drawing on situational crisis communication theory, they hypothesized that more accommodating responses might boost credibility and protect organizational reputation, regardless of the source.

“Our interest in understanding how people judge the credibility of AI-generated text grew out of a graduate class Ayman Alhammad (the lead author) took with Cameron Piercy (the third author). They talked about research about trust in AI during the class and the questions we posed in our study naturally grew out of that,” said study author Christopher Etheridge, an assistant professor in the William Allen White School of Journalism and Mass Communications at the University of Kansas.

To explore these questions, the researchers designed a controlled experiment using a hypothetical crisis scenario. Participants were told about a fictitious company called the Chunky Chocolate Company, which was facing backlash after a batch of its chocolate bars reportedly caused consumers to become ill. According to the scenario, the company had investigated the incident and determined that the problem was due to product tampering by an employee.

Participants were then shown one of six possible press releases responding to the crisis. The releases varied in two key ways. First, they were attributed either to a human spokesperson (“Chris Smith”) or to an AI system explicitly labeled as such. Second, the tone of the message followed one of three common strategies: informational (providing details about the incident), sympathetic (expressing empathy for affected customers), or apologetic (taking responsibility and issuing an apology).

The wording of the messages was carefully controlled to ensure consistency across versions, with only the source and emotional tone changing between conditions. After reading the message, participants were asked to rate the perceived credibility of the author, the credibility of the message, and the overall reputation of the company. These ratings were made using standardized scales based on prior research.
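
The paper’s exact statistics are not reproduced here, but a 2 (source) × 3 (strategy) between-subjects design like this one is commonly analysed with a two-way ANOVA. A sketch on synthetic ratings, assuming pandas and statsmodels are available; all numbers are invented:

```python
# Sketch of how a 2 (source) x 3 (strategy) between-subjects design
# is commonly analysed: a two-way ANOVA on simulated 7-point ratings.
# Synthetic data; this does not reproduce the study's actual analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
rows = []
for source, mu in [("human", 4.8), ("ai", 4.4)]:
    for strategy in ["informational", "sympathetic", "apologetic"]:
        for _ in range(75):  # ~450 participants across 6 cells
            rows.append((source, strategy, rng.normal(mu, 1.0)))

df = pd.DataFrame(rows, columns=["source", "strategy", "credibility"])

model = ols("credibility ~ C(source) * C(strategy)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects + interaction
```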

The sample included 447 students enrolled in journalism and communication courses at a public university in the Midwestern United States. These participants were chosen because of their familiarity with media content and their relevance as potential future professionals or informed consumers of public relations material. Their average age was just over 20 years old, and most participants identified as white and either full- or part-time employed.

The results provided clear support for the idea that human authors are still viewed as more credible than AI. Across all three key outcomes—source credibility, message credibility, and organizational reputation—participants rated human-written messages higher than identical messages attributed to AI.

“It’s not surprising, given discussions that are taking place, that people found AI-generated content to be less credible,” Etheridge told PsyPost. “Still, capturing the data and showing it in an experiment like this one is valuable as the landscape of AI is ever-changing.”

Participants who read press releases from a human author gave an average source credibility rating of 4.40 on a 7-point scale, compared to 4.11 for the AI author. Message credibility followed a similar pattern, with human-authored messages receiving an average score of 4.82, compared to 4.38 for AI-authored versions. Finally, organizational reputation was also judged to be higher when the company’s message came from a human, with ratings averaging 4.84 versus 4.49.

These differences, while not massive, were statistically significant and suggest that the mere presence of an AI label can diminish trust in a message. Importantly, the content of the message was identical across the human and AI conditions. The only change was who—or what—was said to have authored it.

In contrast, the tone or strategy of the message (apologetic, sympathetic, or informational) did not significantly influence any of the credibility or reputation ratings. Participants did perceive the tone differences when asked directly, meaning the manipulations were effective. But these differences did not translate into significantly different impressions of the author, message, or company. Even though past research has emphasized the importance of an apologetic or sympathetic tone during a crisis, this study found that source effects had a stronger influence on audience perceptions.

“People are generally still pretty wary of AI-generated messages,” Etheridge explained. “They don’t find them as credible as human-written content. In our case, news releases written by humans are more favorably viewed by readers than those written by AI. For people who are concerned about AI replacing jobs, that could be welcome news. We caution public relations agencies against overuse of AI, as it could hurt their reputation with the public when public reputation is a crucial measure of the industry.”

But as with all research, there are some caveats to consider. The study relied on a fictional company and crisis scenario, which might not fully capture real-world reactions, and participants—primarily university students—may not represent broader public attitudes due to their greater familiarity with AI. Additionally, while the study clearly labeled the message as AI-generated, real-world news releases often lack such transparency, raising questions about how audiences interpret content when authorship is ambiguous.

“We measured credibility and organizational reputation but didn’t really look at other important variables like trust or message retention,” Etheridge said. “We also may have been more transparent about our AI-generated content than a professional public relations outlet might be, but that allowed us to clearly measure responses. Dr. Alhammad is leading where our research effort might go from here. We have talked about a few ideas, but nothing solid has formed as of yet.”

The study, “Credibility and organizational reputation perceptions of news releases produced by artificial intelligence,” was authored by Ayman Alhammad, Christopher Etheridge, and Cameron W. Piercy.



