AI Research
Optimized Artificial Intelligence Responds to Search Preferences Survey
83% of survey respondents prefer AI search over traditional Googling. LLMO agency, Optimized Artificial Intelligence, calls it the “new default,” not a trend.
(PRUnderground) July 9th, 2025
A new survey reported by “Innovating with AI Magazine” confirms what forward-looking brands have already begun to suspect: 83% of users say they now prefer AI search tools like ChatGPT, Perplexity, and Claude over traditional Googling.(1) For Optimized Artificial Intelligence, a leading AI optimization agency founded by SEO veteran Damon Burton, this marks not a momentary shift but the dawn of a new default in digital behavior.
“This survey isn’t surprising. It’s validating,” said Burton, Founder of Optimized Artificial Intelligence and President of SEO National. “Consumers are clearly signaling that they no longer want to wade through pages of links. They want direct, synthesized answers, and they’re finding them through AI search platforms. That changes the entire playbook for SEO.”
The “Innovating with AI Magazine” report notes that ChatGPT now sees over 200 million weekly active users and that Google’s market share has dipped below 90% for the first time in nearly a decade. Tools like Microsoft’s Copilot, Claude by Anthropic, and Perplexity AI are redefining how information is retrieved and who gets cited.
Brands Can’t Rely on Legacy Search Alone
Optimized Artificial Intelligence has been at the forefront of large language model optimization (LLMO), a strategic evolution of SEO that prepares content not just for ranking on SERPs but for retrieval, citation, and trust in generative AI tools.
“The reality is, most businesses are still optimizing for a search engine that’s disappearing from user behavior,” said Burton. “Google isn’t dying, but it’s being re-prioritized. If your content isn’t LLM optimized by being structured, cited, and semantically relevant, you’re already losing opportunities.”
OAI’s proprietary approach to LLMO, also called generative engine optimization (GEO), includes:
- Entity-first schema structuring
- Semantic content clustering for LLM retrieval
- Platform-specific tuning for ChatGPT, Gemini, Claude, Copilot, Perplexity, and more
- Reputation signal optimization to increase brand inclusion in AI-generated summaries
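To make the first bullet concrete: "entity-first schema structuring" generally means publishing schema.org JSON-LD that identifies the brand as an unambiguous entity, so that crawlers and language models can resolve who is speaking. The press release does not describe OAI's actual implementation, so the sketch below is only an illustration of the general technique; the function name, brand, and URLs are placeholders, not anything from the source.

```python
import json

def organization_jsonld(name, url, same_as):
    """Build a minimal schema.org Organization block (entity-first markup).

    `same_as` lists corroborating profiles (LinkedIn, Wikipedia, etc.)
    that help a model tie the page to one real-world entity.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "sameAs": same_as,
    }

# Hypothetical brand used purely for illustration.
markup = organization_jsonld(
    "Example Agency",
    "https://www.example.com",
    ["https://www.linkedin.com/company/example-agency"],
)

# Embed the JSON-LD in a page's <head> as a <script type="application/ld+json"> tag.
print(json.dumps(markup, indent=2))
```

The design point is simply that structured, machine-readable identity signals are easier for retrieval systems to cite than unstructured prose.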
Why This Matters for the Future of Discovery
The “Innovating with AI Magazine” report also highlights challenges: hallucinations, misinformation, and a lack of third-party visibility. But Burton argues this is precisely why strategy matters now more than ever.
“Hallucinations are a technical challenge, but they’re also a signal. LLMs choose what they cite based on structure, clarity, and trust. If your brand isn’t showing up in AI-generated responses, it’s not because AI search is broken. It’s because your content isn’t optimized for how these models think.”
Call to Action for Forward-Thinking Brands
As Google cannibalizes its own SERPs in favor of AI Overviews and third-party visibility continues to shrink, Burton urges brands to adapt, and fast: “This is the end of traditional SEO as we knew it. But it’s the beginning of something better: precision-targeted, AI-friendly optimization that earns trust, not just traffic.”
To learn more about SEO for AI search engines and how to get found and cited across platforms like ChatGPT, Claude, Gemini, Perplexity, and Copilot, visit www.OptimizedArtificialIntelligence.com.
(1) https://innovatingwithai.com/is-ai-search-replacing-traditional-search/
About Optimized Artificial Intelligence
Optimized Artificial Intelligence offers tailored AI solutions designed to enhance business operations and drive growth. Their services include developing custom AI models, automating workflows, and providing data-driven insights to help businesses make informed decisions.
AI Research
Bolt Insight gains SOC 2 Type I for secure AI research platform
Bolt Insight has announced that its AI-powered qualitative research platform, BoltChatAI, has achieved SOC 2 Type I compliance certification.
This certification demonstrates that Bolt Insight has met the five Trust Services Criteria: security, availability, processing integrity, confidentiality, and privacy. SOC 2 is a widely recognised framework developed by the American Institute of Certified Public Accountants (AICPA); a Type I report validates that a company’s internal controls are suitably designed, at a point in time, to protect client data in a reliable and secure manner.
Data protection
The process to achieve SOC 2 Type I involved a comprehensive audit of BoltChatAI’s underlying infrastructure, operational policies, and day-to-day processes. Independent auditors assessed every layer of the platform to ensure that controls were suitably designed and implemented to protect data. The certification covers not only technical protections but also the governance and transparency of the company’s procedures with regard to privacy and confidentiality.
Over 100 brands globally use BoltChatAI, including organisations such as Reckitt, L’Oreal, and Unilever, relying on it for qualitative research that captures insights from sources such as video interviews and chat transcripts. The new certification offers these clients formal assurance of the company’s data security practices.
Commitment to trust
Bolt Insight’s CEO and Co-Founder, Hakan Yurdakul, made clear the company’s approach to balancing the pace of technological adoption with responsible practices:
“We’ve always said trust is the foundation of great research, and that applies just as much to how we treat data as it does to how we speak to people. This certification shows that we’re not only moving fast, but we’re doing it responsibly and transparently.”
The SOC 2 Type I certification, he stated, affirms not only the speed of development but also the company’s commitment to security and openness.
Ethical framework
Bolt Insight’s ethical protocol extends beyond technical and operational compliance. Every aspect of the platform is designed to conform to ethical standards around research participation and data use. The company’s policies include obtaining informed consent from all research participants, providing transparency about how artificial intelligence is used in studies, and a firm stance that client data is never utilised for training AI models.
According to the company, these measures are designed to reassure customers and partners that their information is kept confidential and secure, and that the data generated by their research activity remains under their control at all times.
Platform capabilities
BoltChatAI’s core functionality includes AI-driven moderation to facilitate real-time qualitative research at scale. It is equipped with smart probing techniques, stimulus upload features, multilingual support, and can conduct meta-analysis to identify trends across studies. Its Dynamic Personas feature enables profiles to develop and evolve as more data points are accumulated, supporting more strategic insights for users.
The company also offers BoltQ, a quantitative survey platform, to complement the qualitative focus of BoltChatAI, providing clients with a broader set of tools for consumer research and insight generation.
Bolt Insight has stated that as research methodologies continue to digitise and expand globally, SOC 2 Type I compliance strengthens its proposition by assuring clients of secure, scalable, and ethical operations. The certification is intended to support both ongoing client relationships and future growth by demonstrating accountability in data handling and privacy management practices.
AI Research
Accelerating discovery: The NVIDIA H200 and the transformation of university research
The global research landscape is undergoing a seismic shift. Universities worldwide are deploying NVIDIA’s H200 Tensor Core GPUs to power next-generation AI Factories, SuperPODs, and sovereign cloud platforms. This isn’t a theoretical pivot; it’s a real-time transformation redefining what’s possible in scientific discovery, medicine, climate analysis, and advanced education delivery.
The H200 is the most powerful GPU currently available to academia, delivering the performance required to train foundational models, run real-time inference at scale, and enable collaborative AI research across institutions. And with NVIDIA’s Blackwell-based B200 on the horizon, universities investing in H200 infrastructure today are setting themselves up to seamlessly adopt future architectures tomorrow.
Universities powering the AI revolution
This pivotal shift isn’t a future promise but a present reality. Forward-thinking institutions worldwide are already integrating the H200 into their research ecosystems.
Institutions leading the charge include:
- Oregon State University and Georgia Tech in the US, deploying DGX H200 and HGX clusters.
- Taiwan’s NYCU and University of Tokyo, pushing high-performance computing boundaries with DGX and GH200-powered systems.
- Seoul National University, gaining access to a GPU network of over 4,000 H200 units.
- Eindhoven University of Technology in the Netherlands, preparing to adopt DGX B200 infrastructure.
In Taiwan, national programs like NCHC are also investing in HGX H200 supercomputing capacity, making cutting-edge AI infrastructure accessible to researchers at scale.
Closer to home, La Trobe University is the first in Australia to deploy NVIDIA DGX H200 systems. This investment underpins the creation of ACAMI — the Australian Centre for Artificial Intelligence in Medical Innovation — a world-first initiative focused on AI-powered immunotherapies, med-tech, and cancer vaccine development.
It’s a leap that’s not only bolstering research output and commercial partnerships but also positioning La Trobe as a national leader in AI education and responsible deployment.
Universities like La Trobe are establishing themselves as part of a growing global network of AI research precincts, from Princeton’s open generative AI initiative to Denmark’s national AI supercomputer, Gefion. The question for others is no longer “if”, but “how fast?”
Redefining the campus: How H200 AI infrastructure transforms every discipline
The H200 isn’t just for computer science. Its power is unlocking breakthroughs across:
- Climate science: hyper-accurate modelling for mitigation and prediction
- Medical research: from genomics to diagnostics to drug discovery
- Engineering and material sciences: AI-optimised simulations at massive scale
- Law and digital ethics: advancing policy frameworks for responsible AI use
- Indigenous language preservation: advanced linguistic analysis and voice synthesis
- Adaptive education: AI-driven, personalised learning pathways
- Economic modelling: dynamic forecasts and decision support
- Civic AI: real-time, data-informed public service improvements
AI infrastructure is now central to the entire university mission — from discovery and education to innovation and societal impact.
Positioning Australia in the global AI race
La Trobe’s deployment is more than a research milestone — it supports the national imperative to build sovereign AI capability. Australian companies like Sharon AI and ResetData are also deploying sovereign H200 superclusters, now accessible to universities via cloud or direct partnerships.
Universities that move early unlock more than infrastructure. They strengthen research impact, gain eligibility for key AI grants, and help shape Australia’s leadership on the global AI stage.
NEXTDC’s indispensable role: The foundation for AI innovation
Behind many of these deployments is NEXTDC, Australia’s data centre leader and enabler of sovereign, scalable, and sustainable AI infrastructure.
NEXTDC is already:
- Hosting Sharon AI’s H200 supercluster in Melbourne in a high-density, DGX-certified, liquid-cooled facility
- Delivering ultra-low latency connectivity via the AXON fabric — essential for orchestrating federated learning, distributed training, and multi-institutional research
- Offering rack-ready infrastructure for up to 600kW+, with liquid and immersion cooling on the roadmap
- Enabling cross-border collaboration with facilities across every Australian capital and proximity to international subsea cable landings
The cost of inaction: Why delay is not an option in the AI race
The global AI race is accelerating fast, and for university leaders, the risk of falling behind is real and immediate. Hesitation in deploying advanced AI infrastructure could lead to lasting disadvantages across five critical areas:
- Grant competitiveness: Top-tier research funding increasingly requires access to state-of-the-art AI compute platforms.
- Research rankings: Leading publication output and global standing rely on infrastructure that enables high-throughput, data-intensive AI research.
- Talent attraction: Students want practical experience with cutting-edge tools. Institutions that can’t provide this will struggle to attract top talent.
- Faculty recruitment: The best AI researchers will favour universities with robust infrastructure that supports their work.
- Innovation and commercialisation: Without high-performance GPUs, universities risk slowing their ability to generate start-ups, patents, and economic returns.
Global counterparts are already deploying H100/H200 infrastructure and launching sovereign AI programs. The infrastructure gap is widening fast.
Now is the time to act: lead, don’t lag.
The universities that invest today won’t just stay competitive. They’ll define the future of AI research and discovery.
What this means for your institution
For Chancellors, Deans, CTOs and CDOs, the message is clear: the global AI race is accelerating. Delay means risking:
- Lower grant competitiveness
- Declining global research rankings
- Talent loss among students and faculty
- Missed innovation and commercialisation opportunities
The infrastructure gap is widening — and it won’t wait.
Ready to lead?
The universities that act now will shape the future. Whether it’s training trillion-parameter LLMs, powering breakthrough medical research, or leading sovereign AI initiatives, H200-grade infrastructure is the foundation.
NEXTDC is here to help you build it.
Want to explore the full article?
Read the complete breakdown of the H200-powered university revolution and how NEXTDC is enabling it.