Tools & Platforms
Major Threat or Just the Next Tech Thing?
Story Highlights
- U.S. adults divided over whether AI poses a novel technology threat
- Majority do foresee AI taking important tasks away from humans
- Most say they will avoid embracing AI as long as possible
WASHINGTON, D.C. — As artificial intelligence transitions from abstraction to reality, U.S. adults are evenly divided on its implications for humankind. Forty-nine percent say AI is “just the latest in a long line of technological advancements that humans will learn to use to improve their lives and society,” while an equal proportion say it is “very different from the technological advancements that came before, and threatens to harm humans and society.”
Despite this split assessment, a clear majority (59%) say AI will reduce the need for humans to perform important or creative tasks, while just 38% believe it will mostly handle mundane tasks, freeing humans to do higher-impact work.
And perhaps reflecting AI’s potential to diminish human contributions, 64% plan to resist using it in their own lives for as long as possible rather than quickly embracing it (35%).
Majorities Expect AI to Eclipse the Telephone, Internet in Changing Society
Americans may not be convinced that AI poses a threat to humanity, but majorities foresee it having a bigger impact on society than did several major technological advancements of the past century.
Two-thirds (66%) say AI will surpass robotics in societal influence, and more than half say it will exceed the impact of the internet (56%), the computer (57%) and the smartphone (59%). Just over half (52%) think AI will have more impact than the telephone did when it was introduced.
Familiarity Breeds Comfort?
Americans’ perceptions of the impact AI will have on society don’t differ much by gender, age or other characteristics. Most demographic groups are closely split over whether AI is just the next technological thing versus a novel threat. But attitudes vary significantly by people’s exposure to AI.
Seventy-one percent of daily users of generative AI (programs like ChatGPT and Microsoft Copilot that can create new content, such as text, images and music) say AI is just another technological advancement. By contrast, only 35% of those who never use generative AI agree.
This 36-percentage-point gap is wider than the corresponding differences between users and nonusers of other AI applications in confidence that AI can be harnessed for good. Users and nonusers of virtual assistants (like Amazon Alexa and Apple Siri) differ by 27 points in their view that AI will benefit humans, while the gaps between users and nonusers of personalized content (such as apps that make movie and product recommendations) and smart devices (like robotic vacuums and fitness trackers) are roughly 20 points.
Personalized Content Now Routine; Generative AI Still Novel
ChatGPT reportedly became the fastest-growing app ever after its public launch in November 2022. However, adoption of generative AI among U.S. adults remains limited relative to other types of AI. Less than a third of U.S. adults report using generative AI tools daily or weekly. About a quarter use them less often than that, while 41% don’t use them at all.
At the same time, more than four in 10 adults say they use voice recognition/virtual assistants (45%) or smart devices (41%) at least weekly. And nearly two-thirds (65%) report frequent use of personalized content.
Demographic Gaps Greatest for Generative AI Adoption
The broad adoption of personalized content is reflected in the relative uniformity of its use across demographic groups. The same is true for virtual assistants and smart devices, except that — possibly reflecting their expense — the use of smart devices is greater among upper- than middle- and lower-income groups and, relatedly, among college-educated and employed adults. Smart devices are also the one technology used more often by women (44%) than men (37%).
On the other hand, there are sizable differences by age, education, employment and gender in the use of generative AI.
- The rate of using generative AI daily or weekly is highest among 18- to 29-year-olds (43%) and lowest among seniors (19%).
- There is an eight-point difference by gender, with more men (36%) than women (28%) using it. However, the gender gap is greater among adults 50 and older than among those 18 to 49.
- Employed adults (37%) are nearly twice as likely as nonworking adults (20%) to be regularly using generative AI.
Bottom Line
While Americans are split over whether AI is a routine step in the evolution of technology or a unique threat, most expect it to diminish the need for human creativity and are hesitant to fully adopt it personally. For now, positive views of AI are closely linked with people’s experience with it, rather than their personal demographics. The implication is that as usage expands, acceptance may follow.
Murky AI Rules Leave Enterprises Grappling With Self-Regulation
Thanks to the ever-shifting nature of AI, regulating the tech can be a moving target.
The looming effective date for some EU AI Act rules offers a case in point. European tech firms have rallied in opposition, with a group of more than 45 companies — including Mistral AI, Airbus and ASML — writing a letter to the president of the European Commission, Ursula von der Leyen, calling for the postponement of rules that would rein in models in favor of a more “innovation-friendly regulatory approach.”
The signatories, calling themselves the EU AI Champions Initiative, claim that Europe’s competitiveness in the race for AI innovation will be “disrupted” by unclear, overlapping and increasingly complex regulations in the region:
- “This puts Europe’s AI ambitions at risk, as it jeopardizes not only the development of European champions, but also the ability of all industries to deploy AI at the scale required by global competition,” the letter claims.
- The EU AI Act was passed last year with the goal of curbing risks posed by AI. Giants like OpenAI must be in compliance by August or face fines.
Despite the fear that regulation will stymie innovation, rules like these simply aim to ensure that the technology is being developed in a way that won’t cause more harm than good, said Bill Wong, AI research fellow at Info-Tech Research Group. “It’s possible to innovate and still keep safety in mind,” said Wong.
The EU is often a trendsetter in regulation, he added. Many governments followed the lead of the region’s General Data Protection Regulation, whose groundbreaking privacy rules went into effect in 2018. AI regulation may follow a similar path – though likely not in “innovation-driven and very legislation-light” regions like the US, he noted.
With tech that’s constantly evolving, regulation can be a tricky beast, said Wong, especially as many regulators operate “at a snail’s pace.” Instead of regulating the tech itself, “most folks around the world are going to measure the risk by the context,” he said.
And demand for this regulation is high among enterprises and IT leaders, said Wong. Many companies don’t trust the handful of Big Tech vendors that pull the strings of the most powerful AI models to properly manage the risks, he said.
The only option that enterprises have as it stands is to self-regulate, Wong said. Adopting risk management frameworks, like those released by the National Institute of Standards and Technology, International Organization for Standardization, and the Organisation for Economic Cooperation and Development, could provide companies with at least some protection as regulation develops.
Without risk management, whether an enterprise is beholden to regulation or not, the tech presents major risks for both the companies and the users putting it into practice, he said.
“I think everybody would agree: It’s a free-for-all,” said Wong. “It’s the Wild West right now.”
Creating a Secure, Private and Safe Autonomous Future with Quantum Computing and AI Technologies
ByteSafe’s CTO Raghavan Chellappan offers commentary on creating a secure, private, and safe autonomous future with quantum computing and AI. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.
Quantum computing is changing classical architecture, information processing and security frameworks, and offers a path to address key security, privacy, and safety issues in autonomous systems. Emerging technologies such as generative AI (GenAI), 5G expansion and 6G transitions, augmented and virtual reality (AR/VR), the Internet of Things (IoT), blockchain, and edge computing are proliferating at a fast pace, disrupting and revolutionizing whole industries and fundamentally shifting the digital landscape.
Unlike in earlier innovation cycles, these technologies are accelerating and maturing in parallel. Such an evolutionary pattern coupled with the increasing intersection between the technologies poses significant threats, vulnerability risks, ethical concerns and regulatory challenges to existing applications and growing connected autonomous systems.
Such complexity makes it harder to manage, secure and safeguard the flow of data across distributed environments (cloud, on-premise or hybrid). It’s worth recognizing that quantum technologies can be used in a variety of ways from information processing to data encryption, and if incorrectly implemented, can even facilitate data breaches. As a result, how quantum approaches are used matters.
One-off, non-integrated or silver bullet solutions will not resolve problems as businesses adopt technological advancements. Instead, the solution set should start with a shift in the mindset of how humans engage and collaborate with autonomous systems securely and safely as they adopt and transition into an agentic-driven digital way of life.
In a human-centric approach, users and enterprises move beyond relying on secure, anonymized connections toward proactive, enhanced data security and protection based on a decentralized, open security framework.
Key Elements in the Transition to Quantum Computing
Modern architecture embraces greater levels of autonomy to address industry needs for greater productivity. How enterprises design, develop and deploy software applications is changing—moving towards a greater integration of autonomous systems and converging technologies supporting new methods of architectural design. To understand how this works it’s useful to look at the shift from classical to quantum computing.
It is useful to note that regardless of whether they operate in classical or quantum computing environments, autonomous systems need encryption, security, privacy and safety requirements to ensure data are protected.
Architecture and Information Processing
While classical and quantum computing are based on different architectures and process information differently, they also share some common elements.
Classical computing encodes, processes and stores information in bits. A classical bit uses the base-2 numbering system and can be in only one of two states, ‘0’ or ‘1’, akin to a flipped coin showing heads (‘0’) or tails (‘1’). These two values can be represented in two dimensions, and measurements of a classical bit are deterministic in nature.
Classical computers execute operations sequentially (one instruction at a time), applying Boolean algebra (binary variables and logic gates) to process bits (‘0’ = off/fail, ‘1’ = on/pass), manipulating and transforming the information according to the desired calculation (inputs, processing, outputs) and presenting a string of bits as the output.
Interest in “quantum computation” has grown substantially in recent years. Quantum architecture consists of quantum circuits, quantum bits (Qubits), and quantum gates on which all operations are performed. Quantum computing is based on quantum mechanics (including the principles of superposition and entanglement), so even though qubits still rely on “0s” and “1s” a single qubit can be in one of infinitely many superpositions of |0⟩ and |1⟩ states.
Thus, these values can be visualized in “3D” or three dimensions, and quantum qubit measurements are probabilistic in nature rather than deterministic. Quantum computing, with additional layers required to process information, is consequently more complicated compared to classical computing. This complexity in encoding, processing and storing information in qubits makes quantum information more prone to errors, which in turn reduces stability in quantum systems and makes it harder to manipulate compared to classical information.
It’s worth remembering that while qubits are primarily used in processing information, the output still needs to be presented in terms of the binary ‘0s’ and ‘1s’ of classical computing.
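The deterministic-versus-probabilistic distinction above can be sketched in a few lines of Python. This is a classical simulation of a single-qubit measurement, not real quantum hardware; the `measure` helper and the 10,000-shot experiment are purely illustrative:

```python
import math
import random

# A classical bit is deterministic: reading it always yields the stored value.
classical_bit = 1
assert classical_bit in (0, 1)

# A qubit state a|0> + b|1> is described by two amplitudes with a^2 + b^2 = 1
# (real amplitudes, for simplicity). Measurement collapses the state to 0 with
# probability a^2, or to 1 with probability b^2.
def measure(amp0: float, amp1: float) -> int:
    assert math.isclose(amp0**2 + amp1**2, 1.0, abs_tol=1e-9)
    return 0 if random.random() < amp0**2 else 1

# Equal superposition (a Hadamard gate applied to |0>): amplitudes 1/sqrt(2).
a = b = 1 / math.sqrt(2)
samples = [measure(a, b) for _ in range(10_000)]
print(sum(samples) / len(samples))  # roughly 0.5: outcomes are probabilistic
```

Running the same read twice on the classical bit always agrees; running the measurement twice on a fresh superposition does not, which is exactly the probabilistic behavior described above.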
Cryptography and Encryption
A cryptographic algorithm (e.g., symmetric or asymmetric) is one of the most basic controls available to protect sensitive information from unauthorized disclosure through encryption, in many different environments including autonomous systems. Cryptography uses mathematical algorithms, in protocols such as Transport Layer Security (TLS) and Secure Shell (SSH), to transform information into a form that is not readable by unauthorized individuals, while giving authorized individuals the ability to transform that information back into readable form using a decryption algorithm.
Classical computing uses traditional encryption techniques that rely on hard mathematical problems to secure communications and resist attacks. These include Rivest-Shamir-Adleman (RSA); Elliptic Curve Cryptography (ECC); the Elliptic-Curve Diffie-Hellman (ECDH) key agreement protocol; the Elliptic Curve Digital Signature Algorithm (ECDSA), an elliptic-curve variant of the Digital Signature Algorithm (DSA) standardized by the National Institute of Standards and Technology (NIST); and the Advanced Encryption Standard (AES).
Quantum computing enables Shor’s factoring algorithm, which combines modular arithmetic, quantum parallelism and the quantum Fourier transform to factor large numbers exponentially faster than the best known classical methods. Many traditional cryptographic techniques remain relevant in a quantum era; however, they must be strengthened to work effectively in quantum environments.
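To see why fast factoring threatens RSA, consider a toy example with deliberately tiny primes (real RSA moduli are 2048 bits or more). Anyone who can factor the public modulus recovers the private key, and factoring is precisely the step a quantum computer would accelerate; the `crack` helper below brute-forces the factorization classically, which is only feasible because the numbers are tiny:

```python
# Toy RSA with deliberately tiny primes (real keys use 2048+ bit moduli).
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)            # encrypt with the public key
assert pow(cipher, d, n) == msg    # decrypt with the private key

# An attacker who can factor n recovers the private key outright;
# factoring is exactly the step Shor's algorithm accelerates.
def crack(n: int, e: int, cipher: int) -> int:
    f = next(i for i in range(2, n) if n % i == 0)   # brute-force factoring
    d_recovered = pow(e, -1, (f - 1) * (n // f - 1))
    return pow(cipher, d_recovered, n)

print(crack(n, e, cipher))  # prints 42: the plaintext, recovered without d
```

With a 2048-bit modulus the brute-force loop is hopeless for classical machines, but a sufficiently large quantum computer running Shor’s algorithm would perform the equivalent factorization efficiently.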
Security, Privacy, Protection, and Safety
Current centralized architectures, infrastructure and data repositories lack transparency in data collection, rely on fragmented processing and storage, suffer from poor management and governance, and offer only limited safety and protection, minimal data privacy protocols, and lax security controls.
The large, centralized data repositories holding an individual’s personal, sensitive and private records, suffering from these key limitations, are therefore prone to cyberattacks and regular breaches. Organizations rely on public-key cryptography to secure their online transactions and communication, and any compromise of these systems (including autonomous) has far-reaching consequences.
Transitioning from classical computing techniques to quantum computing algorithms and adopting AI technologies, including AI agents or Agentic AI solutions, is fundamentally transforming the way in which software/application systems are designed, built, implemented, integrated and operated across the enterprise.
However, the transition (e.g., classical to quantum, changes in software development practices, and the use of AI agents) has also opened the door to high-profile security breaches and data leaks in which large amounts of confidential and sensitive records have been exposed. These breaches occur in part because emerging technologies such as quantum computing and agentic, multi-agent AI can also be turned against traditional encryption algorithms and methods.
Despite the use of cybersecurity practices, currently used encryption protocols remain vulnerable to quantum attacks thereby elevating risk, compromising confidentiality and integrity of sensitive information systems, and reducing data privacy, protection, and security.
The impact of emerging technologies is not limited to encryption algorithms but also affects data protection and privacy. While data anonymization and pseudonymization techniques which mask certain pieces of data can provide some level of safety and protection against classical attacks, these techniques may be insufficient against quantum attacks.
Thus, there is an urgent need for post-quantum encryption methods and quantum-resistant cryptography to protect existing systems, as we build and migrate to NextGen autonomous systems.
How to Secure and Protect Data in an Autonomous Future
Data security is fundamental to trusting autonomous systems. The current networking infrastructure has significant security vulnerabilities that have remained unresolved for many years. Quantum computing offers the potential to advance security which is critical for autonomous systems. The development of quantum-resistant cryptography and secure multi-party computation protocols requires significant advances in theoretical computer science and mathematics.
Decentralization
Addressing the challenges that come with the ubiquitous use of advanced computing technologies requires a strategic shift away from the centralized architecture control models used today, such as the Border Gateway Protocol (BGP), the standard interdomain routing protocol, and the public key infrastructure (PKI), a common security foundation. What is needed instead is an open security architecture that is more comprehensive, operates independently of external authority, and is decentralized, with governance guardrails embedded to enable speed, reuse, and control.
Decentralization is key to the adoption of quantum computing, AI technology and cryptography because it improves user trust, security, and transparency. With decentralized quantum computing, decision-making and security control are distributed across the enterprise system, allowing the organization to build resistance to censorship, eliminate single points of failure, enable secure and efficient communication, and minimize data manipulation, because no single system entity has exclusive control.
A decentralized security framework that modifies the current underlying business processes and architecture and uses encryption, anonymization and tokenization techniques underpinned by standards, offers a viable solution that prioritizes software and system-level optimizations.
Unified Approach to Security
In a quantum environment, classical encryption methods, particularly those based on public-key cryptography, do not fully secure information even when used within modern digital security frameworks. Quantum-resistant protocols, together with quantum key distribution, are required to prevent breaches and securely transmit sensitive data. For example, cryptographic techniques such as lattice-based cryptography and secure multi-party computation protocols can resist quantum attacks.
While quantum-resistant protocols offer viable solutions and techniques like lattice-based cryptography offers many advantages, there are several challenges associated with implementing these technologies. One of the main challenges is the need for high-performance computing resources to solve lattice problems efficiently.
Another challenge is the lack of standardization and interoperability between different lattice-based cryptographic systems. Overall, further development is needed especially in the areas of data anonymization and tokenization in autonomous systems to ensure these new solutions operate well to secure and protect data.
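A minimal sketch of the lattice-based idea, loosely following Regev’s learning-with-errors (LWE) construction, shows why these systems resist the factoring attack that breaks RSA: security rests on recovering a secret from noisy linear equations rather than on factoring. The parameters below are toy values chosen for readability; standardized schemes such as NIST’s ML-KEM use far larger, carefully chosen parameters:

```python
import random

q, n, m = 257, 8, 32          # toy parameters; real schemes use far larger ones
random.seed(0)
s = [random.randrange(q) for _ in range(n)]                 # secret key

def noisy_sample():
    # One LWE sample: a random vector a, and b = <a, s> + e mod q,
    # where e is a small error term that hides the exact linear relation.
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

pub = [noisy_sample() for _ in range(m)]                    # public key

def encrypt(bit: int):
    # Combine a few public samples and shift by q//2 to encode a 1-bit.
    subset = [random.choice(pub) for _ in range(4)]
    u = [sum(a[i] for a, _ in subset) % q for i in range(n)]
    v = (sum(b for _, b in subset) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v) -> int:
    # Subtract <u, s>; the residue is near 0 for bit 0, near q//2 for bit 1.
    d = (v - sum(ui * si for ui, si in zip(u, s))) % q
    return 0 if min(d, q - d) < q // 4 else 1

assert decrypt(*encrypt(0)) == 0 and decrypt(*encrypt(1)) == 1
```

An eavesdropper sees only noisy linear combinations; stripping the noise to recover `s` is the lattice problem believed hard even for quantum computers, which is the basis of the quantum resistance claimed above.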
To future-proof data security, a proactive cybersecurity mindset and a unified, systemic approach are key to detecting and suppressing the propagation of attacks. What is required is a solution that automates workflows; enhances visibility, protection, and compliance; and simplifies security by providing a clear view of data security and overall risk.
Such a solution is seen in the Integrated Decentralized Security Framework (IDSF) which can help organizations identify, assess, and manage data security risks across multiple and distributed environments based on elements like:
Elements of an Integrated Decentralized Security Framework (IDSF)
- Embracing a human-centered process-oriented approach that both harmonizes and enhances trust.
- Continuous monitoring and improving data security operations.
- Implementing strong encryption, masking, and tokenization to protect data at rest and in motion.
- Establishing a quantum/post-quantum ready foundation to protect data against AI and quantum computer powered threats.
- Ensuring compliance with strict international data protection regulations, privacy laws, and compliance frameworks (GDPR, CCPA, PCI, HIPAA).
- Promoting policy-driven data governance.
- Classifying and categorizing data based on sensitivity and business value, while assessing risks.
- Applying data protection, including encryption, and securing the metadata.
- Building quantum/post-quantum ready controls to counter future AI and quantum-powered threats.
- Exploring and implementing advanced modern cryptographic techniques and algorithms, such as crypto-agility and perfect secrecy, which offer reactive and proactive approaches, respectively, to securing information and data in computer systems.
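Several of the elements above (data classification, tokenization, masking, policy-driven governance) can be illustrated with a short field-level protection sketch. The `POLICY` table, key handling and helper names are hypothetical, not part of any IDSF specification; a production system would pull keys from a KMS or HSM and use a vetted tokenization service:

```python
import hashlib
import hmac
import secrets

# Hypothetical field-level policy: which protection applies to each field,
# reflecting classification of data by sensitivity.
POLICY = {"ssn": "tokenize", "email": "mask", "name": "none"}
TOKEN_KEY = secrets.token_bytes(32)   # in practice, held in a KMS/HSM

def tokenize(value: str) -> str:
    # Keyed, deterministic token: the same input always yields the same token,
    # but the original value cannot be recovered without the key.
    return hmac.new(TOKEN_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def mask(value: str) -> str:
    # Partial masking for quasi-identifiers such as email addresses.
    head, _, domain = value.partition("@")
    return head[:1] + "***@" + domain if domain else "***"

def protect(record: dict) -> dict:
    out = {}
    for field, value in record.items():
        action = POLICY.get(field, "tokenize")   # default-deny: protect unknowns
        out[field] = {"tokenize": tokenize, "mask": mask}.get(action, lambda v: v)(value)
    return out

print(protect({"name": "Ada", "email": "ada@example.com", "ssn": "123-45-6789"}))
```

The default-deny fallback (tokenize anything not explicitly classified) mirrors the policy-driven governance element: unclassified data is treated as sensitive until reviewed.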
These solutions aim to protect data in a quantum world where AI systems reflect the data on which they are trained, and classical encryption algorithms may no longer be secure. There are several new techniques that are in the developmental phase for securing and protecting data in a quantum world. Adopting a decentralized unified framework pivots away from the centralized models and shifts towards user-centric systems that empower individuals to maintain control over their personal data and take back ownership of information.
A decentralized unified framework extends traditional security, by emphasizing a proactive, comprehensive, and adaptive approach based on core security principles to identify new threats, improve capabilities, and manage emergent risks.
Leveraging Strengths of Quantum and Classical Computing
Quantum computing, with its ability to process vast amounts of information simultaneously, is both process and resource intensive. Qubits in particular are fragile and face significant obstacles in reliability and scalability. Until the technical challenges inherent in this still-nascent technology are resolved, businesses would benefit from a hybrid computing model that splits tasks between classical and quantum machines, leveraging the strengths of each.
Classical computers can handle most mundane tasks/operations through classical algorithms allowing the more complex and highly specialized functions to be delegated to a shared quantum computing infrastructure.
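The classical/quantum split described above amounts to routing each workload to the right backend. A hypothetical dispatcher might look like the sketch below; the task names, the `QUANTUM_ELIGIBLE` set and the queue labels are illustrative, not a real API:

```python
# Hypothetical task router for a hybrid model: mundane work stays on
# classical infrastructure, while designated specialized jobs are queued
# for a shared quantum backend.
QUANTUM_ELIGIBLE = {"factoring", "molecular_simulation", "portfolio_optimization"}

def route(task: str) -> str:
    return "quantum_queue" if task in QUANTUM_ELIGIBLE else "classical_pool"

jobs = ["etl", "reporting", "molecular_simulation", "factoring"]
print({job: route(job) for job in jobs})
```

In a real deployment the eligibility test would weigh problem size, qubit requirements and cost rather than a fixed name list, but the shape of the decision is the same: quantum resources are reserved for the few functions where they outperform classical machines.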
Instead of developing separate security solutions for classical, quantum, and AI-driven systems, given the convergence of all three in this current transition period, it would be beneficial to blend classical, quantum and AI techniques to develop unified approaches to security and protection that leverage the strengths of each computing method while minimizing their weaknesses.
To be prepared to lead in the quantum world and set the stage for a post-quantum era, enterprises must work to bridge the gap to a quantum-enabled future. Over time, businesses should transition to quantum computing as needed for relevant complex functions while continuing to leverage their existing classical computing assets.
Final Thoughts
Data security is in a state of transformation, and the future lies in combining powerful technologies like quantum computers and artificial intelligence (AI) to create value for organizations. The combination of AI and quantum technology involves the convergence of machine learning (ML), quantum algorithms and quantum computing to create new solutions.
Businesses must decide what proportion of their enterprise relies on classical vs quantum computing and how to integrate AI technologies into their functions and choose to expand their quantum capabilities accordingly while prioritizing security, privacy, protection and safety.
The proposed Integrated Decentralized Security Framework (IDSF) enhancements would provide safer and more secure environments, but they still require further research and development to make autonomous systems more resistant to interception and to enhance their security. Furthermore, a unified, decentralized ecosystem approach minimizes the risk of data privacy and security losses and allows individuals to retain greater control over their information.
How AI and Technology Are Powering the Future of Healthcare – Conduit Street
MACo’s 2025 Summer Conference Solutions Showcase features thought leaders and industry experts presenting resources and best practices to assist local governments. Check out this session hosted by Kaiser Permanente!
About Kaiser Permanente:
Founded in 1945, Kaiser Permanente is recognized as one of America’s leading healthcare providers and not-for-profit health plans. They currently serve more than 11.3 million members in eight states and the District of Columbia. Care for members and patients is focused on their total health and guided by their personal physicians, specialists, and team of caregivers. Their world-class medical teams are supported by industry-leading technology advances and tools for health promotion, disease prevention, care delivery, and chronic disease management. Kaiser Permanente (KP) exists to provide high-quality, affordable healthcare services and to improve the health of its members and the communities it serves. They are trusted partners in total health, collaborating with people to help them thrive and creating communities that are among the healthiest in the nation.
SOLUTIONS SHOWCASE SESSION:
Title: Reimagining Care: How AI and Technology Are Powering the Future of Healthcare
Description: This session explores how a large integrated healthcare organization uses technology and AI to improve care delivery and clinical outcomes. Speakers will discuss responsible AI—focusing on ethical, transparent, and explainable tools—and why eliminating bias is essential to building trust and delivering equitable care. Our goal is to strengthen patient connections, surface relevant insights at the point of care, and support better care decisions by equipping clinicians with smart, intuitive tools. By integrating advanced technology into clinical practice, Kaiser Permanente is making care more seamless, efficient, and high-quality. Join this session to learn how these innovations are helping Kaiser Permanente enhance the patient experience and transform how healthcare is delivered.
Date: 8/15/2025
Time: 9:00 AM – 9:30 AM
Be sure to register for MACo’s Summer Conference to attend this session and many more!
MACo’s Summer Conference, “Funding the Future: The Evolving Role of Local Government,” will be held at the Roland Powell Convention Center in Ocean City, MD, on August 13-16, 2025.