Tools & Platforms
Creating a Secure, Private and Safe Autonomous Future with Quantum Computing and AI Technologies
ByteSafe’s CTO Raghavan Chellappan offers commentary on creating a secure, private, and safe autonomous future with quantum computing and AI. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.
Quantum computing is changing classical architecture, information processing, and security frameworks, and offers a path to address key security, privacy, and safety issues in autonomous systems. Emerging technologies such as generative AI (GenAI), 5G expansion and 6G transitions, augmented and virtual reality (AR/VR), the Internet of Things (IoT), blockchain, and edge computing are proliferating at a fast pace, disrupting and revolutionizing whole industries and fundamentally shifting the digital landscape.
Unlike in earlier innovation cycles, these technologies are accelerating and maturing in parallel. This evolutionary pattern, coupled with the increasing intersection between the technologies, poses significant threats, vulnerability risks, ethical concerns, and regulatory challenges to existing applications and to the growing population of connected autonomous systems.
Such complexity makes it harder to manage, secure, and safeguard the flow of data across distributed environments (cloud, on-premises, or hybrid). It's worth recognizing that quantum technologies can be used in a variety of ways, from information processing to data encryption, and if incorrectly implemented can even facilitate data breaches. As a result, how quantum approaches are used matters.
One-off, non-integrated or silver bullet solutions will not resolve problems as businesses adopt technological advancements. Instead, the solution set should start with a shift in the mindset of how humans engage and collaborate with autonomous systems securely and safely as they adopt and transition into an agentic-driven digital way of life.
In a human-centric approach, users and enterprises go beyond relying on secure, anonymized connections to proactive enhanced data security and protection based on a decentralized, open security framework.
Key Elements in the Transition to Quantum Computing
Modern architecture embraces greater levels of autonomy to address industry needs for greater productivity. How enterprises design, develop and deploy software applications is changing—moving towards a greater integration of autonomous systems and converging technologies supporting new methods of architectural design. To understand how this works it’s useful to look at the shift from classical to quantum computing.
It is useful to note that regardless of whether they operate in classical or quantum computing environments, autonomous systems must meet encryption, security, privacy, and safety requirements to ensure data are protected.
Architecture and Information Processing
While classical and quantum computing are based on different architectures and process information differently, they also share some common elements.
Classical computing encodes, processes, and stores information in bits. A classical bit uses the base-2 numbering system and can be in only one of two states, '0' or '1', akin to a flipped coin showing heads ('0') or tails ('1'), or vice versa. These two values exist in two dimensions only, and measurements are deterministic in nature.
Classical computers execute sequential operations (one instruction at a time), applying Boolean algebra—binary variables and logic gates—to process bits ('0' = off/fail, '1' = on/pass), manipulating and transforming the information according to the desired calculation (inputs, processing, outputs) and presenting a string of bits as the output.
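As a minimal illustration of the gate-based processing just described, the sketch below builds a half adder from XOR and AND gates: two classical bits in, a fully deterministic sum and carry out. The function name is ours, chosen for illustration.

```python
# The Boolean-gate processing described above, in miniature: a half
# adder combines two classical bits into a sum (XOR) and a carry (AND).
# Every input maps to exactly one output — deterministic, unlike the
# probabilistic qubit measurements discussed below.
def half_adder(a: int, b: int) -> tuple[int, int]:
    return a ^ b, a & b  # (sum bit, carry bit)

assert half_adder(0, 0) == (0, 0)
assert half_adder(1, 0) == (1, 0)
assert half_adder(1, 1) == (0, 1)  # 1 + 1 = binary '10'
```

Chaining such gates is all a classical processor does; the complexity lies in scale, not in the physics of the bit itself.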
Interest in quantum computation has grown substantially in recent years. Quantum architecture consists of quantum circuits, quantum bits (qubits), and quantum gates on which all operations are performed. Quantum computing is based on quantum mechanics (including the principles of superposition and entanglement), so even though qubits are still read out as '0s' and '1s', a single qubit can exist in any of infinitely many superpositions of the |0⟩ and |1⟩ states.
These values can therefore be visualized in three dimensions, and qubit measurements are probabilistic rather than deterministic. Quantum computing, with the additional layers required to process information, is consequently more complicated than classical computing. This complexity in encoding, processing, and storing information in qubits makes quantum information more error-prone, which in turn reduces the stability of quantum systems and makes the information harder to manipulate than its classical counterpart.
It’s worth remembering that while qubits are primarily used in processing information, the output still needs to be presented in terms of the binary ‘0s’ and ‘1s’ of classical computing.
Cryptography and Encryption
Cryptographic algorithms (symmetric or asymmetric) are among the most basic controls available for protecting sensitive information from unauthorized disclosure through encryption, in many different environments including autonomous systems. Cryptography uses mathematical algorithms—for example within Transport Layer Security (TLS) and Secure Shell (SSH)—to transform information into a form that is not readable by unauthorized individuals, while giving authorized individuals the ability to transform that information back into readable form by using a decryption algorithm.
Classical computing uses traditional encryption techniques that rely on complex mathematical problems to secure communications and resist attacks. These include Rivest-Shamir-Adleman (RSA); Elliptic Curve Cryptography (ECC), encompassing the Elliptic-Curve Diffie-Hellman (ECDH) key-agreement protocol and the Elliptic Curve Digital Signature Algorithm (ECDSA), a variant of the Digital Signature Algorithm (DSA) standardized by the National Institute of Standards and Technology (NIST); and the Advanced Encryption Standard (AES).
Quantum computing threatens these techniques chiefly through Shor's factoring algorithm, which combines modular arithmetic, quantum parallelism, and the quantum Fourier transform to factor large numbers exponentially faster than the best known classical methods. Many traditional cryptographic techniques remain relevant even with quantum technologies, but they need to be strengthened to work effectively in quantum environments.
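To see why fast factoring matters, consider a textbook RSA round trip with deliberately tiny primes (the classic p = 61, q = 53 exercise; illustrative only, never secure at this size). Anyone who can factor the public modulus n can recompute the private key, which is precisely the capability Shor's algorithm provides at scale.

```python
# Toy RSA with tiny textbook primes — NOT secure, for illustration only.
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # Euler's totient: 3120 (needs p and q!)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: 2753 (Python 3.8+)

msg = 65
cipher = pow(msg, e, n)        # encrypt with the public key
plain = pow(cipher, d, n)      # decrypt with the private key
assert plain == msg

# The attack: factoring n reveals phi, and phi reveals d.
# Shor's algorithm does this factoring step in polynomial time.
```

Here brute force could factor 3233 instantly; for a 2048-bit modulus only a large fault-tolerant quantum computer is expected to manage it, which is the threat post-quantum cryptography addresses.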
Security, Privacy, Protection, and Safety
Current centralized architectures, infrastructure, and data repositories lack transparency in data collection, suffer from fragmented processing and storage, demonstrate poor management and governance, and offer only limited safety and protection, minimal data-privacy protocols, and lax security controls.
The large, centralized data repositories holding an individual’s personal, sensitive and private records, suffering from these key limitations, are therefore prone to cyberattacks and regular breaches. Organizations rely on public-key cryptography to secure their online transactions and communication, and any compromise of these systems (including autonomous) has far-reaching consequences.
Transitioning from classical computing techniques to quantum computing algorithms and adopting AI technologies, including AI agents or Agentic AI solutions, is fundamentally transforming the way in which software/application systems are designed, built, implemented, integrated and operated across the enterprise.
However, the transition (e.g., classical to quantum, changes in software development practices, and use of AI agents) has also opened the door to high-profile security breaches and data leaks in which large volumes of confidential and sensitive records have been stolen. These breaches occur because emerging technologies—quantum computing, agentic AI, and multi-agent systems—are also being used to attack traditional encryption algorithms and methods.
Despite the use of cybersecurity practices, currently used encryption protocols remain vulnerable to quantum attacks thereby elevating risk, compromising confidentiality and integrity of sensitive information systems, and reducing data privacy, protection, and security.
The impact of emerging technologies is not limited to encryption algorithms; it also affects data protection and privacy. While data anonymization and pseudonymization techniques, which mask certain pieces of data, can provide some level of safety and protection against classical attacks, they may be insufficient against quantum attacks.
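The distinction can be sketched with keyed pseudonymization. A plain hash of a low-entropy field (a phone number, say) can be reversed by brute-force guessing, whereas a keyed HMAC resists that as long as the key stays secret; neither is information-theoretically secure, which is why such masking may not withstand future quantum-assisted attacks. The field value and key handling below are illustrative assumptions.

```python
import hashlib
import hmac
import secrets

# Keyed pseudonymization: deterministic per key, so records can still
# be joined, but an attacker without the key cannot enumerate inputs.
key = secrets.token_bytes(32)  # in practice: managed in a key vault

def pseudonymize(value: str) -> str:
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

token = pseudonymize("+1-555-0100")
assert pseudonymize("+1-555-0100") == token   # stable mapping
assert pseudonymize("+1-555-0101") != token   # distinct inputs differ
```

Rotating or losing the key breaks linkability on purpose; that trade-off between utility and protection is exactly what a governance framework has to manage.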
Thus, there is an urgent need for post-quantum encryption methods and quantum-resistant cryptography to protect existing systems, as we build and migrate to NextGen autonomous systems.
How to Secure and Protect Data in an Autonomous Future
Data security is fundamental to trusting autonomous systems. The current networking infrastructure has significant security vulnerabilities that have remained unresolved for many years. Quantum computing offers the potential to advance security which is critical for autonomous systems. The development of quantum-resistant cryptography and secure multi-party computation protocols requires significant advances in theoretical computer science and mathematics.
Decentralization
Addressing the challenges that come with ubiquitous use of advanced computing technologies requires a strategic shift away from today's centralized control models—such as the Border Gateway Protocol (BGP), the standard interdomain routing protocol, and the Public Key Infrastructure (PKI), a common security framework. In their place belongs an open security architecture that is more comprehensive, operates independently of external authority, and is decentralized, with governance guardrails embedded to enable speed, reuse, and control.
Decentralization is key to the adoption of quantum computing, AI technology, and cryptography because it improves user trust, security, and transparency. With decentralized quantum computing, decision making and security control are distributed across the enterprise system, allowing the organization to build resistance to censorship, eliminate single points of failure, enable secure and efficient communication, and minimize data manipulation, because no single entity has exclusive control.
A decentralized security framework that modifies the current underlying business processes and architecture and uses encryption, anonymization and tokenization techniques underpinned by standards, offers a viable solution that prioritizes software and system-level optimizations.
Unified Approach to Security
In a quantum environment, classical encryption methods—particularly those based on public-key cryptography—do not fully secure information, even when used within modern digital security frameworks built on cryptographic protocols. Quantum-resistant protocols with quantum key distribution are required to prevent breaches and securely transmit sensitive data. For example, cryptographic techniques like lattice-based cryptography and secure multi-party computation protocols are capable of resisting quantum attacks.
While quantum-resistant protocols offer viable solutions, and techniques like lattice-based cryptography offer many advantages, several challenges remain in implementing these technologies. One of the main challenges is the need for high-performance computing resources to solve lattice problems efficiently.
Another challenge is the lack of standardization and interoperability between different lattice-based cryptographic systems. Overall, further development is needed especially in the areas of data anonymization and tokenization in autonomous systems to ensure these new solutions operate well to secure and protect data.
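To illustrate the lattice approach mentioned above, here is a toy single-bit learning-with-errors (LWE) scheme—the hardness assumption underlying NIST's post-quantum selections such as ML-KEM. The parameters are far too small to be secure and are chosen only so the arithmetic is visible; all names are ours.

```python
import random

# Toy LWE parameters: dimension n, sample count m, modulus q.
# Chosen so the accumulated noise (at most m) stays below q/4,
# guaranteeing correct decryption. Illustrative only — not secure.
n, m, q = 8, 20, 97

def keygen():
    s = [random.randrange(q) for _ in range(n)]           # secret vector
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.choice([-1, 0, 1]) for _ in range(m)]     # small errors
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q
         for i in range(m)]
    return s, (A, b)   # (private key, public key)

def encrypt(pub, bit):
    A, b = pub
    r = [random.randint(0, 1) for _ in range(m)]          # random subset
    u = [sum(A[i][j] * r[i] for i in range(m)) % q for j in range(n)]
    v = (sum(b[i] * r[i] for i in range(m)) + bit * (q // 2)) % q
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - sum(u[j] * s[j] for j in range(n))) % q
    # Noise is small, so d sits near 0 for bit 0 and near q/2 for bit 1.
    return 1 if q // 4 < d < 3 * q // 4 else 0

s, pub = keygen()
for bit in (0, 1):
    assert decrypt(s, encrypt(pub, bit)) == bit
```

Recovering `s` from `(A, b)` requires solving a noisy linear system, a lattice problem believed hard even for quantum computers—unlike the factoring problem behind RSA.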
To future-proof data security, a proactive cybersecurity mindset and a unified, systemic approach are key to detecting and suppressing the propagation of attacks. What is required is a solution that automates workflows; enhances visibility, protection, and compliance; and simplifies security by providing a clear view of data security and overall risk.
Such a solution is seen in the Integrated Decentralized Security Framework (IDSF) which can help organizations identify, assess, and manage data security risks across multiple and distributed environments based on elements like:
Elements of an Integrated Decentralized Security Framework (IDSF)
- Embracing a human-centered, process-oriented approach that both harmonizes and enhances trust.
- Continuously monitoring and improving data security operations.
- Implementing strong encryption, masking, and tokenization to protect data at rest and in motion.
- Establishing a quantum/post-quantum ready foundation to protect data against AI and quantum computer powered threats.
- Ensuring compliance with strict international data protection regulations, privacy laws, and compliance frameworks (GDPR, CCPA, PCI, HIPAA).
- Promoting policy-driven data governance.
- Classifying and categorizing data based on sensitivity and business value, while assessing risks.
- Applying data protection, including encryption, and securing the metadata.
- Building quantum/post-quantum ready controls to counter future AI and quantum-powered threats.
- Exploring and implementing advanced modern cryptographic techniques and algorithms, such as crypto-agility and perfect secrecy, which offer reactive and proactive approaches, respectively, to securing information and data in computer systems.
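Crypto-agility in particular can be sketched as a simple indirection: callers reference a policy label instead of hard-coding an algorithm, so a deprecated primitive can be retired in one place rather than across the whole codebase. The registry contents and names below are illustrative assumptions, not a prescribed design.

```python
import hashlib

# Crypto-agility sketch: code depends on a policy label, and the
# registry maps labels to concrete primitives. Swapping an algorithm
# is a one-line policy change instead of a code audit.
REGISTRY = {
    "legacy": hashlib.sha1,       # scheduled for retirement
    "current": hashlib.sha256,
    "next": hashlib.sha3_256,     # candidate upgrade path
}
POLICY = "current"

def digest(data: bytes, policy: str = POLICY) -> str:
    return REGISTRY[policy](data).hexdigest()

token = digest(b"records")        # uses whatever "current" points at
```

The same pattern extends to signatures and key exchange, which is how organizations expect to phase in post-quantum algorithms alongside classical ones.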
These solutions aim to protect data in a quantum world where AI systems reflect the data on which they are trained, and classical encryption algorithms may no longer be secure. There are several new techniques that are in the developmental phase for securing and protecting data in a quantum world. Adopting a decentralized unified framework pivots away from the centralized models and shifts towards user-centric systems that empower individuals to maintain control over their personal data and take back ownership of information.
A decentralized unified framework extends traditional security, by emphasizing a proactive, comprehensive, and adaptive approach based on core security principles to identify new threats, improve capabilities, and manage emergent risks.
Leveraging Strengths of Quantum and Classical Computing
Quantum computing, with its ability to process vast amounts of information simultaneously, is both process and resource intensive. Qubits in particular are fragile and face significant obstacles in reliability and scalability. Until the technical challenges inherent in this as-yet-nascent technology are resolved, businesses would benefit from a hybrid computing model in which enterprises leverage the strong suits of each platform and split tasks between classical and quantum machines.
Classical computers can handle most mundane tasks/operations through classical algorithms allowing the more complex and highly specialized functions to be delegated to a shared quantum computing infrastructure.
Instead of developing separate security solutions for classical, quantum, and AI-driven systems, given the convergence of all three in this current transition period, it would be beneficial to blend classical, quantum and AI techniques to develop unified approaches to security and protection that leverage the strengths of each computing method while minimizing their weaknesses.
To be prepared to lead in the quantum world and set the stage for a post-quantum era, enterprises must work to bridge the gap to a quantum-enabled future. Over time, businesses should transition to quantum computing as needed for relevant complex functions while continuing to leverage their existing classical computing assets.
Final Thoughts
Data security is in a state of transformation, and the future lies in combining powerful technologies like quantum computers and artificial intelligence (AI) to create value for organizations. The combination of AI and quantum technology involves the convergence of machine learning (ML), quantum algorithms, and quantum computing to create new solutions.
Businesses must decide what proportion of their enterprise relies on classical versus quantum computing, determine how to integrate AI technologies into their functions, and expand their quantum capabilities accordingly, all while prioritizing security, privacy, protection, and safety.
The proposed Integrated Decentralized Security Framework (IDSF) enhancements would provide safer and more secure environments, but further research and development are still required to make autonomous systems resistant to interception and to enhance their security. Furthermore, a unified decentralized ecosystem approach minimizes the risk of data privacy and security losses and allows individuals to retain greater control over their information.
Tools & Platforms
Virginia 911 call center implements AI technology to allow dispatchers to focus on emergency calls – KTVB
Tools & Platforms
In test-obsessed Korea, AI boom arrives in exams, ahead of the technology itself
July 11, 2025
SEOUL – A wave of artificial intelligence certifications has flooded the market in South Korea over the past two years.
But according to government data, most of these tests exist only on paper, and have never been used by a single person.
As of Wednesday, there were 505 privately issued AI-related certifications registered with the Korea Research Institute for Professional Education and Training, a state-funded body under the Prime Minister’s Office.
This is nearly five times the number recorded in 2022, before tools like ChatGPT captured global attention. But more than 90 percent of those certifications had zero test-takers as of late last year, the institute’s own data shows.
Many of the credentials are loosely tied to artificial intelligence in name only. Among recent additions are titles like “AI Brain Fitness Coach,” “AI Art Storybook Author,” and “AI Trainer,” which often have no connection to real AI technology.
KT’s AICE is South Korea’s only nationally accredited AI certification, offering five levels of exams that assess real-world AI understanding and skills, from block coding for elementary students to Python-based modeling for professionals. PHOTO: KT/THE KOREA HERALD
Only one of the 505 AI-related certifications — KT’s AICE exam — has received official recognition from the South Korean government. The rest have been registered by individuals, companies, or private organizations, with no independent oversight or quality control.
In 2024, just 36 of these certifications held any kind of exam. Only two had more than 1,000 people apply. Fourteen had a perfect 100 percent pass rate. And 20 were removed from the registry that same year.
For test organizers, the appeal is often financial. One popular certification that attracted around 500 candidates last year charged up to 150,000 won ($110) per person, including test fees and course materials. The content reportedly consisted of basic instructions on how to use existing tools like ChatGPT or Stable Diffusion. Some issuers even promote these credentials as qualifications to teach AI to students or the general public.
The people signing up tend to be those anxious about keeping up in an AI-driven world. A survey released this week by education firm Eduwill found that among 391 South Koreans in their 20s to 50s, 39.1 percent said they planned to earn an AI certificate to prepare for the digital future. Others (27.6 percent) said they were taking online AI courses or learning how to use automation tools like Notion AI.
Industry insiders warn that most of these certificates hold little value in the job market. A local AI industry official told The Korea Herald that these credentials are often “window dressing” for resumes.
“Most private AI certifications aren’t taken seriously by hiring managers,” he said. “Even for non-technical jobs like communications or marketing, what matters more is whether someone actually understands the AI space. That can’t be faked with a certificate.”
Tools & Platforms
Microsoft ‘Puts People First’ With $4 Billion AI Training
Microsoft is launching a $4 billion initiative to train 20 million people in artificial intelligence skills through a new global program called Elevate. The effort, announced by company President Brad Smith, is part of Microsoft’s commitment to “put people first” as AI becomes more integrated into work and education.
The tech titan described the program as a centralized platform for its technology support, donations, and training across schools, colleges, and nonprofits. Through the Elevate Academy, it plans to deliver AI literacy at scale, including offerings like “Hour of AI” and partnerships with educators and labor unions.
A unified platform for Microsoft’s AI training
Microsoft Elevate consolidates the company’s nonprofit and education initiatives into a single operational framework, replacing both its Philanthropies division and Tech for Social Impact team. It combines funding, cloud infrastructure, and AI tools to expand access to training and technology.
The $4 billion will be allocated over five years through a mix of grants, software, and computing resources for K–12 schools, community colleges, and nonprofit organizations worldwide.
Massive training effort for in-demand AI credentials
As part of its credentialing plan, Microsoft is introducing the Elevate Academy, a program to reach millions of learners in just two years. It will offer structured learning across a spectrum of competencies, from digital basics to advanced technical instruction.
Course content will run through LinkedIn Learning and GitHub, two platforms already used within professional and developer communities.
The academy serves as a centerpiece delivery channel, combining investment and infrastructure with partnerships and events to help learners earn industry-recognized certifications.
National and local partners help execute large-scale rollout
Microsoft is working with education nonprofits, labor groups, and government bodies to scale the rollout.
“Hour of AI,” developed with Code.org, introduces younger students to foundational concepts through short-form instruction. A summer skilling series extends access outside the school year.
Labor unions are also involved in workforce development, including the National Academy for AI Instruction and courses across the building trades. In Germany, Microsoft is partnering with North Rhine-Westphalia to strengthen regional programs.
Aligning training with public and institutional standards
To support policy alignment, Microsoft is working with public agencies to integrate AI skills into national education systems. It has also partnered with the United Nations, the Vatican, and academic institutions to promote responsible use and ethical standards in AI learning.
These collaborations build on Microsoft’s long-standing involvement in digital literacy and public education initiatives, now carried forward under Elevate’s global scope.
Technology with purpose, training with intent
Microsoft maintains that technology should augment human potential rather than replace it. Elevate reflects that view by focusing on skills that amplify judgment, creativity, and contribution.
Work, the company argues, is deeply tied to identity and dignity, a principle it says must guide how artificial intelligence is developed and deployed. Elevate carries that outlook forward, linking digital learning to values about the role of work in people’s lives.
Another way Microsoft is supporting AI training is by giving $12.5 million in funding to the National Academy for AI Instruction, which the American Federation of Teachers is launching this fall.