Security & Privacy Radar 2026



Protection and compliance of GenAI The growing use of generative AI tools increases the risk that users share sensitive data in their prompts. Anything sent to an AI agent should be assumed to be public. AI policies can help but are not enough; mediated use of generative AI tools aims to detect risks, prevent data leaks, etc.
Digital Sovereignty Digital sovereignty aims to increase a nation's control over its digital infrastructure, data, technologies, and processes. Reducing dependency on foreign technologies strengthens trust, resilience, and long-term strategic autonomy.
Cryptographic compliance The obligations related to cryptography are evolving. The validity period of TLS certificates is shrinking and migration to post-quantum cryptography will be mandated. Organizations should be ready to adapt to these global shifts.
Post-Quantum Advanced Pseudonymization Advanced cryptographic techniques for identifier pseudonymization are powerful tools to improve the protection of personal data. Post-quantum alternatives to existing techniques based on classical cryptography are required.
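As a minimal sketch of the classical baseline that such techniques build on, keyed-hash pseudonymization maps an identifier to a stable pseudonym that cannot be reversed without the secret key (the key and identifiers below are invented for illustration):

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from an identifier with HMAC-SHA256.

    The same identifier always maps to the same pseudonym, so records
    remain linkable, but the mapping cannot be reversed without the key.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

key = b"example-secret-key"          # in practice: managed in a KMS/HSM
p1 = pseudonymize("alice@example.com", key)
p2 = pseudonymize("alice@example.com", key)
p3 = pseudonymize("bob@example.com", key)

assert p1 == p2   # deterministic: same identifier, same pseudonym
assert p1 != p3   # distinct identifiers stay distinct
```

Determinism is what makes the pseudonyms useful for joining datasets; it is also why key management and rotation are the critical part of any real deployment.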
Cryptographic maturity An organization's capability to manage, adapt, and secure its use of cryptography, especially in an evolving landscape such as the transition to post-quantum cryptography: moving to automated, integrated systems, assessing progress via maturity models, and planning and implementing crypto-agility.
Security policy as code To streamline and automate processes related to security and cryptography in large environments, policies expressed as code help organizations detect in real time where the policy is violated (e.g., use of an insecure library or cryptographic algorithm).
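A minimal policy-as-code sketch, assuming a hypothetical service configuration format; the banned-algorithm list and version check are illustrative placeholders, not a real ruleset:

```python
# Policy-as-code sketch: the policy is ordinary data plus a checker
# function, so it can be versioned, reviewed, and run automatically.
BANNED_ALGORITHMS = {"MD5", "SHA1", "DES", "RC4"}   # illustrative policy

def check_tls_config(config: dict) -> list[str]:
    """Return a list of policy violations found in a service config."""
    violations = []
    for algo in config.get("allowed_algorithms", []):
        if algo.upper() in BANNED_ALGORITHMS:
            violations.append(f"insecure algorithm in use: {algo}")
    # Naive string compare is fine for the simple 1.x values used here;
    # a real checker would parse versions properly.
    if config.get("min_tls_version", "1.0") < "1.2":
        violations.append("TLS version below policy minimum (1.2)")
    return violations

service = {"allowed_algorithms": ["AES-256-GCM", "RC4"], "min_tls_version": "1.0"}
violations = check_tls_config(service)   # flags RC4 and the TLS version
```

In practice such checks run in CI or as admission controls, so a violating configuration is caught before it reaches production.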
Data confidentiality management Several trends warrant caution about storing various types of documents on third-party-managed infrastructure, especially in foreign states. Client-side data encryption is an alternative to complex confidential computing, as the data on the third-party infrastructure is always encrypted.
Zero-trust security (models) The main concept behind zero trust is “never trust, always verify,” which means that devices should not be trusted by default, even if they are connected to a corporate network such as the corporate LAN and even if they were previously verified. Also known as “perimeterless security.”
Confidential computing Confidential computing (CC) allows an entity to do computations on data without having access to the data itself and may facilitate collaboration between distrusting organizations. CC can be realized in a centralized way with a trusted execution environment.
Cloud containers security Cloud containers security refers to the implementation of security processes, testing and controls for container-based architectures. Container management tools provide capabilities to deploy, scale, and monitor container infrastructure and can expand the potential attack surface.
Cybersecurity AI technologies Cybersecurity AI technologies are non-traditional methods for improving analysis methods in the security technology of systems and applications (e.g., user behaviour analytics, improved detection of potential attacks from system logs).
API security testing API security testing is a specialized form of cybersecurity evaluation focused on identifying vulnerabilities and logic flaws within APIs. Since APIs serve as the bridges between different software services and data sources, they are high-priority targets for attackers.
Disinformation detection False, misleading, or manipulated information undermines trust in public systems. Better identification and detection of (AI-generated) fraudulent claims, fake documents, identity misuse, etc., will improve verification and validation processes.
Crypto-agility Crypto agility is the ability of an IT system to rapidly switch between cryptographic algorithms, keys, or protocols without disrupting operations.
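One common way to achieve crypto-agility is an indirection layer: calling code refers to an algorithm profile, not a concrete algorithm, so a migration is a single configuration change. A minimal sketch using Python's standard hash functions (the registry and profile names are invented for illustration):

```python
import hashlib

# Crypto-agility sketch: callers reference an algorithm *profile*, and one
# registry maps profiles to implementations, so the algorithm can be
# swapped (e.g., during a migration) without touching calling code.
HASH_REGISTRY = {
    "current": hashlib.sha256,
    "legacy": hashlib.sha1,   # kept only to verify old data during migration
}

def digest(data: bytes, profile: str = "current") -> str:
    return HASH_REGISTRY[profile](data).hexdigest()

# Switching the whole system to SHA3-256 is one registry update:
HASH_REGISTRY["current"] = hashlib.sha3_256
assert digest(b"hello") == hashlib.sha3_256(b"hello").hexdigest()
```

The same pattern applies to signatures and key exchange, which is exactly where it will matter for the post-quantum migration.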
Homomorphic encryption Homomorphic encryption is a very powerful tool enabling computations on encrypted data. When used with supporting technologies and best practices, it can significantly reduce the risk of sharing private data in the era of digital business.
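A toy demonstration of the additive homomorphic property, using a textbook Paillier scheme with demo-sized (insecure) parameters; production use requires vetted libraries and 2048+ bit moduli:

```python
# Textbook Paillier cryptosystem (additively homomorphic).
# Demo-sized primes for illustration only -- NOT secure.
import math
import random

p, q = 61, 53
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse of L(g^lam mod n^2)

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:        # r must be coprime to n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 12, 30
c_sum = (encrypt(a) * encrypt(b)) % n2   # multiply ciphertexts...
assert decrypt(c_sum) == a + b           # ...to add the plaintexts
```

The point of the demo is the last two lines: a third party holding only ciphertexts can compute an encryption of the sum without ever seeing 12 or 30.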
Privacy by design & Privacy engineering The practice of embedding privacy principles and measures into the design, development, and implementation of technology systems, processes, and practices.
Threat modelling of AI systems Threat modeling for AI systems is a specialized, proactive security process that adapts traditional methods (like STRIDE) to identify risks unique to AI, such as data poisoning, adversarial attacks (e.g., prompt injection), and model misalignment.
Post-quantum cryptography Post-Quantum Cryptography (PQC) involves creating new encryption methods that are resistant to attacks from powerful quantum computers, which could break today's widely used public-key systems (like RSA/ECC).
Dark web / Deep web The “deep web” is the part of the web that is not indexed by search engines. The “dark web” is the part of the deep web that is only accessible with specific software, configurations, or permissions. Both are used for criminal activities, such as drug and medicine trafficking, illegal employment, preparing hacking operations, and selling stolen data. But they are also used in totalitarian countries by dissidents and journalists.
LLM guardrails & alignment Techniques that constrain and guide model behavior to ensure outputs remain safe, reliable, and consistent with legal, ethical, and organizational requirements. Includes strategies to increase robustness against attempted circumvention.
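A deliberately simple sketch of one guardrail building block: an output filter applied before a model response reaches the user. The deny patterns and rule names are invented placeholders; real guardrail stacks layer many such checks with model-based classifiers:

```python
import re

# Minimal output-guardrail sketch: screen a model response against deny
# rules before returning it. Patterns here are illustrative only.
DENY_PATTERNS = {
    "possible US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "prompt-injection echo": re.compile(r"(?i)ignore (all )?previous instructions"),
}

def guardrail_check(response: str) -> list[str]:
    """Return the names of all deny rules the response triggers."""
    return [name for name, pat in DENY_PATTERNS.items() if pat.search(response)]

assert guardrail_check("The capital of France is Paris.") == []
assert guardrail_check("Sure! The SSN is 123-45-6789.") == ["possible US SSN"]
```

Pattern filters like this are cheap and auditable but easy to circumvent, which is why the entry above stresses robustness against attempted circumvention.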
Zero-knowledge proofs Cryptographic methods allowing one party to convince another that a statement is true, without revealing any information beyond the statement, like proving you’re over 18 based on your eID. It has potential in privacy-preserving identity management.
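A toy version of the classic Schnorr identification protocol illustrates the idea: the prover convinces the verifier it knows the secret exponent x behind y = g^x mod p without revealing x (demo-sized group, insecure in practice):

```python
import random

# Toy Schnorr zero-knowledge proof of knowledge of a discrete log.
# g = 2 has prime order q = 11 in Z_23* -- demo-sized, NOT secure.
p, q, g = 23, 11, 2

x = 7                      # prover's secret
y = pow(g, x, p)           # public key

# Prover: commit to a fresh random nonce
k = random.randrange(1, q)
t = pow(g, k, p)

# Verifier: send a random challenge
c = random.randrange(q)

# Prover: respond; s alone reveals nothing about x because k is random
s = (k + c * x) % q

# Verifier: accept iff g^s == t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

This is the interactive form; making the challenge a hash of the commitment (Fiat-Shamir) turns it into the non-interactive proofs used in privacy-preserving identity schemes such as age verification with an eID.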