
AI, gen AI and the Agentic AI gold rush: International Standardisation to the Rescue
Unlocking the AI Gold Rush: International Standardization’s Crucial Role in Securing Agentic AI
(This article was generated with AI and it’s based on a AI-generated transcription of a real talk on stage. While we strive for accuracy, we encourage readers to verify important information.)
Mr. Arnaud Taddei, Global Security Strategist at Broadcom and Chair of ITU-T Study Group 17 (SG17), addressed the “gold rush” of AI, Generative AI (Gen AI), and Agentic AI. He identified a critical challenge: the lack of interoperability. International standardization is essential to rescue this situation and ensure a secure, functional future for AI.
Drawing on his background at CERN and as SG17 chair for trust and security (responsible for standards like X.509, vital for HTTPS), Mr. Taddei highlighted foundational frameworks. His role at Broadcom involves advising CXOs on security strategy, providing practical insights into global security challenges.
Digital transformation has expanded the attack surface, giving cyber attackers an advantage. The WannaCry incident warned of global paralysis. COVID-19 accelerated digitalization, empowering criminals. Security is poor, with global financial harm escalating rapidly: $6 trillion in 2021, $10 trillion in 2025, and $20 trillion by 2030.
AI has evolved from machine learning to Generative AI, introducing deep fakes and automated code generation. The most disruptive phase is Multi-Agentic AI, an “unprecedented storm.” Here, AI agents collaborate, exchanging Large Language Models (LLMs) and code, enabling adaptive, automated attacks at immense scale.
Securing AI presents complex challenges: prompt injection, data leakage, hallucination, and PII issues. For Agentic AI, defense needs robust digital identity management and a trust control plane at the protocol level. Many fundamental components for this defense are currently missing, leaving systems vulnerable.
The “elephant in the room” for Agentic AI is non-human digital identity. While human identity management has progressed, the framework for non-human entities is largely undeveloped. A governance body is urgently needed to establish trust, verify agents, and detect compromised or fake AI entities.
Standardization is crucial to remove friction and prevent billions of dollars wasted due to lack of interoperability among major tech players. Mr. Taddei stressed that early standardization at the research level is imperative for Agentic AI, despite the risk of stifling innovation, given the potential for massive financial loss.
ITU-T SG17 is actively addressing these challenges. A workshop identified the need for an “OSI model” for agents, attracting significant participation from global tech giants. SG17 has established new groups for AI Security and digital identity, with 40 standards under development and three upcoming workshops focusing on trustable identities and trust management.
