IEEE Standards Association Announces Joint Specification V1.0 for the Assessment of the Trustworthiness of AI Systems

In a major step forward for artificial intelligence governance, the IEEE Standards Association (IEEE SA) has announced Joint Specification V1.0 for the Assessment of the Trustworthiness of AI Systems. IEEE SA is the global, consensus-building standards development organization (SDO) of IEEE, the world’s largest technical professional organization dedicated to advancing technology for humanity. Based on IEEE CertifAIEd, VDE VDESPEC 90012, and the Positive AI framework, the new specification aims to unify and streamline the evaluation of AI systems across Europe and the world.

A Unified Approach to Trustworthy AI

Recognizing the fragmented landscape of AI trustworthiness frameworks, IEEE, Positive AI, IRT SystemX (Coordinator of Confiance.ai), and VDE came together to create a joint specification that combines the strengths of IEEE CertifAIEd, VDE VDESPEC 90012, and the Positive AI framework. By working together, they have paved the way for responsible AI governance that is intended to comply with global regulatory requirements and frameworks while promoting innovation, competitiveness, and quality.

Alignment with EU AI Act and Ethical Guidelines

Joint Specification V1.0 is designed to follow the recommendations of the 2024 EU AI Act for the design and use of AI models, helping organizations meet regulatory requirements while upholding high ethical standards in AI design and deployment.

Thorough and Nuanced Evaluation

Unlike traditional pass/fail assessments, Joint Specification V1.0 provides a foundation for an AI Trust label, introducing a nuanced grading system across six key principles:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination, and fairness
  • Social and environmental well-being

“This approach allows for a detailed and more complete evaluation of AI systems, embracing a wide range of indicators and addressing the dependencies between the principles,” said Jean-Philippe Faure, European Standardization Partnerships, IEEE SA, and Joint Specification V1.0 project lead.

Global Recognition and Adoption

With the IEEE CertifAIEd AI Ethics Certification Program, IEEE has pioneered AI certification, testing, and training, helping increase the quality of and trust in AI systems. Already, 167 people in 28 countries have become IEEE CertifAIEd Authorized Assessors.

Joint Specification V1.0 is also gaining recognition at national and global levels and is set to become a cornerstone of the operationalization of trustworthy AI. The specification has been contributed to IEEE for standardization under the IEEE P8000 working group, accelerating its adoption and impact.

“MISSION KI, the German national initiative for Artificial Intelligence and Data Economy, has used Joint Specification V1.0 in their development of transparent AI quality and testing standards,” said Ravi Subramaniam, Senior Director, Product, Business Development & Marketing, IEEE SA. “This further underscores the impact of the IEEE CertifAIEd specifications and acknowledges IEEE’s global standards and the continuous partnership and impact that the IEEE Standards Association has in its collaborations within Europe.”

Driving Competition and Quality

As companies and industries seek concrete ways to strengthen their AI practices, Joint Specification V1.0 offers a robust framework for improving competitiveness and helping ensure high-quality products, with benefits for customers, management, and employees alike.

A Vision for the Future

The EU has taken a significant step in advancing trustworthy AI systems with the publication of the AI Act. Joint Specification V1.0 is poised to serve as a pivotal resource in implementing trustworthy AI, offering a foundation for an AI Trust label aligned with the values of the AI Act.

IEEE aims to continue playing a crucial role in this evolving landscape by becoming a leading force in the dissemination of the upcoming label through expansion of the IEEE CertifAIEd program.

All interested parties, including industry leaders and policymakers, are encouraged to join IEEE, Positive AI, IRT SystemX, and VDE in the development of the AI Trust label, a certification program based on Joint Specification V1.0. This collaborative effort aims to ensure that AI systems are developed and deployed responsibly, in alignment with European values and regulatory standards.

Business Wire
