European technical standard-setting process open to influence by industry players, experts warn

AI experts are calling for a ‘twin track’ approach to producing technical standards to support digital legislation, enabling the European Commission to preside over the drafting process and reducing potential exploitation by powerful industry players.

The current approach to the production of technical standards by the European Standards Organisations works well when aimed at ensuring interoperability between complementary products and services and providing minimum levels of health and safety protection. However, experts involved in the ongoing standard-setting process for AI standards have warned that it suffers from serious shortcomings, particularly when it comes to fundamental rights protection.

Professor Karen Yeung, Interdisciplinary Professorial Fellow in Law, Ethics and Informatics at the University of Birmingham, and Patricia Shaw, Chief Executive at Beyond Reach Consulting, have argued that the standard-setting process currently relied upon by the EU to set technical standards for the EU AI Act lacks adequate public scrutiny, accountability and opportunities for participation.

As a result, well-resourced industry players can influence the content of technical standards in ways that serve their own commercial interests, weakening the protection for affected stakeholders. This could widen the opportunities for these products and services to be designed in ways that expose individuals and groups to greater risks of interference with their fundamental rights.

Yeung and Shaw argue that standards for digital products and services that attract significant political disagreement should instead take the form of ‘common specifications’, drafted under a process presided over by the European Commission.

Professor Karen Yeung said: “The technical standards currently being produced for high-risk AI systems under the EU AI Act by the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) Joint Technical Committee 21 (JTC 21) are a good example of the kind of technical standards for which a more open, participatory and publicly accountable process is needed.

“AI is considered to be the ‘next frontier’ for widespread technological transformation and commercial success. But AI brings novel challenges for lawmakers currently grappling with how to guard against its potential threats. These threats are not confined to health and safety risks; they extend to democratic values, including fundamental rights such as the rights to privacy, freedom of information and non-discrimination.”

While AI providers are not legally required to follow technical standards, the incentives to do so ‘voluntarily’ are, for most firms, impossible to ignore, thanks to the legal presumption of compliance conferred on those who comply with a harmonised European standard, which JTC 21 is currently attempting to draft. The European Commission’s deadline for the delivery of standards by JTC 21 is August this year, but it is now known that this deadline will not be met.

Professor Yeung added: “Software-based applications and services that are informed by the analysis of data that affect individuals, groups, societal rights and interests, such as AI, need to have standards agreed through a process of open, transparent public dialogue and deliberation, so that businesses, including tech developers, know where they stand, and to ensure that the public are properly protected.

“Now that it is known that these standards will not be ready by the August deadline, it offers the EU Commission an opportunity to step in and issue common specifications established through a more open, transparent and participatory process, which is exactly what our recommendations call for.”

In the report, the experts recommend a ‘twin track’ approach to the development of harmonised European standards:

  • Track 1: proceed by standards produced by the European Standards Organisations (CEN, CENELEC and ETSI) on request from the European Commission if the aim of the proposed standard is predominantly to:
    • facilitate interoperability between complementary products and services; and/or
    • provide minimum levels of health and safety protection.
  • Track 2: proceed by common specifications presided over by the European Commission if the product, service or production process for which standards are required has implications for multiple stakeholders in a variety of potential contexts. This should include contexts in which stakeholders' interests come into direct conflict with those of the provider of the product, service or production process, and/or in which the need for intervention to safeguard stakeholders' legitimate interests attracts significant political debate. The resulting standards in this track would be better described as ‘socio-technical’ rather than ‘merely’ technical standards.

The report also calls for further analysis to identify, in concrete terms, the institutional and procedural mechanisms, and the applicable substantive eligibility criteria, for determining whether standards drafting should proceed via Track 2 rather than Track 1.

Professor Yeung concluded: “AI and its associated fields of technology development are growing and evolving rapidly, and the EU and other trading blocs and national governments must create standards to ensure that everyone can benefit from the technology, and that persons adversely affected by its use are protected from AI-generated interference with their fundamental rights, as well as from damage to their health, safety and the environment.”

Notes for editors

  • For media inquiries, please contact Ellie Hail, Communications Officer, University of Birmingham on +44 (0)7966 311 409. Out-of-hours, please call +44 (0) 121 414 2772.

  • The University of Birmingham is ranked amongst the world’s top 100 institutions. Its work brings people from across the world to Birmingham, including researchers, teachers and more than 8,000 international students from over 150 countries.