
Time's running out on AI standardisation process, Dutch watchdog warns

The pace needs to pick up on the process of standardisation, the watchdog warns, as the provisions of the EU’s AI Act will soon start to apply.

The process to set up standards for artificial intelligence systems and products under the AI Act will need to be faster, a senior official at the Dutch privacy watchdog Autoriteit Persoonsgegevens (AP) has told Euronews.

“The standards are a way to create certainty for companies, and for them to demonstrate compliance. There is still a lot of work to be done before those standards are ready. And of course time is starting to run out,” said Sven Stevenson, who is the director of coordination and supervision on algorithms at the agency.

“Standardization processes normally take many years. We certainly think that it needs to be stepped up,” he added.

The European Commission asked the standardisation organisations CEN-CENELEC and ETSI in May last year to prepare the underlying standards for the industry, and this process is still ongoing.

The AI Act - the world’s first set of rules aimed at regulating machine learning tools, including virtual assistants and large language models such as ChatGPT - entered into force in August, but the provisions will start to apply gradually. For example, rules for providers of general-purpose AI (GPAI) models will become effective in August next year.

Clearview AI

The AP – which is also the data protection authority overseeing the General Data Protection Regulation (GDPR) – will likely have the shared competence to check companies’ compliance with the AI Act with other agencies including the RDI, the Dutch regulator in charge of digital infrastructure. So far, the AP has some 20 people working on AI. 

All EU member states have until August next year to appoint their regulator in charge of AI, and in most EU countries national data protection authorities seem to be a likely fit. 

In its role as data regulator, the AP has already addressed situations in which companies' AI tools were found in breach of the GDPR. For example, in September, it fined US facial recognition company Clearview AI €30.5 million for building an illegal database with photos, unique biometric codes and other information linked to Europeans.

In similar future cases, the AI Act would be complementary to the GDPR, since the latter is first and foremost about processing personal data. “The AI Act would apply in the sense that it’s about product safety. If we prohibit this in the Netherlands, it will need to be consistent between the member states,” Stevenson said.

Sandbox used in the Netherlands

In Brussels, the Commission has set up a so-called AI Pact, which helps businesses get ready for the incoming AI Act through workshops and joint commitments. On a national level, the AP is also organising a sandbox and pilot project with the RDI and the Economic Affairs Ministry to help companies familiarise themselves with the rules.

“For the upcoming sandbox, which will be up and running in 2026, we want to open up to those AI systems that will have a broader impact, and that will help companies with similar ideas. We want to create clarity for them on how to work on this in line with the AI Act,” Stevenson said.

Besides this, the Dutch government has published a public algorithm register since December 2022. The administration wants algorithms used by the government to be legally checked for discrimination and arbitrariness, in a bid to ensure transparency and make the outcomes of algorithms more explicable.
