EU on cusp of adopting AI law

The European Union (EU) has moved significantly closer to adopting laws regulating artificial intelligence.

The European Parliament has approved rules intended to set a worldwide benchmark for the technology, covering a broad spectrum from automated medical diagnoses and certain types of drones to AI-generated videos known as “deepfakes” and chatbots such as ChatGPT.

Members of the European Parliament (MEPs) will now negotiate the details with EU countries before the draft rules – the AI Act – become law.

“AI poses numerous social, ethical, and economic questions. However, now is not the moment to press any ‘pause button’. On the contrary, it’s about acting swiftly and taking responsibility,” said Thierry Breton, the European commissioner for the internal market.

The final vote saw 499 in favour, 28 against, and 93 abstentions.

The proposed law also bans emotion recognition in workplaces and schools; the technology is used in parts of China, for instance, to detect tired lorry drivers.

European Parliament President Roberta Metsola described it as “legislation that will undoubtedly set the global standard for years to come”. She said the EU was now in a position to set the tone worldwide, and that “a new era of scrutiny” had begun.

Brando Benifei, a co-rapporteur of the Parliament’s AI committee, which advanced the legislation to the voting stage, said that regarding facial recognition, the law would provide “a clear safeguard to prevent any risk of mass surveillance”.

His co-rapporteur, Dragos Tudorache, said that if the legislation had already been implemented, the French government wouldn’t have been permitted to pass a law this year allowing live facial recognition for crowd surveillance at the 2024 Olympics.

To address the risk of copyright infringement, the legislation will require developers of AI chatbots to publish all the works of scientists, musicians, illustrators, photographers, and journalists used in training. They will also need to demonstrate that everything done to train their models was lawful.

Should they fail to do so, developers may have to delete applications immediately or face fines of up to seven per cent of their revenue. “There are plenty of sharp teeth in there,” Mr Tudorache said.
