As applications of artificial intelligence (AI) continue to grow, it's only right that policymakers around the globe continue to examine regulation. We're proud to have helped shape the proposed EU Artificial Intelligence Act, working with the Commission and other stakeholders, with the aim of 'provid[ing] AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI'.
What is the proposed legal framework?
At its core, the proposed model is risk-based, meaning the level of regulation imposed reflects the level of risk posed by the AI system in question. There are four designations, illustrated in the sketch after this list:
Unacceptable risk
Systems that are deemed to be a ‘clear threat to the safety, livelihoods and rights of people’. These would be banned outright and include applications such as government social scoring.
High risk
These systems would be subject to strict obligations, including risk assessment and mitigation systems and activity logging. The category covers applications such as transportation, exam scoring, and credit scoring. Notably, this group also includes 'all remote biometric identification systems'.
Limited risk
These AI systems would be subject to specific transparency obligations; for example, AI-powered chatbots would be obliged to make users aware that they are interacting with a machine.
Minimal or no risk
Applications such as AI-powered video games or spam filters would be designated minimal or no risk – they would not be regulated.
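To make the tiering concrete, here is a purely illustrative Python sketch of the four designations and the proposal's own example applications. The `RiskTier` names, the mapping, and the lookup helper are our simplification for illustration, not structures defined anywhere in the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk designations in the proposed EU AI Act."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: risk assessment, mitigation systems, activity logging"
    LIMITED = "specific transparency obligations"
    MINIMAL = "not regulated"

# Illustrative mapping of the proposal's own example applications to tiers.
# The Act classifies by use case, so a real system would be assessed on how
# it is deployed, not just on what kind of system it is.
EXAMPLE_APPLICATIONS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "transportation": RiskTier.HIGH,
    "exam scoring": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "remote biometric identification": RiskTier.HIGH,
    "AI-powered chatbot": RiskTier.LIMITED,
    "AI-powered video game": RiskTier.MINIMAL,
    "spam filter": RiskTier.MINIMAL,
}

def obligations_for(application: str) -> str:
    """Look up the regulatory consequence for one of the example applications."""
    tier = EXAMPLE_APPLICATIONS[application]  # KeyError if not an example above
    return f"{application}: {tier.name} risk -> {tier.value}"

if __name__ == "__main__":
    for app in ("credit scoring", "AI-powered chatbot", "spam filter"):
        print(obligations_for(app))
```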
We welcome this approach – however, its successful implementation relies on policymakers having a clear understanding of how AI technology works across different use cases, both existing and future-facing. Otherwise, there is a real danger that rules may inadvertently become disproportionate, stifle innovation, and even foreclose important markets.
We believe there are some clear areas where further development is needed – in particular, how the framework applies in practice to specific technologies. For example, the current draft may consign relatively low-risk technologies, used exclusively to combat fraud, to the high-risk category due to the definition of 'remote biometric identification systems'.
This would make consumer protection services more complex and costly to provide, limit their effectiveness, or even drive them out of the market.
What’s Onfido’s focus?
To prevent this from happening, our focus is on consulting with policymakers – explaining how fraud prevention technologies such as Onfido's operate, including the proportionate risk to consumers if left unregulated, and the effectiveness of the protection they offer.
We're pleased to see leading Members of the European Parliament introduce some of the recent amendments we have championed, particularly:
- A recognition that solutions designed to combat fraud and protect consumers against fraudulent activities should not be considered high risk.
- An understanding of the differences between biometric identification, verification and authentication – so systems used to consensually build trust between businesses and customers, for the purposes of fraud prevention and KYC compliance, are not treated the same as systems that have the potential to be used for mass surveillance (the distinction is sketched below).
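To illustrate that distinction, here is a minimal, hypothetical Python sketch: biometric verification compares a captured sample against the single template of a claimed identity (a 1:1 check), while identification searches a whole database for a match (a 1:N search), which is what gives it surveillance potential. All names, data structures, and the similarity threshold are invented for illustration and do not describe Onfido's systems or the Act's definitions.

```python
import numpy as np

THRESHOLD = 0.8  # hypothetical similarity cut-off for declaring a match

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two biometric feature vectors (embeddings)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(sample: np.ndarray, claimed_template: np.ndarray) -> bool:
    """1:1 verification: does this sample match the one identity the user
    has consensually claimed (e.g. during onboarding or a KYC check)?"""
    return similarity(sample, claimed_template) >= THRESHOLD

def identify(sample: np.ndarray, database: dict[str, np.ndarray]) -> str | None:
    """1:N identification: search every enrolled template for a match.
    The same maths, applied to a whole population without consent,
    is what enables remote mass surveillance."""
    best_id, best_score = None, THRESHOLD
    for person_id, template in database.items():
        score = similarity(sample, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

The point of the sketch is scope: verification only ever touches the template the user has put forward, while identification must scan an entire database, which is why treating the two identically under the high-risk category would be disproportionate.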
What happens next?
There's still a long way to go in the legislative process, and rightly so: it will take time to draft, discuss and ratify this legislation, especially as it is the first framework of its kind. The rest of the world is watching what happens in Brussels, and the outcome will certainly influence how other policymakers approach AI regulation in the future.
At Onfido, we will continue to work with policymakers to help educate them on the differences between biometric identification, verification and authentication, so the framework builds trust and confidence for consumers and businesses.