Artificial intelligence (AI) and machine learning are changing the way products, systems and organisations operate across many sectors, from healthcare and robotics to marketing and data analytics.
Although it provides an array of benefits, AI also introduces data privacy risks. Simply put, AI is, prima facie, at odds with principles of data protection.
Multi-jurisdictional regulations such as the GDPR, and regional laws such as the UK’s Data Protection Act 2018 (DPA 2018), govern the processing of personal data. Impact-assessing an AI system against these laws surfaces familiar privacy themes: personal data must be processed lawfully, fairly and transparently by the AI system, and the privacy risks it poses to data subjects must be mitigated.
AI raises privacy concerns because it often maximises the processing of personal data in order to learn. This can exceed the legal standard that personal data be ‘adequate, relevant and limited to what is necessary’ for the purposes of the processing.
The science-fiction scenario of machines making decisions on their own that adversely affect people is here. Regrettably, profiling to predict behaviour, and actions based on those predictions that entrench bias, have followed.
The danger is that the data set or algorithm used to train the machine may be inherently discriminatory. The UK’s Equality Act 2010 applies across industry sectors and lists protected characteristics on the basis of which an AI system must not, by extension, discriminate. These include pregnancy and maternity, gender reassignment and age, alongside GDPR ‘special category’ data such as race and religion. Where an algorithm is discriminatory, it is reasonably foreseeable that the automated processing that follows will be biased, resulting in prejudice. If such an act or omission is careless, and injury to the data subject is a likely result, then a claim under the law of tort may also be a consideration in addition to penalties and/or proceedings under data privacy law.
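One practical safeguard is to test a model’s decisions for disparities across protected groups before release. The sketch below computes a demographic parity gap in Python; the column names ("age_band", "approved") and the toy data are hypothetical illustrations, and which fairness metric is appropriate will depend on the context.

```python
# Bias-check sketch: the demographic parity gap is the spread between
# the highest and lowest favourable-outcome rates across the groups
# of a protected characteristic. Column names are hypothetical.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, protected: str, outcome: str) -> float:
    rates = df.groupby(protected)[outcome].mean()
    return float(rates.max() - rates.min())

decisions = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "approved": [1, 1, 1, 0, 0, 0],
})
gap = demographic_parity_gap(decisions, "age_band", "approved")
print(f"demographic parity gap: {gap:.2f}")
# 1.00 here: every younger applicant approved, no older applicant approved.
```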
AI system designers would do well to ensure that personal data is processed in line with the reasonable expectations of the data subject and respects their legitimate interests. Beyond that, there must be transparency about the metrics the system uses, so that the automated decision it reaches can be clearly explained to the data subject.
Under the GDPR and the DPA 2018, the data subject’s rights of information, access and objection, including the right not to be subject to a solely automated decision, leave AI system providers with a high-priority privacy checklist to adhere to.
As AI and machine learning become increasingly commonplace, data privacy will be key to building trust and confidence with consumers. The sizeable fines permitted under both the GDPR and the DPA 2018 should serve as a warning to AI solution providers that the rights of data subjects must be protected.
It is recommended that organisations that design AI into their products:
- Minimise the amount of personal data used during development, e.g. by processing synthetic data at the training stage (see the first sketch after this list).
- Ensure that algorithms and designs are available for external audit.
- Monitor and impact-assess the AI system both at the outset and on a continuous basis (see the monitoring sketch below).
- Explain how the AI system reached a decision (see the explanation sketch below).
- Embed ethics by default and design into the solution.
- Ensure that data subjects are aware that a decision has been wholly automated.
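On minimisation, one approach is to compute summary statistics from the real records once and then train only on samples drawn from a generative model of those statistics. This is a minimal sketch in Python with NumPy; the features and the independent-Gaussian generator are illustrative assumptions, and a production system would need a formally privacy-preserving generator.

```python
# Training-stage data minimisation sketch: fit simple per-feature
# Gaussians to the real records, then train only on synthetic samples.
# Features are illustrative; independent Gaussians ignore correlations
# and give no formal privacy guarantee on their own.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for real personal data: age and income per record.
real_data = rng.normal(loc=[40.0, 52000.0], scale=[10.0, 15000.0], size=(500, 2))

# Fit the generative model: just a mean and standard deviation per feature.
mu, sigma = real_data.mean(axis=0), real_data.std(axis=0)

# Draw synthetic records; downstream training never sees the originals.
synthetic = rng.normal(loc=mu, scale=sigma, size=(500, 2))
print("first synthetic record:", synthetic[0])
```

The point of the design is that, once the summary statistics have been computed, the original records can be set aside for the rest of development.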
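On continuous monitoring, a simple starting point is to compare the distribution of live inputs against the baseline captured at the initial impact assessment and alert when they diverge. A hedged sketch, with an illustrative feature and threshold:

```python
# Continuous-monitoring sketch: flag drift when the live input mean
# moves too far from the deployment baseline. The feature and the
# threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
baseline_ages = rng.normal(40, 10, size=1000)  # captured at deployment
live_ages = rng.normal(48, 10, size=1000)      # this month's traffic

def mean_shift_in_sds(baseline: np.ndarray, live: np.ndarray) -> float:
    """Shift of the live mean from the baseline mean, in baseline SDs."""
    return abs(live.mean() - baseline.mean()) / baseline.std()

shift = mean_shift_in_sds(baseline_ages, live_ages)
if shift > 0.5:  # illustrative threshold; tune per feature and per risk
    print(f"Drift alert: mean moved {shift:.2f} SDs; re-run the impact assessment")
```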
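On explaining decisions, choosing a transparent model class makes the metrics behind each decision straightforward to surface. A minimal sketch, assuming scikit-learn is available and using made-up credit-style feature names, showing a per-decision breakdown from a logistic regression:

```python
# Per-decision explanation sketch for a linear model: each feature's
# contribution is its coefficient times the applicant's value.
# Data and feature names are hypothetical; scikit-learn is assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "years_at_address", "open_accounts"]
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = X[0]
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {value:+.2f}")
print("decision:", "approved" if model.predict([applicant])[0] == 1 else "declined")
```

A ranked list of contributions like this is one way to give the data subject meaningful information about the logic involved in an automated decision.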
Compliance with data protection legislation is vital to reducing the risk of data breaches and to meeting the expectations of customers, regulators and the public alike.
For more information on how Southwood can help your business to comply with data privacy legislation and associated services, please contact [email protected]
Written by James Okoro, CIPM, FIP, CIPP/E, LL.B, Barrister – Director, Southwood Management Solutions Limited