Like it or not, artificial intelligence (AI) is here, and it's here to stay. What's more, AI will continue to become more advanced and play a central role in shaping our future. Human opinions on AI advancement couldn't be more diverse, ranging from "doomsday" Terminator predictions to AI achievements hailed as astonishing feats, and everything in between. Ideally, machines, free of many of the physical constraints on human intelligence, will replace tasks rather than jobs. In this view, AI augments human performance, helping humans and machines cooperate and freeing people to concentrate on the areas where humans are intrinsically better than machines, such as devising strategy and expressing creativity and empathy, instead of replacing or dominating humans altogether.
As with any fast-growing technology, guidelines on AI are limited, but both AI critics and enthusiasts welcome "smart regulations" for this space. While the notion of regulation is often met with frustration from business, perhaps a better position to take is that of law influencer rather than outlaw.
For example, the European Commission (EC) recently issued its "Draft Ethics guidelines for trustworthy AI", under its Digital Single Market program. This working document constitutes a draft of the AI Ethics Guidelines presented by the EC's High-Level Expert Group on Artificial Intelligence (AI HLEG). The consultation ended on 1 Feb 2019 with over 500 comments received. These comments are currently being analyzed and considered by the AI HLEG for the preparation of a revised version of the Ethics Guidelines that will be delivered to the EC by the beginning of April 2019.
Furthermore, on the occasion of Data Protection Day on 28 Jan 2019, the Consultative Committee of the Council of Europe Convention for the Protection of Individuals with regard to the Processing of Personal Data (Convention 108) published "Guidelines on Artificial Intelligence and Data Protection". The guidelines aim to assist policy makers, AI developers, manufacturers and service providers in ensuring that AI applications do not undermine the right to data protection.
These guidelines also address the new challenges posed by the development of AI. As the Convention's committee stated, "personal data have increasingly become both the source and the target of AI applications," yet such applications are "largely unregulated and often not grounded on fundamental rights." The adoption of a legal framework by the Council of Europe thus aims "to favor the development of technology grounded on these rights" rather than technology "merely driven by market forces or high-tech companies."
The Convention's committee underlines that the protection of human rights, including the right to protection of personal data, should be an essential pre-requisite when developing or adopting AI applications, in particular when they are used in decision-making processes, and be based on the principles of the updated data protection convention, Convention 108+, opened for signature in October 2018. In addition, any innovation in the field of AI should pay close attention to avoiding and mitigating the potential risks of the processing of personal data, and should allow meaningful control by data subjects over the data processing and its effects.
These AI guidelines refer back to issues previously identified in the Guidelines on the Protection of Individuals with regard to the Processing of Personal Data in a World of Big Data, including the necessity "to secure the protection of personal autonomy based on a person's right to control his or her personal data and the processing of such data." The committee adds that "the nature of this right to control should be carefully addressed" in the AI context.
General Data Protection Regulation (GDPR) Recital 71 provides that data subjects who are significantly affected by automated decision-making, including profiling, should have safeguards such as the right to obtain human intervention, to contest the decision and to obtain an explanation of how it was reached. The GDPR heavily penalizes companies that cannot provide an explanation and record of how a decision was made, whether by a human or a computer.
Finally, the UK Information Commissioner's Office (ICO) publication "Big data, artificial intelligence, machine learning and data protection" covers fairness, transparency, purpose limitation, data minimization, accuracy, the rights of individuals, security, accountability and governance. The implications here are not barriers. It's not a case of Big Data "or" data protection, or Big Data "versus" data protection. Privacy is not an end in itself; it is an enabling right. Embedding privacy and data protection into Big Data analytics enables not only societal benefits but also organizational benefits such as creativity, innovation and trust. In short, it enables AI to do all the good things it can do.
It's Up to Us
The direction in which AI progresses will depend on us. Many stakeholders caution us to be careful what we wish for, warning that we could lose control of our fate with disastrous consequences; perhaps we are living in both the most exciting and the most dangerous time in human history. To maximize opportunities and minimize risk, we'll need to continue to learn, engage, educate and influence regulators to be relevant, effective and forward-thinking. Action must be taken now, before it's too late!
In our white paper, "ARTIFICIAL INTELLIGENCE: The End of the World or the Dawn of Limitless Possibility", we explore potential routes in which AI may progress and how to best harness AI's unbridled power. I encourage you to take some time to review this piece, or as always, please feel free to reach out to me directly at Jessica.Santos@kantar.com to discuss healthcare market research and how Kantar can help you improve data quality in your organization.