Coexisting safely with artificial intelligence

Dr. Jessica Santos takes a deeper look at the future of AI.
Dr. Jessica Santos

Senior Director, Global Compliance and Quality, UK

Artificial Intelligence (AI) is all around us! We talk to Alexa-like devices, increasingly rely on automation, and will soon be driven around by driverless cars. In healthcare, AI is helping to discover new medicines and delivering a host of patient-centered technologies, such as biometric monitors, remote physician consultations and adherence-assistance applications. Most importantly, the possibilities for continued AI innovation are limitless.

However, as machines become increasingly "intelligent" and better than humans at designing even smarter machines, the key question is: what will this mean for mankind? And, critically, what are we doing to ensure a safe and worthwhile coexistence with such machines?


In healthcare market research, the inability to explain decisions made by AI programs is a major problem for data quality. This inability to understand how AI does what it does also prevents AI from being deployed further in areas such as law, healthcare and enterprises that handle sensitive customer data. Understanding how data is handled, how an AI system has reached a certain decision, and ultimately who is accountable for it are major unsolved challenges.

Furthermore, AI trained on wrong or unfiltered data can certainly make bad decisions. Worse than that, current deep learning systems can sometimes give us confidently wrong answers while providing limited insight into why they reached those decisions. This is what concerns me most as a healthcare market researcher. It's okay to be wrong, but it's not okay to be confidently wrong. The key to solving this dilemma is how we deal with uncertainty: the uncertainty of messy and missing data, and the uncertainty of predicting what might happen next. Uncertainty cannot be wished away; we debate it endlessly, and ignoring it does not make it disappear. The entity that ultimately makes decisions under uncertainty will be held most accountable, but who will that be?


To address this, we must ask who is telling AI its narratives. Whose stories, and which stories, will inform how AI interacts with the world? Which novels are being chosen to "teach" AI morality? What kind of writers are being enlisted to script AI–human interaction? If we can create more diverse literary and cinematic AI narratives, this can enhance the research and improve the language and data that feed into actual AI systems. Paying closer attention to what stories are doing and how they do it does not destroy their power; it helps us understand and appreciate that power even more. For example, imagine we want AI to handle resource-allocation decisions in our health system. It might accomplish this more fairly and efficiently than humans, with immense benefits for patients and taxpayers. To succeed, however, we would need to specify its goals correctly.


The biggest promise of AI is self-learning. But how can we be sure that AI will learn only what is good and sensible? Self-learning also means learning bias, selfish demands, unending desires, and unhappiness. A society driven by consumerism, celebrity worship, video games and social media gossip, and indifferent to massive social problems, creates human bias that contaminates AI systems. How do we ensure that AI learns only the good from us, instead of everything?

There are two big problems with this utopian vision. One is how we get the machines started on this journey; the other is what it would mean to reach the destination. The "getting started" problem is that we need to tell the machines what they're looking for with sufficient clarity and precision that we can be confident they will find it, whatever "it" actually turns out to be. This is a daunting challenge, given that we are confused and conflicted about the ideals ourselves, and different communities may hold different views. The "destination" problem is that, in putting ourselves in the hands of these moral guides and gatekeepers, we might be sacrificing our own autonomy, an important part of what makes us human.

Human Exceptionalism

Maybe what we are wishing for is privacy, security, safety, transparency, reliability and, ultimately, trust in AI development, along with the convenience and improved quality of life it can bring us. If we are to give general intelligence to machines, we will need to give them moral authority too. That means a radical end to human exceptionalism.

In our white paper, "ARTIFICIAL INTELLIGENCE: The End of the World or the Dawn of Limitless Possibility", we explore the potential routes along which AI may progress and how best to harness AI's unbridled power.

