The age of the driverless bus is coming – and we can't let developers take sole control
The Guardian, Friday 10 November 2017

It’s a bit like buses. You wait for one new technology to come along and then three arrive, presenting a range of exciting journeys and destinations, full of promises and possibilities. With rapid developments in genomics; in data and computer science; in neuroscience; and in the combinations that their convergence makes possible, it is easy to feel simultaneously confused, excited and anxious. And at the centre of it all, supposedly orchestrating our future – driving the driverless bus, you might say – we have artificial intelligence (AI).

Moving quickly in this area is Google’s DeepMind, with its multi-million dollar AI initiative, but it is not alone: there is also great interest from academia and huge investment from other parts of industry.

The potential benefits of these developments are becoming clear, and, in principle at least, we might welcome and work towards them. There are likely to be applications in many sectors – in service and manufacturing industries, in leisure and communications, in education, and, of course, in healthcare. Why would we not want to employ smart technologies to get faster and more reliable diagnosis and clinical evaluation, and more individually tailored treatment? But with the main stimulus for development coming from the commercial sector, a key question is whether societal goals will be kept in sight.

In fact, there are a number of issues here that we will want to think about. Sticking to the healthcare sector, we will need to consider, for example, the implications of AI-based decision-making for informed consent where the algorithms are obscure; how we assign responsibility for decisions and outcomes when machines have been involved; and what this might mean for the patient–practitioner relationship when we know that the doctor could also be in the dark, or not involved at all, in making a diagnosis or issuing a prescription.
While a robot carrying out surgery might seem alarming, if robots become better at it than surgeons, is that not a good thing?

Those who are developing and hoping to implement such systems will have to work with a wide range of people to better understand what society’s priorities are; what norms and expectations condition our acceptance and support; and what would cross the line into exploitation, abuse or simply unfair commercialisation. So these are not just tech issues. Addressing these questions will mean that considerations such as privacy, solidarity, justice and transparency need to be built into the wider systems, so that the environment can explicitly demonstrate inclusivity, openness, good governance and opportunities for redress where interests are harmed.

This discussion is not entirely new, of course. Elon Musk, not known to be afraid of technology development, has been expressing his anxieties about AI, and the need for governance and regulation, for quite a while. And the Commons Science and Technology Committee published a report on robotics and AI just over a year ago. It proposed that “a standing commission on artificial intelligence be established … to examine the social, ethical and legal implications of recent and potential developments in AI. It should focus on establishing principles to govern the development and application of AI techniques, as well as advising the government of any regulation required on limits to its progression.”

The bus is coming. Let us get on board, but let us take charge of where it is going.