The potential for autonomous systems, robots, and artificial intelligence (AI) to perform tasks currently carried out by humans is becoming an ever-closer reality, as demonstrated by the growth in Alexa- and Siri-enabled devices and services and the increasing number of online services that now come with some form of chatbot to aid customer interactions. While it may seem too soon to consider a legal framework to govern such technology, that has not stopped prominent groups from calling for one.
Earlier this month, the European Parliament launched a public consultation on robotics and AI. The consultation comes after the European Parliament's legal affairs committee voted in favor of a resolution calling for new laws on robotics and AI, together with a voluntary ethical code of conduct for developers and designers, overseen by a dedicated pan-European agency.
It is not only supranational bodies that are taking an interest. National governments are also exploring legislation, even though it is not clear where this might sit within domestic law. During a recent reading of the UK's Digital Economy Bill, an amendment was tabled that would have given Ofcom, the UK's telecommunications regulator, the power to "carry out and publish evaluations of algorithms." It was withdrawn once it became clear how large an undertaking this would be. However, the UK government is pressing ahead with plans to introduce a "single insurer model" to determine how payouts to innocent victims of collisions involving driverless cars should be handled and where liability should lie. Safety-critical systems such as driverless cars will require legislation in order to operate at all; the sooner it is introduced, the easier adoption will be – without legislation in place, such vehicles will not be allowed onto public highways.
It is not just public bodies calling for clarity, although some of those calls advocate a measured approach. In a recent interview with Quartz, Bill Gates suggested that robots that take over human jobs should be taxed at a rate comparable to that paid by the workers they replace, partly to compensate those workers. This would probably also slow the spread of automation, giving governments time to work through the ethical questions and manage the impact on public sector finances.
It is easy to see why people are wary of robots and AI. In a recent report, Reform, a right-of-center think tank in the UK, estimated that up to 250,000 public sector workers could lose their jobs to robots over the next 15 years. While estimates of adoption vary – you could probably find a forecast to fit your preferred version of the future – it is fair to say that an increasing share of employees are seeing (or will soon see) parts of their current work automated, even if only administrative tasks. Enterprises are keen to adopt such technology to increase productivity, drive growth, and cut costs, and more and more employees are likely to see automation encroach on their jobs over the next five years.
We are still some way from the point at which legislation has traditionally been considered necessary – when a societal or technological change demands an adequate legal framework, or when the majority of homes and businesses own an autonomous robot. While these technologies feature increasingly on CIOs' radars, even the most advanced enterprises tend to be at the trial stage and have yet to figure out how best to use the technology.
Should public sector CIOs be concerned that they are suddenly going to be hit by a raft of red tape over their citizen-facing chatbots? This is unlikely, but as pilots mature, the next challenge for CIOs in both the private and public sectors will be to scale up from a pilot to an enterprise-wide implementation. It is then that CIOs – and CEOs – will need to consider changes to policies to accommodate the interaction of staff with virtual staff, robots, and autonomous systems, as well as aligning IT practices and policies with the requirements of the technology. They will also need to think about engagement strategies with staff and customers to ensure concerns are addressed and risks managed.
The ethics, insurance, liability, and standards around algorithms, autonomous systems, and AI are not yet clear, because the market is not mature enough. But pressure for greater clarity will mount as pilots increasingly shift to enterprise-wide adoption, and legislative bodies will need to tread a fine line if the industry is not to stall.
Straight Talk is a weekly briefing from the desk of the Chief Research Officer. To receive this newsletter by email, please contact us.