For several years now, few articles about artificial intelligence have appeared in the popular press without an accompanying picture of a Terminator robot. The point is clear: artificial intelligence is coming and it is terrifying.
Having sown the seeds of fear, the headline writers are now subtly reinforcing that view.
Take TechCrunch, which claims on its Editorial page to be “delivering top-notch reporting on the business of the tech industry”. This week it covered a story about Google using machine learning algorithms, developed by its sibling company DeepMind, to improve the efficiency of its data centres. These algorithms look after the cooling systems and should deliver energy savings of 30%. This is a genuinely good use of AI: it makes an expensive process cheaper and benefits the environment too.
But the headline is pure click-bait. Instead of focusing on the positives, the headline reads “Google gives its AI the reins over its data center cooling systems”. It invokes mental images of Skynet, HAL 9000 and even VIKI from I, Robot taking over. Yes, Google has an AI. It’s giving more control to it every day. You should be frightened.
Except it’s not true. And it’s really irritating.
Google doesn’t have an AI. It does have complicated decision-making software running its data centre cooling systems, one of the many complex software systems that keep the company alive every day. Most of this decision making is automated using traditional software development techniques. Lately, more of it uses machine learning models to make decisions faster than people could.
What is irritating about this is that it distracts people from the real problems that AI is causing. Hard social problems demand attention: the potential loss of jobs to automation, the bias inherent in machine learning algorithms, and the concentration of this immense power in corporate hands with no oversight.
These problems are complex and require lots of thinking and discussion by people to enable society to address the effects of powerful technology. We are poorly served by click-bait headlines in preparing for the artificial intelligence future.
The Guardian reports on a recent paper by University College London researchers who are using artificial intelligence to predict the outcome of trials at the European Court of Human Rights.
Their approach employs natural language processing (NLP) to build a machine learning model, using the text records from previous trials. As such, it demonstrates the power of modern NLP techniques. Given enough relevant text in a particular area, NLP can discover complex underlying patterns. These patterns are then used to predict an outcome using new case texts.
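The general technique can be sketched briefly. The code below is a minimal illustration of text classification, not the researchers' actual model: case texts are turned into term-weight vectors (TF-IDF) and a linear classifier learns which patterns of words tend to accompany each outcome. The tiny corpus and the outcome labels here are invented for illustration.

```python
# A minimal sketch of outcome prediction from case texts using TF-IDF
# features and a linear classifier. The corpus below is invented; the
# UCL researchers' real data, features and model are more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past case texts and their (invented) outcomes.
case_texts = [
    "applicant detained without trial for an extended period",
    "applicant given a fair and public hearing before a tribunal",
    "prolonged detention with no access to legal counsel",
    "hearing conducted openly with full legal representation",
]
outcomes = ["violation", "no-violation", "violation", "no-violation"]

# Convert each text into a vector of term weights, then fit a classifier.
vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(case_texts)
model = LogisticRegression()
model.fit(X, outcomes)

# Predict the outcome for a new, unseen case text.
new_case = ["applicant held in detention without a hearing"]
prediction = model.predict(vectoriser.transform(new_case))
print(prediction[0])
```

Note that the fitted model exposes only learned term weights, which is precisely the explainability gap discussed next: the numbers say *that* a prediction was made, not *why* in any legally meaningful sense.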
However, the biggest obstacle to it being used in courts is that it is totally unable to explain why it has made the prediction it has. This problem plagues many machine learning implementations. The underlying mathematics of machine learning models is understood, but not well enough to say for certain why a given input produces a given output. A human judge, by contrast, can explain their reasoning.
So for the moment, an AI won’t be determining if you really are innocent or guilty in a court of law.
Chinatopix reports that a new missile with on-board Artificial Intelligence will be deployed by both the U.S. Navy and U.S. Air Force by 2018. The AI will be able to pick out the correct ship to target within a fleet. In addition, the article states that multiple LRASMs can share information, and attack as a swarm.
While not completely autonomous, this nevertheless represents a serious step towards ceding control of ordnance to a machine. Given the current poor understanding of how much machine learning actually works, this is a dangerous step.
Recently, the United Nations debated such Lethal Autonomous Weapon Systems (LAWS), with many countries pushing for an outright ban. With AI-based missiles in development, the UN and the international community will have to speed up their deliberations in order to prevent such weapons ever being deployed.
Much of the coverage in mass media about artificial intelligence and machine learning tends towards an alarmist position about robots taking over the world. In most examples, there’ll be a picture of the Terminator robot and a reference to the Elon Musk / Stephen Hawking claim that we should be afraid of AI.
I’ve written before that this isn’t really the problem that we should be thinking about. Instead, it’s the simpler narrow AI that will steadily erode jobs by carrying out specific tasks that today require humans to do.
In an article by Jeff Goodell in Rolling Stone magazine, it appears that this point of view is finally reaching the mainstream press:
In fact, the problem with the hyperbole about killer robots is that it masks the real risks that we face from the rise of smart machines – job losses due to workers being replaced by robots, the escalation of autonomous weapons in warfare, and the simple fact that the more we depend on machines, the more we are at risk when something goes wrong, whether it’s from a technical glitch or a Chinese hacker.
A new report from Bank of America Merrill Lynch has added to the recent spate of analysis predicting a massive impact on work and jobs by robotics and artificial intelligence (as reported in the Guardian).
They estimate that up to 35% of all jobs in the UK (47% in the US) are at risk of displacement by technology within 20 years. This is going to cause a huge shift in the type of work that people can expect to do in the future. It has important implications for education policy, jobs and economic growth. In addition, it is incumbent on politicians and policy makers to ensure that the benefits from increased automation are widely distributed.
A common counterpoint is that by eliminating some jobs, technology creates others. However, the authors note:
“The trend is worrisome in markets like the US because many of the jobs created in recent years are low-paying, manual or services jobs which are generally considered ‘high risk’ for replacement,” the bank says.
While 20 years may seem far in the future, children born this year will just be entering the workforce then. They may be faced with having no jobs to look forward to.
Professor Stuart Russell is a long-time researcher in Artificial Intelligence and wrote (along with Peter Norvig) the widely used undergraduate text “Artificial Intelligence: A Modern Approach”. Last month he delivered this excellent lecture at The Centre for the Study of Existential Risk. He makes very good arguments that although sentient AI may be far in the future, it is worth considering the ethical possibilities as soon as possible.
It’s quite a long video, but worth the time to watch through.
[Note the sound quality is poor for the first several minutes but is clear thereafter.]
In a sign of things to come, the United Arab Emirates is running a competition to find applications for Artificial Intelligence and Robotics in the areas of Education, Healthcare and Social Services.
The focus on areas that have the potential to improve people’s lives is interesting and timely, given that one of the first major uses for IBM’s Watson is also in Healthcare.
With an award for international entries of US$1m, this competition is likely to attract a lot of interest around the world. Furthermore, with the substantial award, it looks like AI is finally entering the realm of respectability when it comes to attracting investment.
The competition’s organising committee will begin accepting submissions from 15th June, 2015. For more details on the competition, follow the link below.
The basis of his argument is that computers have rapidly progressed in speed, beating the world chess champion Garry Kasparov along the way. As such, they may be able to learn in the future and interact with us as a human would.
If they get that far, then we should be concerned about keeping them under our control. He notes that some people in AI are already concerned about regulating it in a similar way to biotechnology.
There are a few points worth bearing in mind about this call to regulate.
The examples he mentions are all forms of “narrow” Artificial Intelligence (or ANI). Today this is where most AI research is: narrowly focused on specific problem domains. Such AIs are tailored to those problems and don’t work on other problems.
There is a large leap from ANI to AGI (Artificial General Intelligence), which you or I would recognise from its portrayal in numerous films (see The Terminator). Research has not made any significant inroads into creating anything approaching an AGI.
Calls for regulating AGIs are definitely premature. We might as well call for regulation of mining on Jupiter’s moon Europa; that is how far we are from AGI today.
There is one important step that has been overlooked. ANIs will make a huge impact on society, carrying out specific tasks and jobs that today are carried out by humans. It is this that needs to be examined in detail by a wide range of people, including economists, sociologists, and policy makers.
This article in the Guardian Newspaper by John Naughton is a lot less sensationalistic than some recent existential scaremongering. Nevertheless, the underlying argument is just as threatening:
but there’s little doubt that the main thrust of the research is accurate: lots of non-routine, cognitive, white-collar as well as blue-collar jobs are going to be eliminated in the next two decades and we need to be planning for that contingency now.
However, the author is not optimistic about that planning taking place for two reasons, one related to our political short-termism, which ignores anything that has a horizon longer than the next general election, the other related to our innate incapacity for dealing with change.
There are other reasons too. The popular concept of AI is still rooted in science fiction (perhaps due to Hollywood movies). This means that any discussion around its impact on day-to-day life may be met with incredulity. Only when AI is everywhere will people realise that planning is needed.
Another reason for not focusing on planning for change is simple economics. If a corporation can replace a worker with an Artificial Intelligence that can operate around the clock, then it is compelled to do so to grow its profits.
In any event, commerce will ensure that the current pace of development of AI and Machine Learning will continue and accelerate. The time left for planning is short.
Any article in the popular press about Artificial Intelligence that includes a picture of the Terminator robot is guaranteed to be focused on one thing alone: building up fear.
Source: News Corp Australia
According to the article below, Japanese scientists have created a robot that can swordfight. This naturally leads to the suggestion that we are on the brink of having Terminator robots running all over the planet killing at will.
While it makes for a good story, it largely ignores the fact that this robot probably can’t do much else. This trait is shared by most AI algorithms: they are brittle. That isn’t necessarily a bad thing. Because it is focused on a particular area, an AI can do a lot of good in that area, and its uselessness outside that area makes it easier to control. This Artificial Narrow Intelligence is what we have today and is the state of the art.
Terminator robots are examples of Artificial General Intelligence, and we are a very long way away from that.