Chinatopix reports that a new missile with on-board Artificial Intelligence, the Long Range Anti-Ship Missile (LRASM), will be deployed by both the U.S. Navy and U.S. Air Force by 2018. The AI will be able to pick out the correct ship to target within a fleet. In addition, the article states that multiple LRASMs can share information and attack as a swarm.
While not completely autonomous, this nevertheless represents a serious step towards ceding control of ordnance to a machine. Given how poorly the inner workings of many machine-learning systems are understood, this is a dangerous step.
Recently, the United Nations debated such Lethal Autonomous Weapon Systems (LAWS), with many countries pushing for an outright ban. With AI-based missiles already in development, the UN and the international community will have to speed up their deliberations if they are to prevent such weapons from ever being deployed.
It’s no surprise that the vast majority of people claim to understand what Artificial Intelligence is. Popular culture has been a key driver of this, coupled with the recent debates on the potential benefits and threats of AI and the visible successes of Google, Facebook and Apple.
However, there is a disconnect between what those companies have achieved and the portrayal of AI in cinema and literature. Almost all modern AI is focused on specific tasks, an approach known as Artificial Narrow Intelligence. It is very effective at doing one thing, but completely useless at anything else. Imagine asking Siri to drive your car for you and you’ll get a sense of just how narrow the intelligence is.
By contrast, the AI of the movies, and the AI being discussed as an existential threat, is Artificial General Intelligence. Such AGIs would be capable of independent action, motivation and autonomy. This type of AI does not exist to any great extent today and in fact seems to be as far away as it was when Alan Turing wrote about it over 60 years ago.
There is a good chance that the survey respondents are mistaken in their interpretation of Artificial Intelligence. Unfortunately, this mistake may turn into disillusionment when they find out just how far we have to go before there is a real HAL 9000.
Gemma Chan plays Anita, a Synth (synthetic human) in the drama Humans
No, this post isn’t about our species.
If you haven’t seen it yet, Humans is a new Sci-Fi drama currently showing on Channel 4 in the UK and AMC in the United States. It’s set in the present day: characters use tablets and laptops with today’s level of technology. The key difference is the existence of synthetic humans (“synths”), artificially intelligent robots built to look like humans.
I’ve watched a few episodes and one of the major themes explored is how real humans react to these synths in their lives. Some people are perfectly comfortable, others are completely disturbed by them.
But therein lies the problem that I have with the series. There doesn’t appear to be any intermediate level of technology between what we have today and these fantastically complex robots. As a result, the characters don’t seem to have had any time to habituate to the increasingly complex technology. Given that huge technology gap, the robots are just as alien to the characters as, well, aliens.
Despite this problem though, it’s an interesting drama, exploring complex themes. No doubt as real Artificial Intelligence becomes widespread in many different devices, we’ll see the same kind of reactions, including the tendency for real humans to anthropomorphize our toys.
It is with the principle that extraordinary claims require extraordinary evidence in mind that I was surprised to see the recent claim that a robot has become self-aware. Researchers at the Department of Cognitive Science at the Rensselaer Polytechnic Institute in New York made the claim after the robot apparently passed a test called the “wise men puzzle”, which is supposed to indicate self-awareness.
Here is the video that the researchers posted on YouTube apparently showing the effect:
It’s always difficult to understand how something works simply by watching its behaviour. We still have difficulty understanding some of the behaviours of animals, despite observing them for many thousands of years.
Indeed, there is the famous case of The Mechanical Turk, a machine that impressed audiences in the late 18th century by appearing to play a strong game of chess against a human opponent. It was later exposed as a hoax: a human chess player hidden inside made it appear that the machine itself was playing.
I am not suggesting for one moment that the researchers here are perpetrating a hoax. Instead, I am simply using this example to show how easily people can misinterpret things.
But let’s look at the substance of the claims in the video.
The robots in question are NAO robots from Aldebaran Robotics. These are rather sophisticated programmable robots, with sensors that pick up sound, tactile sensors for touch and cameras for vision, plus a speaker for, well, generating sounds. Each contains an Intel Atom CPU and runs Linux. In essence, then, it’s possible to write software for the robot (in several standard languages), download it and run it.
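To give a flavour of how approachable this is, here is a minimal sketch, assuming Aldebaran’s NAOqi Python SDK and a robot at a placeholder network address (purely illustrative, not the researchers’ code):

```python
# A minimal sketch using Aldebaran's NAOqi Python SDK; not the
# researchers' actual code. The address below is a placeholder.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # hypothetical address of the robot
PORT = 9559                # NAOqi's default port

# Proxy to the robot's text-to-speech module, then make it speak.
tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
tts.say("I don't know")
```

The point is simply that the robot is a programmable computer with unusual peripherals, not an exotic new kind of machine.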
While the robot looks humanoid and has lots of sensors, its processing capability is no different from your laptop’s. We can assume, then, that the researchers wrote a piece of software to carry out the test and loaded it onto the robot.
Now let’s look at the particular self-awareness test.
The King’s Wise Men is a logic puzzle in which several participants have each been given knowledge about the other participants, but not about themselves. Typically, the puzzle is given in story form, like this (from Wikipedia):
The King called the three wisest men in the country to his court to decide who would become his new advisor. He placed a hat on each of their heads, such that each wise man could see all of the other hats, but none of them could see their own. Each hat was either white or blue. The king gave his word to the wise men that at least one of them was wearing a blue hat – in other words, there could be one, two, or three blue hats, but not zero. The king also announced that the contest would be fair to all three men. The wise men were also forbidden to speak to each other. The king declared that whichever man stood up first and announced (correctly) the color of his own hat would become his new advisor. The wise men sat for a very long time before one stood up and correctly announced the answer. What did he say, and how did he work it out?
It’s fairly straightforward to solve this problem given the rules outlined above by looking at the possible combinations of hats. This kind of reasoning can be programmed using a tree structure and searching the possible combinations for one that is consistent with the information given.
(If you are wondering what the solution is, it’s on the Wikipedia page!)
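To make the searching concrete, here is a minimal sketch of my own (not anyone’s published code) that solves the hat puzzle by enumeration: list every world consistent with the king’s guarantee, then let each round of silence eliminate the worlds in which somebody could already have spoken.

```python
from itertools import product

COLOURS = ("white", "blue")
N = 3  # three wise men

# Every hat assignment consistent with the king's guarantee:
# at least one of the hats is blue.
worlds = [w for w in product(COLOURS, repeat=N) if "blue" in w]

def view(world, i):
    """The hats wise man i can see: everyone's but his own."""
    return world[:i] + world[i + 1:]

def deduction(candidates, world, i):
    """Man i's deduction in `world`: his own hat colour if it is the
    same in every candidate world matching what he sees, else None."""
    possible = {w[i] for w in candidates if view(w, i) == view(world, i)}
    return possible.pop() if len(possible) == 1 else None

def solve(actual):
    """Search round by round: if nobody announces, that silence rules
    out every world in which somebody *could* have announced."""
    candidates = list(worlds)
    for rnd in range(1, N + 1):
        for i in range(N):
            colour = deduction(candidates, actual, i)
            if colour:
                return rnd, i, colour
        candidates = [w for w in candidates
                      if all(deduction(candidates, w, i) is None
                             for i in range(N))]

print(solve(("blue", "blue", "blue")))  # -> (3, 0, 'blue')
```

Run on three blue hats, it reproduces the classic answer: after two rounds of silence, the first man can correctly announce “blue”. Nothing in the search requires anything we would call self-awareness.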
The puzzle given to the robots is in a slightly different format, one that relies on sound rather than hat colour. To solve it, a robot needs to recognize sounds that it makes itself, as distinct from sounds that originate elsewhere.
One of the robots deduces that it cannot know the answer given the information it has, and says so. However, speaking aloud changes the environment: the robot hears its own voice, and that new information allows it to find the solution.
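To see how mechanical that step can be, here is a toy sketch under an assumed simplification of the set-up (my invention, not the researchers’ code): the robot knows only that it is either muted or able to speak, and its own utterance supplies the missing evidence.

```python
# Toy model of the update; the muted/able-to-speak framing is an
# assumed simplification, not the researchers' actual protocol.
candidates = {"muted", "able to speak"}

def speak_and_listen(actually_able_to_speak):
    """Say "I don't know" and report whether we hear our own voice."""
    return actually_able_to_speak  # a muted robot hears nothing back

# Answering aloud produces the very evidence that was missing.
if speak_and_listen(actually_able_to_speak=True):
    candidates = {"able to speak"}
else:
    candidates = {"muted"}

print(candidates)  # {'able to speak'}: the robot can now revise its answer
```

Recognising its own voice lets the program discard one possibility; no introspection beyond that is required.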
At no point in this logical progression is there any hint that the robot is self-aware in the sense that a human would understand it. The robot may hold information about itself and about the other robots without any leap to self-awareness. We may as well ascribe self-awareness to our laptops, which “know” when they already hold a piece of information and when to go out to the Internet to fetch it (this mechanism is normally called a cache!).
In fact, the puzzle itself is not a test of self-awareness at all. Humans may solve the puzzle by imagining themselves to be one of the participants, using their own self-awareness to help. But this is not the only way to solve the puzzle.
Impressive though it may appear to be, this extraordinary claim does not have extraordinary proof. We can safely assume that despite the widespread reporting, this robot isn’t really self-aware. When the researchers present their findings at a conference in September, we can have a look at the claims in more detail.
His Highness Sheikh Mohammed bin Rashid Al Maktoum of the UAE
In a sign of things to come, the United Arab Emirates is running a competition to find applications for Artificial Intelligence and Robotics in the areas of Education, Healthcare and Social Services.
The focus on areas that have the potential to improve people’s lives is interesting and timely, given that one of the first major uses for IBM’s Watson is also in Healthcare.
With an award of US$1m for international entries, this competition is likely to attract a lot of interest around the world. A prize of that size also suggests that AI is finally entering the realm of respectability when it comes to attracting investment.
The competition’s organising committee will begin accepting submissions from 15th June, 2015. For more details on the competition, follow the link below.
I wrote recently that the UK Government opposed an international ban on developing autonomous weapons. That position emerged during a United Nations debate on the ethics and laws that should be created to govern the use of Lethal Autonomous Weapon Systems.
Matthew Hipple (a United States Navy Surface Warfare Officer), writing in War On The Rocks, makes the case that this debate is already too late. He states:
Whether opponents realize it or not, weapon autonomy — to include the choice to kill — will win, and in some cases has already won, the drone debate. The false wall in the public’s understanding between “drones” and existing weapons is publicly cracking. Before long, military necessity will take over. In fact, it already has.
If history has shown us anything, it is that military applications of new technology will always be developed. It remains to be seen whether we can retain control over those technologies once they become autonomous.
Arguably, the most successful robots to date have been our interplanetary explorers. The twin Mars Exploration Rovers, Spirit and Opportunity, far exceeded their operational design lifetimes, lasting years instead of the planned 90 days.
Voyager 1, launched in 1977, has recently left the solar system, becoming the first man-made object to do so.
V’Ger from Star Trek: The Motion Picture
It’s reasonable to assume that our first interstellar explorer will be followed by many more. Indeed, the first explorer from Earth to reach a planet orbiting another star may well be an AI-powered robot, albeit far in the future. That is far more likely than a human astronaut making the journey.
As noted in the Daily Express article below, this implies that our first contact with extraterrestrials may well be with an Artificial Intelligence. The article is based on a recent paper by Susan Schneider, who suggests that we should be looking for signals from artificial life. Such life may not depend on the requirements of organic life as we know it, so we could widen the scope of programs like SETI accordingly.
It’s also likely that should ET show up at our planet, it would be an AI. Given the recent negative stories on AI, this may not be a good thing. And we don’t even have the Avengers to look after us.