The Problem With Humans

Gemma Chan plays Anita, a Synth (synthetic human) in the drama Humans

No, this post isn’t about our species.

If you haven’t seen it yet, Humans is a new Sci-Fi drama currently showing on Channel 4 in the UK and AMC in the United States. It’s set in today’s world; characters use tablets and laptops with today’s level of technology. The key difference is the existence of synthetic humans (“synths”). These are artificially intelligent robots, built to look like humans.

I’ve watched a few episodes and one of the major themes explored is how real humans react to these synths in their lives. Some people are perfectly comfortable, others are completely disturbed by them.

But therein lies the problem that I have with the series. There doesn’t appear to be any intermediate level of technology between what we have today and these fantastically complex robots. As a result, the characters don’t seem to have had any time to habituate to the increasingly complex technology. Given that huge technology gap, the robots are just as alien to the characters as, well, aliens.

Despite this problem though, it’s an interesting drama, exploring complex themes. No doubt as real Artificial Intelligence becomes widespread in many different devices, we’ll see the same kind of reactions, including the tendency for real humans to anthropomorphize our toys.


Keep calm. The robot isn’t self-aware (probably).

Marcello Truzzi was a professor of sociology and a founder of the Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP) during the 1970s. He is credited with originating a tenet of rational scepticism:

Extraordinary claims require extraordinary proof.

It is with this principle in mind that I was surprised to see the recent (extraordinary) claim that a robot has become self-aware. Researchers in the Department of Cognitive Science at Rensselaer Polytechnic Institute in New York made the claim after a robot apparently passed a test called the “wise men puzzle”, which is supposed to indicate self-awareness.

Here is the video that the researchers posted on YouTube apparently showing the effect:

It’s always difficult to understand how something works simply by watching its behaviour. We still have difficulty understanding some of the behaviours of animals, despite observing them for many thousands of years.

Indeed, there is the famous case of the Mechanical Turk, a machine that impressed audiences during the late 18th century by appearing to play a strong game of chess against a human opponent. However, it was later exposed as a hoax: a human chess player hidden inside the cabinet made it appear that the machine itself was playing.

I am not suggesting for one moment that the researchers here are perpetrating a hoax. Instead, I am simply using this example to show how easily people can misinterpret things.

But let’s look at the substance of the claims in the video.

The robots in question are NAO robots from Aldebaran Robotics. These are rather sophisticated programmable robots, with microphones that can pick up sound, tactile sensors for touch, and cameras for vision. Each also has a speaker for, well, generating sounds, and contains an Intel Atom CPU running Linux. In essence, then, it’s possible to write software for the robot (in several different standard languages), download it to the robot, and run it.

While the robot looks humanoid and has lots of sensors, its processing capability is no different from that of your laptop. We can assume, then, that the researchers wrote a piece of software to carry out the test and loaded it onto the robot.
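For a sense of what that involves, here is a minimal sketch using Aldebaran’s NAOqi Python SDK. The IP address is a placeholder, and I am of course assuming the researchers’ actual software was far more elaborate:

```python
# A minimal sketch of driving a NAO robot with Aldebaran's NAOqi
# Python SDK. The IP address below is a placeholder; the rest follows
# the SDK's standard ALProxy pattern.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"   # placeholder: your robot's address
PORT = 9559                 # NAOqi's default port

# Each capability (speech, audio, memory, ...) is exposed as a module;
# ALTextToSpeech drives the loudspeaker mentioned above.
tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
tts.say("I don't know")
```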

Now let’s look at the particular self-awareness test.

The King’s Wise Men is a logic puzzle in which several participants have each been given knowledge about the other participants, but not about themselves. Typically, the puzzle is given in story form like this (from Wikipedia):

The King called the three wisest men in the country to his court to decide who would become his new advisor. He placed a hat on each of their heads, such that each wise man could see all of the other hats, but none of them could see their own. Each hat was either white or blue. The king gave his word to the wise men that at least one of them was wearing a blue hat – in other words, there could be one, two, or three blue hats, but not zero. The king also announced that the contest would be fair to all three men. The wise men were also forbidden to speak to each other. The king declared that whichever man stood up first and announced (correctly) the color of his own hat would become his new advisor. The wise men sat for a very long time before one stood up and correctly announced the answer. What did he say, and how did he work it out?

Given the rules outlined above, it’s fairly straightforward to solve this problem by looking at the possible combinations of hats. This kind of reasoning can be programmed using a tree structure, searching the possible combinations for one that is consistent with the information given (see the sketch below).

(If you are wondering what the solution is, it’s on the Wikipedia page!)
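To make that concrete, here is a short Python sketch. The state space is so small that a blind search over all eight hat assignments suffices; the way I encode the king’s promise of fairness (no man’s view of the other hats may give him an information advantage) is my own assumption about how to formalise it:

```python
from itertools import product

COLOURS = ("white", "blue")

def consistent(hats):
    """Keep only hat assignments compatible with the king's statements."""
    # The king guaranteed at least one blue hat.
    if "blue" not in hats:
        return False
    # The king declared the contest fair. One way to encode that
    # (an assumption of this sketch): every man's view of the other
    # two hats must be identical, or someone would have an advantage.
    views = [tuple(sorted(h for j, h in enumerate(hats) if j != i))
             for i in range(len(hats))]
    return all(v == views[0] for v in views)

# Search the whole (small) space of hat assignments.
candidates = [hats for hats in product(COLOURS, repeat=3) if consistent(hats)]
print(candidates)  # [('blue', 'blue', 'blue')] -- all three hats must be blue
```

The point is only that an exhaustive search over eight cases settles the puzzle; no self-awareness is required anywhere in the process.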

The puzzle given to the robots is slightly different in format: it relies on sound rather than hat colour. In order to solve the problem, a robot needs to recognize sounds it makes itself, as distinct from sounds that originate elsewhere.

One of the robots deduces that it cannot know the answer given the information it has, and says so. Saying so, however, changes the environment: the robot hears its own voice, and that new information allows it to find a solution.
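Here is a toy simulation of that update step. The Robot class, the “muted” flag standing in for whatever silenced the other robots, and the attempt_answer logic are all illustrative assumptions of mine, not the researchers’ actual code:

```python
# Toy simulation of the update step described above: a robot that
# cannot answer from its initial knowledge tries to say so, and the
# act of hearing its own voice supplies the missing fact.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted        # ground truth, hidden from the robot
        self.knows_answer = False

    def speak(self, text):
        """Try to speak; return True if the robot hears its own voice."""
        if self.muted:
            return False          # no sound is produced, so none is heard
        print(self.name + ": " + text)
        return True

    def attempt_answer(self):
        # From its initial knowledge the robot cannot decide, so it
        # tries to say "I don't know."
        if self.speak("I don't know"):
            # New information: it heard itself, so it cannot be muted.
            self.knows_answer = True
            self.speak("Sorry, I know now: I was able to speak.")

robots = [Robot("nao1", muted=True),
          Robot("nao2", muted=True),
          Robot("nao3", muted=False)]
for robot in robots:
    robot.attempt_answer()
# Only nao3 speaks, hears itself, and revises its answer.
```

Notice that the update is a purely mechanical inference over new sensor data; nothing in it requires the robot to have a concept of itself.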

At no point in this logical progression is there any hint that the robot is self-aware in the sense that a human would understand it. The robot may contain information that relates to it and to other robots without the necessary leap to self-awareness. We may as well ascribe self-awareness to our laptops that “know” when they have information themselves and when to go out to the Internet to get it (these are normally called caches!).

In fact, the puzzle itself is not a test of self-awareness at all. Humans may solve the puzzle by imagining themselves to be one of the participants, using their own self-awareness to help. But this is not the only way to solve the puzzle.

Impressive though it may appear to be, this extraordinary claim does not have extraordinary proof. We can safely assume that despite the widespread reporting, this robot isn’t really self-aware. When the researchers present their findings at a conference in September, we can have a look at the claims in more detail.

Professor Margaret Boden’s talk at the Centre for the Study of Existential Risk

Pandora's Brain

Professor Boden has been in the AI business long enough to have worked with John McCarthy and some of the other founders of the science of artificial intelligence. During her animated and compelling talk to a highly engaged audience at CSER in Cambridge last month, the sparkle in her eye betrayed the fun she still gets from it.

The main thrust of her talk was that those who believe that an artificial general intelligence (AGI) may be created within the next century are going to be disappointed. She was at pains to emphasise that the project is feasible in principle, but she offered a series of examples of things which AI systems cannot do today, which she is convinced they will remain unable to do for a very long time, and perhaps forever.

Professor Boden likes to laugh, and she likes to make other people laugh. Her first example concerned two…
