Life of Francis of Assisi by José Benlliure y Gil (Source: Wikipedia)
Lots of recent writing on Artificial Intelligence is focused on the far-off vista of human-level, AI-powered robots in everyday life. The main philosophical discussions revisit the age-old human questions of sentience, autonomy, free will and self-determination. We can now add spirituality to that list, strange as it may seem.
“…it stands to reason that AI will be able to teach us a thing or two about what it means to follow God.” Rev. Christopher Benek
On the face of it, this is rather strange to me. Today's AIs are very mathematical and use well-understood techniques (even if those techniques are sometimes surprisingly powerful and still lack a full theoretical understanding). So if a human-level AI is eventually created, we can be certain that we'll be able to understand exactly what it is and how it works. But it won't be following God.
This reminds me of a scene in the film Contact. Jodie Foster plays radio astronomer Eleanor (Ellie) Arroway, who discovers an encoded message being broadcast from an alien civilization in deep space. Invited to a briefing at the White House by Rachel Constantine (Angela Bassett) to discuss this, she is very surprised by the opinions of Richard Rank (Rob Lowe), head of a conservative Christian organisation. At that point in the movie, the message consists of a series of prime numbers accompanied by many pages of engineering schematics. Their dialogue goes like this:
Rank: “My problem is this. The content of the message is morally ambiguous at best…”
Arroway: “This is nuts.”
Rank (forcefully): “Excuse me, Miss. We know nothing of these creatures’ values. The fact of the matter is we don’t even know if they believe in God.”
Arroway: “This doesn’t make any sense. If you were to ask…”
Constantine: “Excuse me Dr. Arroway. We won’t be suppressing any opinions here today.”
Although Constantine is correct to point out that different people's opinions should not be suppressed, it is also true that not all opinions have equal merit. This is particularly true when it comes to interpreting a set of mathematical equations, engineering schematics or algorithms.
The scientist Arroway and the religious Rank have such completely different perspectives on, and understandings of, the same thing that they can't even communicate properly.
But to state that an AI composed of algorithms would “participate in the process of discipleship and spiritual formation that is inherent to Christ’s redemptive purposes” – well, to quote Arroway: This doesn’t make any sense.
It is with this principle in mind that I was surprised to see the recent (extraordinary) claim that a robot has become self-aware. Researchers in the Department of Cognitive Science at Rensselaer Polytechnic Institute in New York made the claim based on the fact that the robot apparently passed a test called the “wise men puzzle”, which is supposed to indicate self-awareness.
Here is the video that the researchers posted on YouTube apparently showing the effect:
It’s always difficult to understand how something works simply by watching its behaviour. We still have difficulty understanding some of the behaviours of animals, despite observing them for many thousands of years.
Indeed, there is the famous case of The Mechanical Turk, a machine that impressed audiences during the late 18th century by appearing to play a strong game of chess against a human opponent. It was later exposed as a hoax: a human chess player hidden inside the cabinet was making the moves.
I am not suggesting for one moment that the researchers here are perpetrating a hoax. Instead, I am simply using this example to show how easily people can misinterpret things.
But let’s look at the substance of the claims in the video.
The robots in question are NAO robots from Aldebaran Robotics. These are rather sophisticated programmable robots with microphones to pick up sound, tactile sensors for touch and cameras for vision, plus a speaker for, well, generating sounds. Each one contains an Intel Atom CPU and runs Linux. In essence, then, it's possible to write software for the robot (in several different standard languages), download it and run it.
While the robot looks humanoid and has lots of sensors, its processing capability is no different from your laptop's. We can assume, then, that the researchers wrote a piece of software to carry out the test and added it to the robot.
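As a rough illustration of what that looks like in practice, here is a minimal sketch using Aldebaran's NAOqi Python SDK. The IP address is a placeholder for wherever your robot happens to be on your network, and this is just the "hello world" style of NAO programming, not anything to do with the researchers' test code:

```python
# Minimal NAO sketch using Aldebaran's NAOqi Python SDK.
# The IP address below is a placeholder, not a real robot.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"   # hypothetical address of the NAO on your network
NAOQI_PORT = 9559           # default NAOqi port

# Connect to the robot's text-to-speech module and make it speak.
tts = ALProxy("ALTextToSpeech", ROBOT_IP, NAOQI_PORT)
tts.say("Hello, I am a NAO robot.")
```

Anything more sophisticated, the self-awareness test included, is still just a program like this one, running on fairly ordinary hardware.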
Now let's look at the particular self-awareness test.
The King’s Wise Men is a logic puzzle in which there are several participants that have been given knowledge about the other participants, but not about themselves. Typically, the puzzle is given in a story form like this (from Wikipedia):
The King called the three wisest men in the country to his court to decide who would become his new advisor. He placed a hat on each of their heads, such that each wise man could see all of the other hats, but none of them could see their own. Each hat was either white or blue. The king gave his word to the wise men that at least one of them was wearing a blue hat – in other words, there could be one, two, or three blue hats, but not zero. The king also announced that the contest would be fair to all three men. The wise men were also forbidden to speak to each other. The king declared that whichever man stood up first and announced (correctly) the color of his own hat would become his new advisor. The wise men sat for a very long time before one stood up and correctly announced the answer. What did he say, and how did he work it out?
It’s fairly straightforward to solve this problem given the rules outlined above by looking at the possible combinations of hats. This kind of reasoning can be programmed using a tree structure and searching the possible combinations for one that is consistent with the information given.
(If you are wondering what the solution is, it’s on the Wikipedia page!)
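To make that concrete, here is a minimal sketch (in Python, purely for illustration and not anything the researchers used) of the brute-force approach: enumerate every possible hat assignment and keep only those consistent with what the king announced. The fairness check is one simple way of encoding the king's promise that no wise man has an informational advantage.

```python
# Brute-force search over the possible hat assignments in the King's Wise Men
# puzzle. Purely illustrative; nothing here requires any notion of "self".
from itertools import product

COLOURS = ("white", "blue")

def at_least_one_blue(hats):
    return "blue" in hats

def fair_to_all(hats):
    # One simple encoding of fairness: no wise man may have an informational
    # advantage, so what each man sees (the other two hats) must be the same
    # for all of them.
    views = [tuple(sorted(hats[:i] + hats[i + 1:])) for i in range(len(hats))]
    return len(set(views)) == 1

# Keep only the assignments consistent with everything the king said.
consistent = [hats for hats in product(COLOURS, repeat=3)
              if at_least_one_blue(hats) and fair_to_all(hats)]

print(consistent)  # prints the only assignment consistent with the king's statements
```

Nothing in that search requires the program to have any notion of "self"; it is just constraint checking over a handful of possibilities.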
The puzzle given to the robots to solve is slightly different in format: it relies on sound rather than hat colour. In order to solve the problem, the robot needs to recognize sounds it makes itself (as distinct from sounds that originate elsewhere).
One of the robots deduces that it cannot know the answer given the information it has and says so. However, this changes the environment and adds new information that allows it to find a solution.
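Here is a toy sketch of that kind of update, written as a tiny Python simulation. Everything in it (the class, the silenced flag, the phrasing) is my own invention to illustrate the bookkeeping involved; it is not the researchers' code:

```python
# Toy simulation of the sound-based puzzle as described above: a robot that
# cannot tell from its initial knowledge whether it has been silenced, but can
# update that knowledge after detecting (or not detecting) its own voice.

class PuzzleRobot:
    def __init__(self, name, silenced):
        self.name = name
        self.silenced = silenced   # ground truth, unknown to the robot itself
        self.possible_states = {"silenced", "not silenced"}

    def try_to_speak(self, phrase):
        """Attempt to speak; return True if the robot hears its own voice."""
        if self.silenced:
            return False
        print(f"{self.name}: {phrase}")
        return True

    def answer(self):
        # Initially the robot cannot distinguish the two states, so it says so.
        if len(self.possible_states) > 1:
            heard_self = self.try_to_speak("I don't know.")
            # Hearing (or not hearing) its own voice is new information that
            # eliminates one of the remaining possibilities.
            if heard_self:
                self.possible_states = {"not silenced"}
                self.try_to_speak("Sorry, I know now: I was not silenced.")
            else:
                self.possible_states = {"silenced"}
        return self.possible_states

robot = PuzzleRobot("nao_1", silenced=False)
print(robot.answer())   # {'not silenced'}
```

The "new information" is simply that the robot heard (or didn't hear) its own voice, and the "deduction" is the removal of one entry from a set of possibilities.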
At no point in this logical progression is there any hint that the robot is self-aware in the sense that a human would understand it. The robot may contain information that relates to itself and to other robots without making the necessary leap to self-awareness. We may as well ascribe self-awareness to our laptops, which "know" when they already have a piece of information and when to go out to the Internet to get it (these mechanisms are normally called caches!).
In fact, the puzzle itself is not a test of self-awareness at all. Humans may solve the puzzle by imagining themselves to be one of the participants, using their own self-awareness to help. But this is not the only way to solve the puzzle.
Impressive though it may appear to be, this extraordinary claim does not have extraordinary proof. We can safely assume that despite the widespread reporting, this robot isn’t really self-aware. When the researchers present their findings at a conference in September, we can have a look at the claims in more detail.
In the video below, as reported by the Telegraph newspaper in the United Kingdom, Astronomer Royal Martin Rees says that artificial intelligence needs to be regulated.
[Click here to see the video. It’s hosted on Ooyala and I can’t get it to embed properly on my blog. If you know how, please let me know!]
The basis of his argument is that computers have progressed rapidly in speed, beating world chess champion Garry Kasparov along the way. As such, they may in future be able to learn and to interact with us as a human would.
If they get that far, then we should be concerned about keeping them under our control. He notes that some people in the AI field are already asking whether it should be regulated in a similar way to biotechnology.
There are a few points worth bearing in mind about this call to regulate.
The examples he mentions are all forms of “narrow” Artificial Intelligence (or ANI). This is where most AI research is today: narrowly focused on specific problem domains. Such AIs are tailored to those domains and don't work outside them.
There is a large leap from ANI to AGI (Artificial General Intelligence), which you or I would recognise from its portrayal in numerous films (see The Terminator). Research has not made any significant inroads into creating anything approaching an AGI.
Calls for regulating AGIs are definitely premature. We might as well call for the regulation of mining on Jupiter's moon Europa; that is how far away from AGI we are now.
There is one important step that has been overlooked. ANIs will make a huge impact on society, carrying out specific tasks and jobs that today are performed by humans. It is this that needs to be examined in detail by a wide range of people, including economists, sociologists and policy makers.
The recent movie The Imitation Game, starring Benedict Cumberbatch, brought the work of Alan Turing to a much wider audience than was previously the case.
The movie has an interesting scene where Alan discusses the concept of an intelligent machine with the policeman who arrested him. He asks if you would consider a machine to be intelligent if a human couldn’t tell the difference between it and another human when conversing with it.
He published this idea in a paper in 1950, calling the scenario The Imitation Game. It later became known as the Turing Test.
Every year, a competition called the Loebner Prize is held to see whether real-world Artificial Intelligence algorithms can pass the Turing Test. Since the competition started in 1991, no algorithm has managed to pass.
This year, the competition will be held at Bletchley Park, where Alan Turing worked on breaking the German Enigma Code. The deadline for applications is 1st July 2015 and the final will be held on 19th September.
You can read Turing’s original paper proposing the test here:
The quote below sounds suspiciously like the rationale the fictional Dr. John McKittrick used in the movie WarGames (1983) for letting the War Operation Plan Response (WOPR) computer run World War III while the decisions were still made by humans.
…the algorithm should be able to perform extremely efficient image recognition as well as natural language understanding processes. The final word belongs still to people but the intelligent algorithm should be there to avoid a “human error”.
The problem with that approach is that "the people" will probably follow the AI, as they won't be in a position to dispute its conclusions. This will be especially true if the AI is extremely efficient at image recognition and natural language processing.
In WarGames, the point was clearly made that in the event of a Soviet attack, there wouldn’t be time for humans to evaluate the courses of action available and they would probably follow the AI’s recommendations.
As we come to rely more and more on machine learning and AI algorithms, we will need to establish trust that they are running correctly for us to follow their suggestions.
Given the detailed mathematics, complex algorithms and huge datasets involved, it is going to be a real challenge for these systems to show the level of transparency required to establish that necessary trust.
Ryan Calo (@rcalo) writes an interesting piece in Forbes (see link below) about conferring human rights on Artificial Intelligences, a theme Alex Garland has been discussing during the promotion of his new film Ex Machina.
He rightly points out that we can't do this without radical changes to our laws and institutions. He mentions the right to reproduce as one that, should an AI choose to exercise it, could leave us overwhelmed.
He also states:
There is reason to believe we will never be able to recreate so-called strong artificial intelligence.
This is something that is still under debate, so it’s not at all decided that we cannot create Strong AI. However, we are a long way from even generating a rudimentary Artificial General Intelligence.
Nevertheless, it is worth considering the hypothetical rights that could be granted to an AI. Clearly, an AI is not a human in the strict biological sense. But should this distinction mean an AI should be treated as a lesser entity, even if it demonstrates all the sentience of a human? This is uncomfortably close to the way slaves were treated in our past and is sure to be a hot topic of debate.
In any event, history has shown that politics and laws always trail behind technologies. Should this continue to be the case, we will attain Strong AI long before the case AI vs. State reaches the inside of a courtroom.
“Where do we draw the line between our devices and ourselves?”
This is a really interesting experimental art piece (via the New York Times): a fictional life coach that gets to know you through an app on your phone. Interacting with it exposes the user's psyche to examination and builds rapport with the app.
In some ways it's reminiscent of the artificial intelligence operating system "Samantha" in Spike Jonze's film "Her". Joaquin Phoenix's character develops a relationship with Samantha, even though it (she?) is just a piece of software.
It will be available shortly in the Apple App Store and is sure to be an interesting download.