A great question for the philosophers: if a machine or even a piece of software behaves in an intelligent manner, then does it need to be accorded rights (and responsibilities)?
One of my favourite episodes of Star Trek: The Next Generation deals with exactly this issue. In “The Measure of a Man”, Lt. Commander Data (who is an android) is the subject of a judicial enquiry to decide whether he (it?) is the property of Starfleet or an individual.
In the end, the Judge Advocate General rules that Data is indeed an individual, reasoning that since she cannot prove he is not, a greater injustice would be done by ruling that he was property.
Perhaps this will be our best approach. If an AI demonstrates a near-human level of intelligence, then we will have to assume that it is deserving of legal protections.
However, it’s a thorny question and one which is bound to occupy biological minds for a long time to come.
I have been a huge science fiction fan for as long as I can remember, and a recurring theme in both science fiction literature and movies is the creation of artificial intelligence. However, the subject is becoming increasingly more science and less fiction.
One of the earliest references to a robot, or an automaton, is in The Iliad, written by Homer some time around 700 B.C. More recent examples include Isaac Asimov’s Three Laws of Robotics, written all the way back in 1942; Arthur C. Clarke’s AI gone rogue in 2001: A Space Odyssey; and the persecution of androids in Philip K. Dick’s Do Androids Dream of Electric Sheep? (better known as the movie Blade Runner).