A great question for the philosophers: if a machine, or even a piece of software, behaves in an intelligent manner, should it be accorded rights (and responsibilities)?
One of my favourite episodes of Star Trek: The Next Generation deals with exactly this issue. In “The Measure of a Man”, Lt. Commander Data (who is an android) is the subject of a judicial hearing to decide whether he (it?) is the property of Starfleet or an individual.
In the end, the Judge Advocate General rules that Data is indeed an individual, reasoning that since she cannot prove he is not one, a far greater injustice would be done by ruling that he is property.
Perhaps this will be our best approach. If an AI demonstrates a near-human level of intelligence, we will have to assume that it deserves legal protections.
However, it’s a thorny question and one which is bound to occupy biological minds for a long time to come.