ELIZA was an early artificial intelligence program that simulated a psychotherapist. The user could interact with it by typing responses to the questions it posed.
Although quite simple, it did manage to fool some people into thinking it was a real person, an informal Turing Test of sorts. It's what first interested me in artificial intelligence. This is a much more modern approach.
I wonder how many people will be fooled this time?
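For readers curious how a program that simple could fool anyone: ELIZA worked by matching keywords and patterns in the user's input and reflecting them back as questions. The rules and phrasing below are illustrative only, not ELIZA's actual DOCTOR script, but they sketch the technique:

```python
import re

# Illustrative ELIZA-style rules: match a pattern in the user's input
# and turn the captured fragment into a follow-up question.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

# Simple pronoun reflection so "my job" comes back as "your job".
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."  # fallback when no rule matches

print(respond("I need a holiday"))  # → Why do you need a holiday?
```

There is no understanding anywhere in this loop; the illusion of a listener comes entirely from the user reading meaning into the reflected questions.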
Agreed that it’s possible the AI would use the search engine as memory or repository. That would probably make the AI itself the interface that people would use. That’s fascinating and would have a major impact on the way we use search engines.
I wonder how that would affect the results returned and the monetisation via advertising. Would the AI also be responsible for targeting ads?
There has been a flurry of comment recently related to the supposed growing menace of Artificial Intelligence.
Stephen Hawking, Elon Musk, Bill Gates and most recently, Steve Wozniak have all sounded warnings about computers taking over and artificial intelligences plotting to get rid of us.
What’s caused the sudden volume of opinion on this? Perhaps it is the apparent success of highly visible projects like Google’s self-driving cars, which promise lower-risk transport. After all, if an artificial intelligence is capable of driving itself around, how long before it can do other things that traditionally need human-level intelligence?
Viewpoints like these are good for generating lots of comment, but the reality is somewhat more mundane. The AI in Google’s car isn’t a single entity but is rather a collection of different techniques and algorithms. As such, it doesn’t behave like a general intelligence, but is specific to that domain. This has been the case for as long as researchers have been looking at AI: it’s brittle.
Nevertheless, AI techniques are indeed spreading into new spheres of life all the time, so the societal impact will be profound. But can we please discuss this without resorting to fear and hyperbole?