Call for regulation of Artificial Intelligence is premature

In the video below, as reported by the Telegraph newspaper in the United Kingdom, Astronomer Royal Martin Rees says that artificial intelligence needs to be regulated.

[Click here to see the video. It’s hosted on Ooyala and I can’t get it to embed properly on my blog. If you know how, please let me know!]

The basis of his argument is that computers have rapidly progressed in speed, beating the world chess champion Garry Kasparov along the way. As such, they may in future be able to learn and to interact with us as a human would.

If they get that far, then we should be concerned about keeping them under our control. He notes that some people working in AI are already asking whether it should be regulated in a similar way to biotechnology.

There are a few points worth bearing in mind about this call to regulate.

  1. The examples he mentions are all forms of “narrow” Artificial Intelligence (or ANI). Today this is where most AI research is: narrowly focused on specific problem domains. Such AIs are tailored to those problems and don’t work on other problems.
  2. There is a large leap from ANI to AGI (Artificial General Intelligence), which you or I would recognise from its portrayal in numerous films (see The Terminator). Research has not made any significant inroads into creating anything approaching an AGI.
  3. Calls for regulating AGIs are definitely premature. We might just as well call for regulation of mining on Jupiter’s moon Europa; that is how far away from AGI we are now.

There is one important step that has been overlooked. ANIs will make a huge impact on society, carrying out specific tasks and jobs that today are carried out by humans. It is this that needs to be examined in detail by a wide range of people, including economists, sociologists, and policy makers.

The Astronomer Royal asks: ‘Can we keep robots under control?’ – Telegraph.


scikit-learn Machine Learning Algorithm Cheat Sheet

Nice and handy set of cheat sheets for your Machine Learning projects.



scikit-learn is a Machine Learning library in Python. This flow chart shows how to choose an appropriate algorithm for your machine learning problem.
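
As a minimal sketch of the kind of workflow the flow chart points you towards, here is roughly what it might look like in practice. This assumes scikit-learn is installed; the iris data set and the LinearSVC estimator are purely illustrative choices.

```python
# A minimal, illustrative scikit-learn workflow: labelled data with fewer
# than 100k samples, so the cheat sheet points towards a linear SVM.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LinearSVC()                          # estimator chosen by following the chart
clf.fit(X_train, y_train)                  # learn from the labelled training data
print(accuracy_score(y_test, clf.predict(X_test)))
```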

Also see

Microsoft Azure Machine Learning Algorithm Cheat Sheet


Dlib C++ Library Machine Learning Algorithm Cheat Sheet



Common sense AI just a decade away?

"Her" starring Joaquin Phoenix
Joaquin Phoenix and his virtual girlfriend in the film Her. Photograph: Allstar/Warner Bros/Sportsphoto Ltd.

Computers will have developed “common sense” within a decade and we could be counting them among our friends not long afterwards, one of the world’s leading AI scientists has predicted.

As someone with a long-standing passion for Artificial Intelligence, I get very nervous when the popular press (in this case, The Guardian newspaper) presents this kind of prediction.

Early AI pioneers made lots of predictions about the possibility of human-like AI being widespread by the turn of the century. Sadly, there were many disappointments along the way, which caused a slow-down in progress and a drying up of funding.

So much so that even the name “Artificial Intelligence” was discredited, and researchers started using other terms, such as Machine Learning, Intelligent Agents and Computational Intelligence, to distance themselves from it.

Today it looks as though significant progress has been made in specific domains, using Artificial Narrow Intelligence. But impressive examples such as Google’s autonomous car cannot be relied on as an indication of future progress. It’s not at all clear how such machine learning techniques can be developed to have broader application, to create an Artificial General Intelligence.

We must be careful that AI doesn’t over-promise and under-deliver, which could lead to another Artificial Intelligence Winter.

via Google a step closer to developing machines with human-like intelligence | Science | The Guardian.

Data Mining Algorithms Explained In Plain Language

Here’s a really great resource: Raymond Li from Microsoft has written an explanation of the top 10 data mining algorithms in plain language. These algorithms are used a lot in machine learning too.

So if you are confused about Naive Bayes or Support Vector Machines, then take a look at Ray’s easy-to-understand explanations.
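
If you want to see how two of those algorithms look in code, here is a small sketch using scikit-learn. The breast cancer data set is just a convenient stand-in; the point is only that Naive Bayes and a Support Vector Machine are used in much the same way once the data is in place.

```python
# Compare a Naive Bayes classifier with a Support Vector Machine on the
# same (illustrative) data set using 5-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

for model in (GaussianNB(), SVC()):
    scores = cross_val_score(model, X, y, cv=5)    # 5-fold cross-validation
    print(type(model).__name__, round(scores.mean(), 3))
```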

Top 10 data mining algorithms in plain English | rayli.net.

Do Computers Think?

“Cogito, ergo sum (I am thinking, therefore I exist)”, as René Descartes famously proposed in 1637, has spawned almost four hundred years of philosophical debate on the nature of thinking and existing.

In this argument, he proposed that the only thing he could be absolutely certain about was that he exists, since he is here to think about existence. This is quite an abstract thought process, so when we see the quote below, it naturally suggests that computers are capable of similar philosophical musings.

Computers are learning to think, read, and write, says Bloomberg Beta investor Shivon Zilis.

via How Machine Learning Is Eating the Software World.

There have been many debates on the nature of thinking, especially over the last 50 years or so of the computer age. Perhaps the most famous is the Chinese Room thought experiment proposed by the philosopher John Searle. In it, he imagines that he is locked in a room. He does not speak or read any Chinese, and has only a set of instructions that tell him which Chinese symbols to respond with when he receives a given set of Chinese symbols.

From outside the room, if we pass in a correct Chinese sentence or question to John, we will receive a correct response, even though John doesn’t speak Chinese and the instruction book certainly doesn’t either. We are led to deduce (erroneously) that there is a Chinese speaker in the room. (You can find out about his argument and some of the counter-arguments here on Wikipedia.)
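
To make the analogy concrete, here is a toy sketch of the room as a program. The “rule book” is just a lookup table, and the phrases in it are invented for illustration; the point is that following the rules requires no understanding of what the symbols mean.

```python
# A toy "Chinese Room": symbols in, symbols out, by rule-following alone.
# The phrases are invented for illustration; no understanding is involved.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气好吗？": "今天天气很好。",     # "Is the weather good today?" -> "It's fine today."
}

def chinese_room(symbols: str) -> str:
    # Look the incoming symbols up in the instruction book and hand back
    # whatever the book says; meaning never enters into it.
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))
```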

Some of the achievements in Machine Learning are indeed impressive. But all such algorithms are the same as the Chinese Room. There is an actor (in this case the computer) carrying out a set of pre-defined instructions. Those instructions are complex and often achieve surprising results. But at no point can we safely deduce that the computer is actually thinking. We may say that today, computers do not think in the way that humans do and that the above quote is a bit of an exaggeration.

As the volume of articles written about Machine Learning and Artificial Intelligence grows, we have to be careful not to unduly overstate the capabilities of these algorithms. We need to avoid the mistakes of the past where much was promised for AI and little delivered, to preserve the interest and funding for research.

Impressive? Yes. Interesting? Definitely. Thinking? No.

The AI Threat: It’s All About Jobs

Google Self Driving Car – Disrupting Transportation and Logistics Industries (Photograph: Justin Sullivan/Getty Images)

This article in the Guardian Newspaper by John Naughton is a lot less sensationalistic than some recent existential scaremongering. Nevertheless, the underlying argument is just as threatening:

but there’s little doubt that the main thrust of the research is accurate: lots of non-routine, cognitive, white-collar as well as blue-collar jobs are going to be eliminated in the next two decades and we need to be planning for that contingency now.

However, the author is not optimistic about that planning taking place, for two reasons: one related to our political short-termism, which ignores anything with a horizon longer than the next general election, and the other related to our innate incapacity for dealing with change.

There are other reasons too. The popular concept of AI is still rooted in science fiction (perhaps due to Hollywood movies). This means that any discussion about its impact on day-to-day life may be met with slight incredulity. Only when AI is everywhere will it dawn on people that planning is needed.

Another reason for not focusing on planning for change is simple economics. If a corporation can replace a worker with an Artificial Intelligence that can operate around the clock, then it is compelled to do so to grow its profits.

In any event, commerce will ensure that the current pace of development of AI and Machine Learning will continue and accelerate. The time left for planning is short.

We are ignoring the new machine age at our peril | Comment is free | The Guardian.

Ambient Intelligence

This is an interesting idea: Ambient Intelligence.

“an ever-present digital fog in tune with our behavior and physiological state”

As AI slowly improves in particular domains, we will see the techniques and algorithms incorporated into previously static control systems.

For example, your household heating system could learn your preferences and adjust the temperature in the most efficient way. Other appliances will have some form of learning built in too.
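
As a sketch of what that kind of narrow learning might amount to (entirely hypothetical, and far simpler than any real smart thermostat), imagine a controller that just remembers the temperatures you set at each hour of the day and uses the average as its set point:

```python
# A hypothetical, deliberately simple "learning" thermostat: it averages
# the temperatures the user has chosen at each hour and uses that average
# as the target. Real systems are far more sophisticated.
from collections import defaultdict

class LearningThermostat:
    def __init__(self, default: float = 20.0):
        self.default = default
        self.history = defaultdict(list)   # hour of day -> temperatures set by the user

    def record_adjustment(self, hour: int, temperature: float) -> None:
        """Remember what the user set the temperature to at this hour."""
        self.history[hour].append(temperature)

    def set_point(self, hour: int) -> float:
        """Return the learned preference for this hour, or a default."""
        past = self.history[hour]
        return sum(past) / len(past) if past else self.default

stat = LearningThermostat()
stat.record_adjustment(7, 21.5)
stat.record_adjustment(7, 22.0)
print(stat.set_point(7))   # 21.75
```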

So what we will see will be the widespread adoption of Artificial Narrow Intelligence. There won’t be one Artificial General Intelligence in your house, just lots of narrow ones.

The problem I see with this is the lack of serendipity. What a bland world it would be if we were surrounded by devices which made the world just perfect for us with no surprises. How would we ever be enticed to step outside our comfort zones? We would replicate our “echo chamber” online experience in the real world.

Here’s the full article by Neil Howe (@lifecourse):

Artificial Intelligence Paves The Way For Ambient Intelligence – Forbes.

Clash of the Titans! (R vs Python)

This is a good comparison of Python and R if you are deciding which to use in your data analytics projects.


This is for everyone out there wondering which language is better to learn for data analysis and visualization: whether to use R or Python for everyday data analysis tasks.

Both Python and R are amongst the most widely used languages for data analysis, and each has its supporters and opponents. While Python is much praised for being a general-purpose language with an easy-to-understand syntax, R’s functionality was developed with statisticians in mind, giving it field-specific advantages such as extensive features for data visualization.

DataCamp has recently released a new infographic for everyone interested in how these two (statistical) programming languages relate to each other. This superb infographic explores the strengths of R over Python and vice versa, and aims to provide a basic comparison between the two languages from a data science and statistics perspective.

R vs Python for data science

Note: Not to ignore the…

View original post 54 more words

The Machine Vision Algorithm Beating Art Historians at Their Own Game

This is an interesting application of image processing. The machine learning algorithms are trained on a subset of a data set of more than 80,000 paintings. The resulting feature set has over 400 dimensions.
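
The researchers’ own pipeline is more involved, but the general shape of the approach can be sketched like this: represent each painting as a feature vector and train a classifier to predict the artist. The random features and labels below are placeholders, not the authors’ data or method.

```python
# A rough sketch of the general approach: feature vectors in, artist labels out.
# The features and labels here are random placeholders, so the accuracy
# printed at the end will be around chance level.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_paintings, n_features, n_artists = 1000, 400, 20     # ~400-dimensional features
X = rng.normal(size=(n_paintings, n_features))         # stand-in feature vectors
y = rng.integers(0, n_artists, size=n_paintings)       # stand-in artist labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier().fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```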

When presented with a painting it has not seen before, it correctly guessed the artist more than 60% of the time. It has also detected additional links between different styles and periods:

It links expressionism and fauvism, which might be expected given that the latter movement is often thought of as a type of expressionism. It links the mannerist and Renaissance styles, which clearly reflects the fact that mannerism is a form of early Renaissance painting.

However, it also apparently confuses certain styles:

… it often confuses examples of abstract expressionism and action paintings, in which artists drip or fling paint and step on the canvas. Saleh and Elgammal [the creators of the ML algorithms] … say that this kind of mix-up would be entirely understandable for a human viewer. “’Action painting’ is a type or subgenre of “abstract expressionism,’” they point out.

Of course, this could also mean that the machine is correct and different “genres” of abstract paintings are completely arbitrary. But what it does highlight is that machine learning has a way to go before it can start offering subjective opinions.

via The Machine Vision Algorithm Beating Art Historians at Their Own Game | MIT Technology Review.