Artificial Intelligence to replace staff at O2

AI will eliminate jobs. Will new ones be created in time?

Studies have previously claimed that Artificial Intelligence is likely to replace jobs in the medium term.

If anything, those estimates of the timeframe may have been optimistic: the replacements appear to be starting already.

Earlier this year, Fukoku Mutual Life Insurance announced that it would be replacing 34 employees, whose jobs involved calculating payouts to policyholders.

This week, the mobile phone company O2 announced that it would be launching a voice recognition AI next year that would be able to do the same job as customer service staff.

At Mobile World Congress in Barcelona, O2’s parent Telefonica presented the system. The Independent’s coverage predicts:

It’s expected to launch in the UK next year, and will enable the company to cut customer service costs.

Cutting customer service costs can only mean a reduction in jobs. Coupled with its plan to sell customer data, it looks like the AI system will help Telefonica turn what is currently a business cost into a revenue stream, all with fewer employees to worry about.

Source: Artificial intelligence is set to handle O2 customer services from 2017 | The Independent

AI ‘judge’ doesn’t explain why it reaches certain decisions

The Guardian reports on a recent paper by University College London researchers who are using artificial intelligence to predict the outcome of trials at the European Court of Human Rights.

Their approach employs natural language processing (NLP) to build a machine learning model, using the text records from previous trials. As such, it demonstrates the power of modern NLP techniques. Given enough relevant text in a particular area, NLP can discover complex underlying patterns. These patterns are then used to predict an outcome using new case texts.
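As a rough, hypothetical sketch of the idea (not the UCL team’s actual model, which their paper describes in full), a bag-of-words classifier over case texts might look like this: count the words in past judgments for each outcome, then label a new case by which outcome’s vocabulary it overlaps with most. The case texts and labels below are invented for illustration.

```python
from collections import Counter

def bag_of_words(text):
    """Tokenise a document into a word-count vector."""
    return Counter(text.lower().split())

def train_centroids(examples):
    """Build one aggregate word-count 'centroid' per outcome label."""
    centroids = {}
    for text, label in examples:
        centroids.setdefault(label, Counter()).update(bag_of_words(text))
    return centroids

def predict(centroids, text):
    """Score a new document by word overlap with each label's centroid."""
    words = bag_of_words(text)
    def score(label):
        return sum(count * centroids[label][w] for w, count in words.items())
    return max(centroids, key=score)

# Hypothetical training records: past case texts with their outcomes.
cases = [
    ("applicant alleged torture and inhuman treatment", "violation"),
    ("detention conditions amounted to degrading treatment", "violation"),
    ("complaint was manifestly ill-founded and dismissed", "no-violation"),
    ("court found the application inadmissible and dismissed it", "no-violation"),
]
model = train_centroids(cases)
print(predict(model, "the applicant suffered degrading treatment in detention"))
```

The real system uses far more sophisticated features, but the shape is the same: patterns learned from old texts are applied to score new ones.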

However, the biggest obstacle to it being used in courts is that it is totally unable to explain why it has made the prediction it has. This problem plagues many machine learning implementations. The underlying mathematics of machine learning models is understood, but not well enough to say for certain why a given input determines a given output. A human judge, by contrast, can explain their reasoning.

So for the moment, an AI won’t be determining if you really are innocent or guilty in a court of law.

Source: Artificial intelligence ‘judge’ developed by UCL computer scientists | Technology | The Guardian


In Praise Of Reinventing The Wheel

The original Ferris Wheel at the 1893 World Columbian Exposition in Chicago (Source: Public Domain via Wikipedia)


There is a strong principle in software engineering of reusing code wherever possible. It’s considered so important that it’s got its own TLA: DRY (“Don’t Repeat Yourself”). In other words, don’t reinvent the wheel.

There are obvious benefits to this. Nobody wants to type the same piece of code over and over. So write it once as a function and reuse it wherever you need to, freeing yourself up to build other stuff. Better still, reuse someone else’s functions.
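The benefit is easy to sketch in a few lines (a generic illustration, not from any particular codebase): the same formatting logic, typed out three times, collapses into one reusable function.

```python
# Without DRY, the same formatting logic gets typed again and again:
#   print("Total: " + "{:.2f}".format(1234.5) + " GBP")
#   print("Tax: " + "{:.2f}".format(99.999) + " EUR") ...

def format_amount(label, value, currency="GBP"):
    """Write the formatting logic once and reuse it everywhere."""
    return "{}: {:.2f} {}".format(label, value, currency)

print(format_amount("Total", 1234.5))      # Total: 1234.50 GBP
print(format_amount("Discount", 12.5))     # Discount: 12.50 GBP
print(format_amount("Tax", 99.999, "EUR")) # Tax: 100.00 EUR
```

Change the format once, and every caller picks it up.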

Indeed, the ready availability of open-source libraries and frameworks for almost every popular requirement under the sun means that you almost never have to develop your own low-level software. Thank goodness. Software development would get really tedious if you had to start out writing basic I/O routines with each new project.

Even new developers can get sophisticated projects up and running fairly quickly. GitHub and Stack Overflow have seen to that.

The principle is taken to an extreme by some web developers, who include the entire jQuery library on their webpage in order to implement some simple form validation. I’m all in favour of simplifying JavaScript development, second only to abolishing it altogether. But including the whole library is a tad overkill.

And this highlights a problem with treating DRY as dogma.

When we rely on existing libraries and frameworks, we don’t get a good idea of what’s really happening within our code base. Yes, we can read the library code to get the gist of what’s going on. But we never really do.

There’s a really good reason to try to implement your own routines, if only in your test projects. You’ll get an appreciation of the lower level algorithms that your applications depend on. You may even discover how difficult it is to get some of these algorithms to run quickly or to use a small amount of memory.

This will lead you to the core principles of algorithmic analysis that are fundamental to computer science. Along the way, you’ll pick up a really good sense of what works well and what doesn’t, what’s fast and what’s slow, what’s memory efficient and what’s a complete hog. Hopefully, you can reflect those lessons in your own production application code.
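For instance, here’s a hand-rolled binary search (a sketch of the exercise, checked against Python’s standard `bisect` module as the reference implementation). Writing it yourself quickly teaches you about off-by-one edge cases and why O(log n) lookup matters.

```python
import bisect

def my_binary_search(items, target):
    """Hand-rolled binary search: O(log n) lookups on a sorted list.
    Returns the index of target, or -1 if it is not present."""
    lo, hi = 0, len(items)
    while lo < hi:
        mid = (lo + hi) // 2
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(items) and items[lo] == target else -1

data = list(range(0, 1000, 2))  # sorted even numbers 0..998

# Check the hand-rolled version against the standard library.
for target in (0, 500, 998, 3):
    idx = bisect.bisect_left(data, target)
    std = idx if idx < len(data) and data[idx] == target else -1
    assert my_binary_search(data, target) == std
```

Once your version agrees with the library’s on the edge cases, you understand the algorithm in a way no amount of reading can give you.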

So the next time you have a chance, try to implement your own I/O routines or other low level algorithms and see how you compare to other published libraries.

Remember, DRY is a principle, not a dogma.

Sometimes there is merit in repeating yourself (or someone else).


Lethal Autonomous Weapon Systems are on the way

Long Range Anti-Ship Missile (LRASM)

Chinatopix reports that a new missile with on-board Artificial Intelligence will be deployed by both the U.S. Navy and U.S. Air Force by 2018. The AI will be able to pick out the correct ship to target within a fleet. In addition, the article states that multiple LRASMs can share information and attack as a swarm.

While not completely autonomous, this nevertheless represents a serious step towards ceding control of ordnance to a machine. Given the current poor understanding of how a lot of machine learning actually works, this is a dangerous step.

Recently, the United Nations debated such Lethal Autonomous Weapon Systems (LAWS), with many countries pushing for an outright ban. With AI-based missiles in development, the UN and the international community will have to speed up their deliberations in order to prevent such weapons ever being deployed.

Source: LRASM, First US Missile with Artificial Intelligence, to be Deployed by 2018 : Science : Chinatopix

Using Xcode with Github

You’ve found a nice open-source project you want to play with on GitHub. You’ve cloned it to your own repository and use Xcode 7 as your development environment. How do you make Xcode and GitHub play nicely with each other?

Turns out that Xcode has some nice features built in so that you can work directly with your GitHub-based code. To get started, open up the Preferences pane under the Xcode menu. Select the “Source Control” tab:

Source Control Preferences

Make sure that the “Enable Source Control” option is checked. Then select the Accounts tab:

Github Accounts

Click on the “+” at the bottom of the pane on the left. Select “Add Repository”. The following pane has several fields that you need to fill in.

  • Address: This is the URL of the repository. You can get this by clicking on the green “Clone or Download” button on the GitHub website.
  • Type: Choose “Git”
  • Authentication: Choose “User Name and Password”
  • User Name: Enter your GitHub user name
  • Password: Enter your GitHub password

I’ve added Google’s Protobuf and my own clone of TensorFlow in the example above.

Close the Preferences pane and select the “Source Control” menu.

Source Control Menu

This menu contains the controls you need to manage branches, commits and Pull Requests as necessary.

Xcode also lets you compare versions of code. In the top right of the main editor window, there is an icon with two arrows. Click on that and select “Comparison”. Xcode will show you the current version of your code against one in a different branch. You can choose which branch to compare against by clicking on the branch symbol below each editor pane.

Xcode Compare

I’ve barely scratched the surface here of using GitHub in Xcode 7. But it looks like it’s a straightforward way to play with the many open-source projects hosted there.



White House launches workshops to prepare for Artificial Intelligence

The White House. Photo: Zach Rudisin (Own work), CC BY-SA 3.0, via Wikimedia Commons

It looks like Artificial Intelligence has really gone mainstream, with the White House taking notice and starting to act.

Today, we’re announcing a new series of workshops and an interagency working group to learn more about the benefits and risks of artificial intelligence.

After the recent high-profile successes of AI, it was inevitable that politicians should take notice. AI will have a massive impact on society and now is the time to start looking at that impact.

Source: Preparing for the Future of Artificial Intelligence

Disruption to jobs the likely effect of Artificial Intelligence

Much of the coverage in mass media about artificial intelligence and machine learning tends towards an alarmist position about robots taking over the world. In most examples, there’ll be a picture of the Terminator robot and a reference to the Elon Musk / Stephen Hawking claim that we should be afraid of AI.

I’ve written before that this isn’t really the problem that we should be thinking about. Instead, it’s the simpler narrow AI that will steadily erode jobs by carrying out specific tasks that today require humans.

In an article by Jeff Goodell in Rolling Stone magazine, it appears that this point of view is finally reaching the mainstream press:

In fact, the problem with the hyperbole about killer robots is that it masks the real risks that we face from the rise of smart machines – job losses due to workers being replaced by robots, the escalation of autonomous weapons in warfare, and the simple fact that the more we depend on machines, the more we are at risk when something goes wrong, whether it’s from a technical glitch or a Chinese hacker.

via Inside the Artificial Intelligence Revolution: Pt. 1 | Rolling Stone

The last point is intriguing too. Machine Learning models are particularly opaque. Researchers understand how the algorithms learn. What’s not clear is why they are so effective in doing what they do.

But the main point is that progress won’t stop AI being deployed to do the work of people. The questions now are how will it be managed and how will society adapt?


2015 Underhanded C Contest Results Released

I’ve just found out that the results of the 2015 Underhanded C Contest have been published and (YIPPEE!!), I managed to bag a runner-up spot.

This year, the competition was sponsored by the Nuclear Threat Initiative. The goal of the contest was to fake the results of a test of a nuclear warhead to show that a warhead contained fissile material, when in fact it did not.

Congratulations to the winner, Linus Åkesson. You can read about his entry on his blog. It’s also worth reading about the other entrants’ ideas for how to hide malicious code inside normal looking code.

Here is the NTI’s article about the contest:

Lastly, if you’re curious about my entry, I’ve posted the code and an explanation on GitHub:


Solving XOR with a Neural Network in TensorFlow

The tradition of writing a trilogy in five parts has a long and noble history, pioneered by the great Douglas Adams in The Hitchhiker’s Guide to the Galaxy. This post is no exception, following on from the previous four posts looking at a Neural Network that solves the XOR problem.

This time, I had an interest in checking out Google’s machine learning system, TensorFlow.

Last November, Google open-sourced the machine learning library that it uses within its own products. There are two APIs: one for Python and one for C++. Naturally, it makes sense to see what TensorFlow would make of the same network that we previously looked at, and to compare the two Python-based implementations.

You can download the TensorFlow library and see some of the tutorials here.

It’s worth noting that TensorFlow requires a little mental agility to understand. Essentially, the first few lines of code set up the inputs, the network architecture, the cost function, and the method to use to train the network. Although these look like the same steps as in the Python and Octave versions, they don’t in fact compute anything. This is because TensorFlow treats them as a description of the model, and only executes them within a session. This model is in the form of a directed graph. Don’t worry if that’s not too clear yet.
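As a loose analogy (a toy sketch in plain Python, nothing to do with TensorFlow’s real internals), the deferred-execution model can be pictured like this: building an expression merely records a graph of operations, and no arithmetic happens until a “session” runs it.

```python
class Node:
    """A graph node: an operation name plus the nodes feeding into it."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def __add__(self, other):
        return Node("add", self, other)

    def __mul__(self, other):
        return Node("mul", self, other)

def const(value):
    """A leaf node holding a constant value."""
    return Node("const", value)

def run(node):
    """The 'session': only here does any arithmetic actually happen."""
    if node.op == "const":
        return node.inputs[0]
    args = [run(n) for n in node.inputs]
    return args[0] + args[1] if node.op == "add" else args[0] * args[1]

# Building the graph computes nothing yet...
graph = (const(2) + const(3)) * const(4)
# ...evaluation happens only when the graph is run.
print(run(graph))  # 20
```

TensorFlow works the same way in spirit: the Python statements below build the graph, and `` later evaluates it.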

TensorFlow Code

To start with, we need to load in the TensorFlow library:

import tensorflow as tf

The next step is to set up placeholders to hold the input data. TensorFlow will automatically fill them with the data when we run the network. In our XOR problem, we have four different training examples and each example has two features. There are also four expected outputs, each with just one value (either a 0 or 1). In TensorFlow, this looks like this:

x_ = tf.placeholder(tf.float32, shape=[4,2], name="x-input")
y_ = tf.placeholder(tf.float32, shape=[4,1], name="y-input")

I’ve set up the inputs as floating point numbers rather than the more natural integers, to avoid having to cast them to floating point when multiplying by the weights later on. The shape parameter tells the placeholder the dimensions of the data we’ll be passing in.

The next step is to set up the parameters for the network. These are called Variables in TensorFlow.  Variables will be modified by TensorFlow during the training steps.

Theta1 = tf.Variable(tf.random_uniform([2,2], -1, 1), name="Theta1")
Theta2 = tf.Variable(tf.random_uniform([2,1], -1, 1), name="Theta2")

For our Theta matrices, we want them initialized to random values between -1 and +1, so we use the built-in random_uniform function to do that.

In TensorFlow, we set up the bias nodes separately, but still as Variables. This lets the algorithm modify the values of the bias nodes. This is mathematically equivalent to having a signal value of 1 and initial weights of 0 on the links from the bias nodes.

Bias1 = tf.Variable(tf.zeros([2]), name="Bias1")
Bias2 = tf.Variable(tf.zeros([1]), name="Bias2")

Now we set up the model. This is pretty much the same as that outlined in the previous posts on Python and Octave:

A2 = tf.sigmoid(tf.matmul(x_, Theta1) + Bias1)
Hypothesis = tf.sigmoid(tf.matmul(A2, Theta2) + Bias2)

Here, matmul is TensorFlow’s matrix multiplication function, and sigmoid naturally is the sigmoid calculation function.

As before, our cost function is the average over all the training examples:

cost = tf.reduce_mean(( (y_ * tf.log(Hypothesis)) + 
        ((1 - y_) * tf.log(1.0 - Hypothesis)) ) * -1)

So far, that has been relatively straightforward. Let’s look at training the network.

TensorFlow ships with several different training algorithms, but for comparison purposes with our previous implementations, we’re going to use the gradient descent algorithm:

train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cost)

What this statement says is that we’re going to use GradientDescentOptimizer as our training algorithm, the learning rate (alpha from before) is going to be 0.01 and we want to minimize the cost function above. This means that we don’t have to implement our own algorithm as we did in the previous examples.
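Under the hood, gradient descent itself is just a simple update rule: on every step, move each parameter a little way against the gradient of the cost. Here is a minimal sketch in plain Python (illustrative only, not TensorFlow’s implementation), minimising a one-dimensional quadratic cost.

```python
def gradient_descent(grad, w, alpha=0.01, steps=1000):
    """Repeatedly move the parameter against the gradient of the cost."""
    for _ in range(steps):
        w = w - alpha * grad(w)
    return w

# Cost J(w) = (w - 3)^2 has gradient dJ/dw = 2*(w - 3) and minimum at w = 3.
w = gradient_descent(lambda w: 2 * (w - 3), w=0.0)
print(round(w, 4))  # converges towards 3.0
```

GradientDescentOptimizer applies the same rule, with alpha = 0.01, to every weight and bias in the network at once, using gradients it derives automatically from the graph.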

That’s all there is to setting up the network. Now we just have to go through a few initialization steps before running the examples through the network:

XOR_X = [[0,0],[0,1],[1,0],[1,1]]
XOR_Y = [[0],[1],[1],[0]]

init = tf.initialize_all_variables()
sess = tf.Session()

As I mentioned above, TensorFlow runs a model inside a session, which it uses to maintain the state of the variables as they are passed through the network we’ve set up. So the first step in that session is to initialise all the Variables from above. This step allocates values to the various Variables in accordance with how we set them up (i.e. random numbers for Theta and zeros for Bias).

The next step is to run some epochs:

for i in range(100000):
, feed_dict={x_: XOR_X, y_: XOR_Y})

Each time the training step is executed, the values in the dictionary feed_dict are loaded into the placeholders that we set up at the beginning. As the XOR problem is relatively simple, each epoch will contain the entire training set. To see what’s going on inside the loop, just print out the values of the Variables:

if i % 1000 == 0:
        print('Epoch ', i)
        print('Hypothesis ',, feed_dict={x_: XOR_X, y_: XOR_Y}))
        print('Theta1 ',
        print('Bias1 ',
        print('Theta2 ',
        print('Bias2 ',
        print('cost ',, feed_dict={x_: XOR_X, y_: XOR_Y}))

That’s it. If you run this in Python, you’ll get something that looks like this after 99000 epochs:


As you can see in the display for the Hypothesis variable, the network has learned to output nearly correct values for the inputs.


The TensorFlow Graph

I mentioned above that the model is in the form of a directed graph. TensorFlow lets us see what that graph looks like:


We can see that our inputs x-input and y-input are the starts of the graph, and that they flow through the processes at layer2 and layer3, ultimately being used in the cost function.

To see the graph yourself, TensorFlow includes a utility called TensorBoard. Inside your code, after creating the session, add the following line:

writer = tf.summary.FileWriter("./logs/xor_logs", sess.graph_def)

The folder in the quotes can point to a folder on your machine where you want to store the output. Running TensorBoard is then as simple as entering this at a command prompt:

$ tensorboard --logdir=/path/to/your/log/file/folder

In your browser, enter http://localhost:6006 as the URL and then click on Graph tab. You will then see the full graph.

So that’s an example neural network in TensorFlow.

There is one thing I did notice in putting this together: it’s quite slow. In Octave, I was able to run 10,000 epochs in about 9.5 seconds. The Python/NumPy example was able to run in 5.8 seconds. The above TensorFlow example runs in about 28 seconds on my laptop.

TensorFlow is however built to run on GPUs and the model can be split across multiple machines. This would go some way to improving performance, but there is a way to go to make it comparable with existing libraries.

UPDATE 26/01/2016:

I’ve placed the full code up on GitHub here:


UPDATE 07/03/2016:

I’ve changed the GitHub code to run a more realistic 100,000 epochs. Should converge now. [Thanks Konstantin!]