Previously on…

In part 1 we looked at using linear regression, with the aid of a spreadsheet, to see if we could predict, within a reasonable tolerance, what Darcey Bussell’s score would be based on Craig Revel Horwood’s.

No big deal, it worked quite well, it took less than five minutes and didn’t interfere with me making a cup of tea. As we concluded from a bit of data application:

y = 0.6769x + 3.031

And all was well.
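In fact the regression line from part 1 is trivial to express as code. A quick sketch of the spreadsheet result in Clojure (the predict-darcey name is mine, purely for illustration):

```clojure
;; The regression line from part 1: Darcey's predicted score
;; from Craig's score alone.
(defn predict-darcey [craig-score]
  (+ (* 0.6769 craig-score) 3.031))

(predict-darcey 7) ;; => roughly 7.77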

Time To Up The Ante

Linear regression is all well and good, but this is 2016, the year when every Northern Ireland company decides it’s going to do artificial intelligence and machine learning with hardly any data… So we’re going to upgrade the Darcey Coefficient and go all Techcrunch/Google/DeepMind on it: Darcey’s predictions are now going to come from an artificial neural network!



My sentiments exactly. Readers of previous posts (both of you) will know my excitement for neural networks isn’t exactly up there. They’re good, but I hold them with a small amount of skepticism. My reasons? Well, like I’ve said before…

One of the keys to understanding the artificial neural network is knowing that the application of the model implies you’re not exactly sure of the relationship between the input and output nodes. You might have a hunch, but you don’t know for sure. The simple fact of the matter is, if you did know this, then you’d be using another machine learning algorithm.

We’ve already got a rough idea of how this is going to pan out; the linear regression gave us a good clue. The amount of data we have isn’t huge either: the data set has 476 rows in it. So the error rate of a neural network might actually be larger than I’d like.

The fun, though, is in the trying. And in aid of reputation, ego and book sales, well, hey, it’s worth a look. So I’m going to use the Weka machine learning framework as it’s good, solid and it just works. The neural network can be used for predicting the score of any judge, and as Len’s leaving this is the perfect opportunity to give it a whirl. For the means of demonstration, though, I’ll use Darcey’s scores as it follows on from the previous post.

Preparing the Data

We have a CSV file, but I’ve pared this down so it’s just the scores of Craig, Darcey, Len and Bruno. Weka can import CSV files, but I prefer to craft the proper format that Weka likes, which is the ARFF file. It spells out the attributes, the output class we’re expecting to predict on and so on.

@relation strictlycd

@attribute craig numeric
@attribute len numeric
@attribute bruno numeric
@attribute darcey numeric

@data
7,7,7,7..... and so on

Preparing the Project

Let’s have a crack at this with Clojure. There’s a reference to the Weka framework in Clojars, so this in theory should be fairly easy to sort out. Using Leiningen to create a new project, let’s go:

$ lein new darceyneuralnetwork
Warning: refactor-nrepl requires org.clojure/clojure 1.7.0 or greater.
Warning: refactor-nrepl middleware won't be activated due to missing dependencies.
Generating a project called darceyneuralnetwork based on the 'default' template.
The default template is intended for library projects, not applications.
To see other templates (app, plugin, etc), try `lein help new`.

Copy the ARFF file into the resources folder, or somewhere on the file system where you can find it easily, and then I think we’re ready to rock and rhumba.

I’m going to open the project.clj file and add the Weka dependency in. I’m also going to add the clj-ml project too; this is a handy Clojure wrapper for Weka. It doesn’t cover everything, but it takes the pain out of some things like loading instances and so on.

(defproject darceyneuralnetwork "0.1.0-SNAPSHOT"
 :description "FIXME: write description"
 :url ""
 :license {:name "Eclipse Public License"
           :url ""}
 :dependencies [[org.clojure/clojure "1.8.0"]
                [clj-ml "0.0.3-SNAPSHOT"]
                [weka "3.6.2"]])

Training the Neural Network

In the core.clj file I’m going to start putting together the actual code for the neural network (no web APIs here!).

Now, a quick think about what we actually need to do. It’s pretty simple with Weka in control of things, but a checklist is helpful all the same.

  • Open the arff training file.
  • Create instances from the training file.
  • Set the class index of the training data, i.e. what we are looking to predict; in this case it’s Darcey’s score.
  • Define a multilayer perceptron and set its options.
  • Build the classifier with training data.

The nice thing about using Weka with Clojure is we can do REPL-driven design and do things one line at a time.

The wrapper library has a load-instances function which takes the file location as a URL.

darceyneuralnetwork.core> (wio/load-instances :arff "file:///Users/jasonbell/work/dataissexy/darceyneuralnetwork/resources/strictlydata.arff")
#object[weka.core.Instances 0x2a5c3f7 "@relation strictlycd\n\n@attribute craig numeric\n@attribute len numeric\n@attribute bruno numeric\n@attribute darcey numeric\n\n@data\n2,5,5,5\n5,6,4,5\n3,5,4,4\n4,6,6,7\n6,6,7,6\n7,7,7,7\n6,7,7,6\n3,5,4,5\n5,6,5,5\n8,7,7,8\n5,7,5,5\n3,5,5,5\n6,6,7,8\n4,4,5,5\n6,6,6,6\n7,7,7,6\n6,6,6,6\n3,5,5,6\n6,7,7,6\n2,4,4,5\n7,8,8,7\n8,8,8,8\n5,5,5,5\n6,5,5,6\n7,6,7,6\n3,5,5,5\n7,6,6,6\n5,6,6,6\n4,7,6,5\n3,6,4,6\n3,5,4,6\n4,5,4,4\n7,7,8,7\n8,8,8,8\n7,6,6,7\n6,6,6,7\n7

Okay, with that working I’m going to add it to my code.

(defn load-instance-data [file-url]
 (wio/load-instances :arff file-url))

Note the load-instances function expects a URL, so make sure your filename begins with “file:///”, otherwise it will throw an exception.
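If you’d rather not hand-write the “file:///” prefix, clojure.java.io can build the URL for you. A small sketch, assuming the ARFF file sits in the project’s resources folder:

```clojure
(require '[clojure.java.io :as io])

;; Coerce a relative file path to a proper file:// URL string,
;; which is the form load-instances expects.
(def training-data
  (str (io/as-url (io/file "resources/strictlydata.arff"))))
```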

With the training instances dealt with in one line of code (gotta love Clojure, it would take three in Java) we can now look at the classifier itself. The decision is to use a neural network, in this instance a multilayer perceptron. In Java it’s a doddle; in Clojure even more so:

darceyneuralnetwork.core> (classifier/make-classifier :neural-network :multilayer-perceptron)
#object[weka.classifiers.functions.MultilayerPerceptron 0x77bf80a0 ""]

It doesn’t actually do anything yet, but there’s a classifier ready and waiting. We have to define which class (Craig, Len, Bruno or Darcey) we wish to classify; the attribute indexes are zero based, so Darcey is number 3. Weka needs to know what you are trying to classify, otherwise it will throw an exception.

(data/dataset-set-class instances 3)

Now we can train the model.

darceyneuralnetwork.core> (classifier/classifier-train ann ds)
#object[weka.classifiers.functions.MultilayerPerceptron 0x1b800b32 "Linear Node 0\n Inputs Weights\n Threshold -0.005224665277369991\n Node 1 1.161165780729305\n Node 2 -1.0681086084010063\nSigmoid Node 1\n Inputs Weights\n Threshold -2.5314445242321613\n Attrib craig 1.3343684436571155\n Attrib len 1.290973083908637\n Attrib bruno 1.1941270206738404\nSigmoid Node 2\n Inputs Weights\n Threshold -1.508477761092395\n Attrib craig -0.73817374973773\n Attrib len -0.7490868020959697\n Attrib bruno -1.3714589018840246\nClass \n Input\n Node 0\n"]

The output is showing the input node weights. All looks good. We have a neural network that can predict Darcey’s score based on the other three judges scores.
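Given the worry earlier about error rates on 476 rows, it’s worth noting clj-ml can cross-validate the classifier too. A sketch, assuming the results map surfaces Weka’s usual error measures for a numeric class:

```clojure
;; 10-fold cross-validation over the same instances; with a numeric
;; class Weka reports error measures rather than accuracy.
(def results
  (classifier/classifier-evaluate ann :cross-validation ds 10))

(:mean-absolute-error results)
```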

Remember this is all within the REPL, back to my code now and I can craft a function to train a neural network.

(defn train-neural-net [training-data-filename]
 (let [instances (load-instance-data training-data-filename)
       neuralnet (classifier/make-classifier :neural-network :multilayer-perceptron)]
   (data/dataset-set-class instances 3)
   (classifier/classifier-train neuralnet instances)))

All it does is create the steps I did in the REPL: load the instances, create a classifier, select the class to classify and then train the neural network.

Giving it a dry run, we run it like so from the REPL.

darceyneuralnetwork.core> (def training-data "file:///Users/jasonbell/work/dataissexy/darceyneuralnetwork/resources/strictlydata.arff")
darceyneuralnetwork.core> (def nnet (train-neural-net training-data))
darceyneuralnetwork.core> nnet
#object[weka.classifiers.functions.MultilayerPerceptron 0x100e60b7 "Linear Node 0\n Inputs Weights\n Threshold -0.005224665277369991\n Node 1 1.161165780729305\n Node 2 -1.0681086084010063\nSigmoid Node 1\n Inputs Weights\n Threshold -2.5314445242321613\n Attrib craig 1.3343684436571155\n Attrib len 1.290973083908637\n Attrib bruno 1.1941270206738404\nSigmoid Node 2\n Inputs Weights\n Threshold -1.508477761092395\n Attrib craig -0.73817374973773\n Attrib len -0.7490868020959697\n Attrib bruno -1.3714589018840246\nClass \n Input\n Node 0\n"]

All good then. So we’ve crafted a piece of code pretty quickly to train a neural network. I’d like to save the model so I don’t have to go through the pain of training it every time I want to use it. The clj-ml utils namespace has a function to serialize the model to a file.

(utils/serialize-to-file nnet output-filename)
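And on the flip side, loading the trained model back later is one line; a sketch, assuming clj-ml’s matching deserialize counterpart in the utils namespace:

```clojure
;; Restore the trained network from disk instead of retraining it.
(def nnet (utils/deserialize-from-file output-filename))
```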

The nice thing with Weka is the process is the same for most of the different machine learning types.

  • Load the instances
  • Create a classifier
  • Set the output class
  • Train the model
  • Save the model
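To show how little changes between algorithms, here’s the same five steps with a decision tree swapped in for the neural network. A sketch, assuming clj-ml’s :decision-tree/:c45 keyword pair for Weka’s J48 classifier:

```clojure
;; Same recipe, different classifier: a C4.5 (J48) decision tree.
(defn train-decision-tree [training-data-filename output-filename]
  (let [instances (load-instance-data training-data-filename)
        tree      (classifier/make-classifier :decision-tree :c45)]
    (data/dataset-set-class instances 3)
    (classifier/classifier-train tree instances)
    (utils/serialize-to-file tree output-filename)))
```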

So let’s park that there: we have built a neural network. Time to move on to predicting some scores. If you want to have a look at the code, I’ve put it up on Github.

Predicting with the Neural Network

With our model (rough, ready and in need of refinement) we can do some predicting. It’s just a case of creating a new instance based on the training instances and running it against the neural network to get a score.

The make-instance function will take a defined instance type and apply data from a vector to create a new instance. Then it’s a case of running that against the model with the classifier-classify function.

darceyneuralnetwork.core> (def to-classify (data/make-instance instances [8 8 8 0]))
darceyneuralnetwork.core> (classifier/classifier-classify nnet to-classify)

So we have a score; if we rounded it up we’d get an 8, which is about right. And if Craig were to throw a humdinger of a low score in, the model still performs well under the circumstances.

darceyneuralnetwork.core> (def to-classify (data/make-instance instances [3 8 8 0]))
darceyneuralnetwork.core> (classifier/classifier-classify nnet to-classify)
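Wrapping those two REPL lines into a helper keeps the scoring tidy. A sketch (predict-darcey-score is my name for it, and the trailing 0 is just a placeholder for the class attribute being predicted):

```clojure
;; Build an instance from the three judges' scores and round the
;; network's output to the nearest whole mark.
(defn predict-darcey-score [model instances craig len bruno]
  (let [to-classify (data/make-instance instances [craig len bruno 0])]
    (Math/round (classifier/classifier-classify model to-classify))))
```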

Let’s remember this model is created with defaults; it’s far from perfect, but with the amount of data we have it’s not a bad effort. There’s more we can do, but I can hear the kettle.

In Part 3…

Yes, there’s a part 3. We’ll take this model and have a go at some tweaking and refining to make the predictions even better.