Functional programming for deep learning

Author: Joyce Xu

Before I started my most recent job at ThinkTopic, the concepts of “functional programming” and “machine learning” belonged to two different worlds entirely. One was a programming paradigm surging in popularity as the world turned towards simplicity, composability, and immutability to maintain complex scaling applications; the other was a tool to teach computers to autocomplete doodles and make music. Where was the overlap?

The more I worked with the two, the more I began realizing that the overlap is both practical and theoretical. Firstly, machine learning is not a stand-alone endeavor; it needs to be rapidly incorporated into complex scaling applications in industry. Secondly, machine learning — and deep learning in particular — is functional by design. Given the right ecosystem, there are several compelling reasons to perform deep learning in an entirely functional manner:

  • Deep learning models are compositional. Functional programming is all about composing chains of higher-order functions to operate over simple data structures. Neural nets are designed the same way, chaining together function transformations from one layer to the next to operate over a simple matrix of input data. In fact, the entire process of deep learning can be viewed as optimizing a set of composed functions, meaning the models themselves are intrinsically functional.
  • Deep learning components are immutable. When functions operate over the input data, the data is not changed; a new set of values is output and passed on. Furthermore, when weights are updated, they do not need to be “mutated” — they can just be replaced by a new value. In theory, the updates to the weights can be applied in any order (i.e. they are not dependent on one another), so there is no need to keep track of a sequential, mutable state.
  • Functional programming offers easy parallelism. Most importantly, functions that are pure and composable are easy to parallelize. Parallelism means more speed and more compute power. Functional programming gives us concurrency and parallelism at essentially no cost, making it much easier to work with large, distributed models in deep learning.

There are many theories and perspectives regarding the combination of functional programming and deep learning, from mathematical arguments to practical overviews, but sometimes it’s most convincing (and useful) just to see it in practice. Here at ThinkTopic, we’ve been developing an open-source machine learning library called Cortex. For the rest of this post, I will introduce some ideas behind functional programming and put them to use in a Cortex deep learning model for anomaly detection.

Clojure Basics

Before we continue on our Cortex tutorial, I want to introduce some basics of Clojure. Clojure is a functional programming language that’s really good at two things: concurrency and data processing. Fortunately for us, both of those things are incredibly useful for machine learning. In fact, one of the primary reasons we use Clojure for machine learning is the fact that day-to-day work in preparing datasets for training (data manipulation, processing, etc.) can easily outweigh the work of implementing the algorithms, especially when we have a solid library such as Cortex for learning. Using Clojure and .edn (instead of C++ and protobuf), we can gain leverage and velocity on ML projects.

For a more in-depth introduction to the language, take a look at the community guide here.

On with the basics: Clojure code is made up of a bunch of expressions that are evaluated at run-time. These expressions are wrapped in parentheses, and are typically treated as function calls.

(+ 2 3)          ; => 5
(if false 1 0)   ; => 0

There are 4 basic collection data structures: vectors, lists, hash-maps, and sets. Commas are treated as whitespace, so they are typically omitted.

[1 2 3]            ; vector (ordered)
'(1 2 3)           ; list (ordered)
{:a 1 :b 2 :c 3}   ; hashmap or map (unordered)
#{1 2 3}           ; set (unordered, unique values)

The single quote in front of the list simply prevents it from being evaluated as an expression.

Clojure also comes with many, many built-in functions to operate over these data structures. Part of the beauty of Clojure is that it was designed to have many functions for very few data types, as opposed to having a few specialized functions for each of many data types. Being an FP language, Clojure supports higher-order functions, meaning functions can be passed around as arguments to other functions.

(count [:a :b :c])           ; => 3

(range 5)                    ; => (0 1 2 3 4)

(take 2 (drop 5 (range 10))) ; => (5 6)

(:b {:a 1 :b 2 :c 3})        ; use keyword as function => 2

(map inc [1 2 3])            ; map and increment => (2 3 4)

(filter even? (range 5))     ; filter collection based off predicate => (0 2 4)

(reduce + [1 2 3 4])         ; apply + to first two elements, then apply + to that result and the 3rd element, and so forth => 10

Of course, we can also write our own functions in Clojure, using defn. Clojure function definitions follow the form (defn fn-name [params*] expressions), and they always return the value of the last expression in the body.

(defn add2
  [x]
  (+ x 2))

(add2 5)     ; => 7

let expressions create and bind variables within the lexical scope of the “let”. That is, in the expression (let [a 4] (…)), the variable “a” takes on a value of 4 inside (and only inside) the inner parentheses. These variables are called “locals.”

(defn square-and-add
  [a b]
  (let [a-squared (* a a)
        b-squared (* b b)]
    (+ a-squared b-squared)))

(square-and-add 3 4)       ; => 25

That’s it for the basics! Now that we’ve learned some Clojure, let’s put the fun in functional programming and get back to some ML.

Cortex

Cortex is written in Clojure, and is currently one of the largest and fastest-growing machine learning libraries that uses a functional programming language. The rest of this post will walk through how to build a state-of-the-art classification model in Cortex, and the functional programming paradigms and data augmentation techniques required to do so.

Data Preprocessing

Our dataset is going to be the credit card fraud detection data provided by Kaggle here. It turns out this dataset is incredibly imbalanced, containing only 492 positive fraud cases out of 284,807 total transactions. That’s 0.172%. This is going to cause problems for us later, but first let’s just take a look at the data and see how the model does.

In order to ensure anonymity of personal data, all the original features except “time” and “amount” have already been transformed to PCA components (where each entry represents a new variable that contains the most relevant information from the raw data). A little data exploration will show that the first “time” variable is fairly uninformative, so we’ll drop that as we’re reading in the data. Here is what our initial code looks like:

 

(ns fraud-detection.core
  (:require [clojure.java.io :as io]
            [clojure.string :as string]
            [clojure.data.csv :as csv]
            [clojure.core.matrix :as mat]
            [clojure.core.matrix.stats :as matstats]
            [cortex.nn.layers :as layers]
            [cortex.nn.network :as network]
            [cortex.nn.execute :as execute]
            [cortex.optimize.adadelta :as adadelta]
            [cortex.optimize.adam :as adam]
            [cortex.metrics :as metrics]
            [cortex.util :as util]
            [cortex.experiment.util :as experiment-util]
            [cortex.experiment.train :as experiment-train]))

(def orig-data-file "resources/creditcard.csv")
(def log-file "training.log")
(def network-file "trained-network.nippy")

;; Read input csv and create a vector of maps {:data [...] :label [...]},
;; where each map represents one training instance in the data
(defonce create-dataset
  (memoize
   (fn []
     (let [credit-data (with-open [infile (io/reader orig-data-file)]
                         (rest (doall (csv/read-csv infile))))
           data (mapv #(mapv read-string %)
                      (map #(drop 1 %) (map drop-last credit-data))) ; drop label and time
           labels (mapv #(util/idx->one-hot (read-string %) 2)
                        (map last credit-data))
           dataset (mapv (fn [d l] {:data d :label l}) data labels)]
       dataset))))

 

Cortex neural nets expect input data in the form of maps, such that each map represents a single labeled data point. For example, a classification dataset could look like [{:data [12 10 38] :label "cat"} {:data [20 39 3] :label "dog"} ... ]. In our create-dataset function, we read in the csv data file, designate all but the last column to be the “data” (or features), and designate the last column to be the labels. In the process, we turn the labels into one-hot vectors (e.g. [0 1 0 0]) based on the class index, because the last softmax layer of our neural net returns a vector of class probabilities, not the actual label. Finally, we create a map from these two variables and return it as the dataset.

Model Description

Creating a model in Cortex is fairly straightforward. First, we’re going to define a map of hyper-parameters to be used later during training. Then, to define a model, we simply string the layers together:

 

(def params
  {:test-ds-size  50000            ;; total = 284807, test-ds ~= 17.5%
   :optimizer     (adam/adam)      ;; alternately, (adadelta/adadelta)
   :batch-size    100
   :epoch-count   50
   :epoch-size    200000})

(def network-description
  [(layers/input (count (:data (first (create-dataset)))) 1 1 :id :data) ; width, height, channels, args
   (layers/linear->relu 20) ; num-output & args
   (layers/dropout 0.9)
   (layers/linear->relu 10)
   (layers/linear 2)
   (layers/softmax :id :label)])

network-description is a vector of neural network layers. Our model consists of:

  • an input layer
  • a fully-connected (linear) layer with the ReLU activation function
  • a dropout layer
  • another fully-connected ReLU layer
  • an output layer of size 2 that is passed through the softmax function.

In both the first and the last layers, we need to specify an :id. This id refers to the key in the data map that our network should look at. (Recall that the data map looks like {:data [...] :label [...]}). For our input layer, we pass in the :data id to tell the model to grab the training data for its forward passes. In our final network layer, we provide :label as the :id, so the model can use the true label to calculate the error.

Training and Evaluation

Here’s where it gets a little more difficult. The train function itself is actually not so complicated — Cortex provides a nice, high-level call for training, so all we have to do is pass in our parameters (the network, training and testing dataset, etc.). The only caveat is that the system expects an effectively “infinite” dataset for training, but Cortex provides a function (infinite-class-balanced-dataset) to help us transform it.

(defn train
  "Trains network for :epoch-count number of epochs"
  []
  (let [network (network/linear-network network-description)
        [train-orig test-ds] (get-train-test-dataset)
        train-ds (experiment-util/infinite-class-balanced-dataset train-orig
                   :class-key :label
                   :epoch-size (:epoch-size params))]
    (experiment-train/train-n network train-ds test-ds
      :batch-size (:batch-size params)
      :epoch-count (:epoch-count params)
      :optimizer (:optimizer params)
      :test-fn f1-test-fn)))

The complicated part is the f1-test-fn. Here’s the thing: during training, the train-n function expects to be provided with a :test-fn that evaluates how well the model is performing and determines whether or not it should be saved as the “best network.” There is a default test function that evaluates cross-entropy loss, but this loss value is not so easy to interpret, and it doesn’t suit our imbalanced dataset very well. To get around this problem, we’re going to write our own test function.

But how are we going to test the performance of the model? The standard metric in classification tasks is accuracy, but in a dataset as imbalanced as ours, accuracy is a fairly useless metric. Because positive (fraudulent) examples account for just 0.172% of our dataset, even a model that exclusively predicts negative examples would achieve 99.828% accuracy. 99.828% is a pretty darn good accuracy, but if Amazon really used this model, we may as well all turn to a life of crime and credit card fraud.

Thankfully, Amazon does not use this kind of model, and neither shall we. A much more telling set of metrics is precision, recall, and the F1 (or more generally F-beta) score.

In layman’s terms, precision asks the question: “of all the examples I guessed were positive, what proportion were actually positive?” and recall asks the question: “of all the examples that were actually positive, what proportion did I correctly guess as positive?”

The F-beta score (a generalization of the traditional F1 score) is a weighted average of precision and recall, also measured on a scale of 0 to 1:

F-beta = (1 + beta²) * (precision * recall) / ((beta² * precision) + recall)

When beta = 1, we get the standard F1 measure of 2 * (precision * recall) / (precision + recall). In general, beta represents how many times more important recall should be than precision. For our fraud detection model, we’ll use the F1 score as our high score to track, but we’ll log the precision and recall scores as well to check the balance. This is our f1-test-fn:

 

(defn f-beta
  "F-beta score, default uses F1"
  ([precision recall] (f-beta precision recall 1))
  ([precision recall beta]
   (let [beta-squared (* beta beta)]
     (* (+ 1 beta-squared)
        (try                         ;; catch divide by 0 errors
          (/ (* precision recall)
             (+ (* beta-squared precision) recall))
          (catch ArithmeticException e
            0))))))

(defn f1-test-fn
  "Test function that takes in two map arguments, global info and local epoch info.
   Compares F1 score of current network to that of the previous network,
   and returns map:
   {:best-network? boolean
    :network (assoc new-network :evaluation-score-to-compare)}"
  [;; global arguments
   {:keys [batch-size context]}
   ;; per-epoch arguments
   {:keys [new-network old-network test-ds]}]
  (let [batch-size (long batch-size)
        test-results (execute/run new-network test-ds
                       :batch-size batch-size
                       :loss-outputs? true
                       :context context)
        ;; test metrics
        test-actual (mapv #(vec->label [0.0 1.0] %) (map :label test-ds))
        test-pred (mapv #(vec->label [0.0 1.0] % [1 0.9]) (map :label test-results))
        precision (metrics/precision test-actual test-pred)
        recall (metrics/recall test-actual test-pred)
        f-beta (f-beta precision recall)
        ;; if current f-beta higher than the old network's, current is best network
        best-network? (or (nil? (get old-network :cv-score))
                          (> f-beta (get old-network :cv-score)))
        updated-network (assoc new-network :cv-score f-beta)
        epoch (get new-network :epoch-count)]
    (experiment-train/save-network updated-network network-file)
    (log (str "Epoch: " epoch "\n"
              "Precision: " precision "\n"
              "Recall: " recall "\n"
              "F1: " f-beta "\n\n"))
    {:best-network? best-network?
     :network updated-network}))

The function runs the current network on the test set, calculates the F1 score, and updates/saves the network accordingly. It also prints out our evaluation metrics at each epoch. If we run (train) in the REPL now, we get a high score that looks something like this:

Epoch: 30
Precision: 0.2515923566878981
Recall: 0.9186046511627907
F1: 0.395

Haha. That’s pretty embarrassingly bad.

Data Augmentation

Here’s the problem. Remember how I said our highly imbalanced dataset was going to cause issues for us later? The model currently does not have enough positive examples to learn from. When we call experiment-util/infinite-class-balanced-dataset in our train function, we’re actually creating hundreds of copies of each positive training instance to balance out the dataset. As a result, the model is effectively memorizing those feature values and not actually learning the distinction between the classes.

One way around this problem is through data augmentation, in which we generate additional, artificial data based on the examples we already have. In order to create realistic positive training examples, we are going to add random amounts of noise to the feature vectors of each of our existing positive examples. The amount of noise we add will be dependent on the variance of each feature across the positive class, such that features with a large variance will be augmented with a large amount of noise, and vice versa for features with small variances.

Here is our code for data augmentation:

(defonce get-scaled-variances
  (memoize
   (fn []
     (let [{positives true negatives false} (group-by #(= (:label %) [0.0 1.0]) (create-dataset))
           pos-data (mat/matrix (map #(:data %) positives))
           variances (mat/matrix (map #(matstats/variance %) (mat/columns pos-data)))
           scaled-vars (mat/mul (/ 5000 (mat/length variances)) variances)]
       scaled-vars))))

(defn add-rand-variance
  "Given vector v, add random vector based off the variance of each feature"
  [v scaled-vars]
  (let [randv (map #(- (* 2 (rand %)) %) scaled-vars)]
    (mapv + v randv)))

(defn augment-train-ds
  "Takes train dataset and augments positive examples to reach 50/50 balance"
  [orig-train]
  (let [{train-pos true train-neg false} (group-by #(= (:label %) [0.0 1.0]) orig-train)
        pos-data (map #(:data %) train-pos)
        num-augments (- (count train-neg) (count train-pos))
        augments-per-sample (int (/ num-augments (count train-pos)))
        augmented-data (apply concat
                              (repeatedly augments-per-sample
                                          #(mapv (fn [p] (add-rand-variance p (get-scaled-variances))) pos-data)))
        augmented-ds (mapv (fn [d] {:data d :label [0 1]}) augmented-data)]
    (shuffle (concat orig-train augmented-ds))))

augment-train-ds takes our original train dataset, calculates the number of augmentations that have to be made to reach a 50/50 class balance, and applies those augmentations to our existing samples by adding a random noise vector (add-rand-variance) based on the allowed variance (get-scaled-variances). In the end, we concatenate the augmented examples back into the original dataset and return the balanced dataset.

During training, the model will be seeing an unrealistically large amount of positive examples, while the test set will still be only 0.172% positives. As a result, while the model may be able to learn the differences between the two classes better, it will over-predict positive examples during testing. In order to fix this, we can require a higher threshold of certainty to predict “positive” during testing. In other words, instead of requiring the model to be at least 50% certain that an example is positive in order to classify it as such, we can require it to be at least 70% certain. After some testing, I found the optimal value to be set at 90%. The code for this can be found in the vec->label function in the source code, and is called on line 31 of the f1-test-fn.
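To make the idea concrete, here is a tiny Python sketch of the thresholding step (a hypothetical probs_to_label helper, not the project's actual Clojure vec->label): we only commit to a "positive" prediction when the positive-class probability clears the threshold.

import numpy as np

def probs_to_label(probs, threshold=0.9):
    """Return 1 (fraud) only when the positive-class probability clears the
    threshold; otherwise return 0. Illustrative stand-in for the thresholding
    idea used by vec->label in the Clojure source."""
    return 1 if probs[1] >= threshold else 0

print(probs_to_label(np.array([0.25, 0.75])))  # 75% certain is not enough => 0
print(probs_to_label(np.array([0.05, 0.95])))  # => 1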

Using the new, augmented dataset for training, our high scores look something like this:

Epoch: 25
Precision: 0.8658536585365854
Recall: 0.8255813953488372
F1: 0.8452380952380953

Much better!

Conclusion

As always, the model can still be improved. Here are a few ideas for next steps:

  • Are all the PCA features informative? Take a look at the distribution of values for positive and negative examples across the features, and drop any features that do not help distinguish between the two classes.
  • Are there other neural net architectures, activation functions, etc. that perform better?
  • Are there different data augmentation techniques that would perform better?
  • How does model performance in Cortex compare to Keras/Tensorflow/Theano/Caffe?

The source code for the project can be found in its entirety here. I encourage you to try some of these next steps, test out new datasets, and explore different network architectures (we have a great image classification example for reference on conv nets). Cortex is pushing towards its 1.0 release, so if you have any thoughts, recommendations, or feedback, be sure to let us know. Happy hacking!

 

Deep Learning for Object Detection: A Comprehensive Review

Author: Joyce Xu

 

 

 

1

 

With the rise of autonomous vehicles, smart video surveillance, facial detection and various people counting applications, fast and accurate object detection systems are rising in demand. These systems involve not only recognizing and classifying every object in an image, but localizing each one by drawing the appropriate bounding box around it. This makes object detection a significantly harder task than its traditional computer vision predecessor, image classification.

Fortunately, however, the most successful approaches to object detection are currently extensions of image classification models. A few months ago, Google released a new object detection API for Tensorflow. With this release came pre-built architectures and weights for a handful of specific models.

In my last blog post, I covered the intuition behind the three base network architectures underlying these models: MobileNets, Inception, and ResNet. This time around, I want to do the same for Tensorflow’s object detection models: Faster R-CNN, R-FCN, and SSD. By the end of this post, we will hopefully have gained an understanding of how deep learning is applied to object detection, and how these object detection models both inspire and diverge from one another.

Faster R-CNN

Faster R-CNN is now a canonical model for deep learning-based object detection. It helped inspire many detection and segmentation models that came after it, including the two others we’re going to examine today. Unfortunately, we can’t really begin to understand Faster R-CNN without understanding its own predecessors, R-CNN and Fast R-CNN, so let’s take a quick dive into its ancestry.

R-CNN

R-CNN is the grand-daddy of Faster R-CNN. In other words, R-CNN really kicked things off.

R-CNN, or Region-based Convolutional Neural Network, consisted of 3 simple steps:

  1. Scan the input image for possible objects using an algorithm called Selective Search, generating ~2000 region proposals
  2. Run a convolutional neural net (CNN) on top of each of these region proposals
  3. Take the output of each CNN and feed it into a) an SVM to classify the region and b) a linear regressor to tighten the bounding box of the object, if such an object exists.

These 3 steps are illustrated in the image below:

2

In other words, we first propose regions, then extract features, and then classify those regions based on their features. In essence, we have turned object detection into an image classification problem. R-CNN was very intuitive, but very slow.

Fast R-CNN

R-CNN’s immediate descendant was Fast R-CNN. Fast R-CNN resembled the original in many ways, but improved on its detection speed through two main augmentations:

  1. Performing feature extraction over the image before proposing regions, thus only running one CNN over the entire image instead of 2000 CNN’s over 2000 overlapping regions
  2. Replacing the SVM with a softmax layer, thus extending the neural network for predictions instead of creating a new model

The new model looked something like this:

 

3 1

As we can see from the image, we are now generating region proposals based on the last feature map of the network, not from the original image itself. As a result, we can train just one CNN for the entire image.

In addition, instead of training many different SVM’s to classify each object class, there is a single softmax layer that outputs the class probabilities directly. Now we only have one neural net to train, as opposed to one neural net and many SVM’s.

Fast R-CNN performed much better in terms of speed. There was just one big bottleneck remaining: the selective search algorithm for generating region proposals.

Faster R-CNN

At this point, we’re back to our original target: Faster R-CNN. The main insight of Faster R-CNN was to replace the slow selective search algorithm with a fast neural net. Specifically, it introduced the region proposal network (RPN).

Here’s how the RPN worked:

  • At the last layer of an initial CNN, a 3×3 sliding window moves across the feature map and maps it to a lower dimension (e.g. 256-d)
  • For each sliding-window location, it generates multiple possible regions based on k fixed-ratio anchor boxes (default bounding boxes)
  • Each region proposal consists of a) an “objectness” score for that region and b) 4 coordinates representing the bounding box of the region

In other words, we look at each location in our last feature map and consider k different boxes centered around it: a tall box, a wide box, a large box, etc. For each of those boxes, we output whether or not we think it contains an object, and what the coordinates for that box are. This is what it looks like at one sliding window location:

4 1

The 2k scores represent the softmax probability of each of the k bounding boxes being an “object.” Notice that although the RPN outputs bounding box coordinates, it does not try to classify any potential objects: its sole job is still proposing object regions. If an anchor box has an “objectness” score above a certain threshold, that box’s coordinates get passed forward as a region proposal.
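To make the output shapes concrete, here is a minimal Keras-style sketch of an RPN head (an illustrative toy, not the actual Faster R-CNN code): a 3×3 convolution slides over the backbone feature map, and two sibling 1×1 convolutions emit 2k objectness values and 4k box-regression values per location, assuming k = 9 anchors.

from keras.layers import Input, Conv2D
from keras.models import Model

k = 9  # anchor boxes per sliding-window location
feature_map = Input(shape=(None, None, 512))   # backbone output, any spatial size

# 3x3 sliding window mapped to a lower-dimensional representation (e.g. 256-d)
rpn_hidden = Conv2D(256, (3, 3), padding='same', activation='relu')(feature_map)

# Two sibling 1x1 convolutions: objectness scores and box coordinates
objectness_logits = Conv2D(2 * k, (1, 1))(rpn_hidden)  # later reshaped to (H, W, k, 2) and softmaxed per anchor
box_deltas = Conv2D(4 * k, (1, 1))(rpn_hidden)         # 4 box-regression values per anchor

rpn_head = Model(inputs=feature_map, outputs=[objectness_logits, box_deltas])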

Once we have our region proposals, we feed them straight into what is essentially a Fast R-CNN. We add a pooling layer, some fully-connected layers, and finally a softmax classification layer and bounding box regressor. In a sense, Faster R-CNN = RPN + Fast R-CNN.

 

5 1

Altogether, Faster R-CNN achieved much better speeds and a state-of-the-art accuracy. It is worth noting that although future models did a lot to increase detection speeds, few models managed to outperform Faster R-CNN by a significant margin. In other words, Faster R-CNN may not be the simplest or fastest method for object detection, but it is still one of the best performing. Case in point, Tensorflow’s Faster R-CNN with Inception ResNet is their slowest but most accurate model.

At the end of the day, Faster R-CNN may look complicated, but its core design is the same as the original R-CNN: hypothesize object regions and then classify them. This is now the predominant pipeline for many object detection models, including our next one.

R-FCN

Remember how Fast R-CNN improved on the original’s detection speed by sharing a single CNN computation across all region proposals? That kind of thinking was also the motivation behind R-FCN: increase speed by maximizing shared computation.

R-FCN, or Region-based Fully Convolutional Net, shares 100% of the computations across every single output. Being fully convolutional, it ran into a unique problem in model design.

On the one hand, when performing classification of an object, we want to learn location invariance in a model: regardless of where the cat appears in the image, we want to classify it as a cat. On the other hand, when performing detection of the object, we want to learn location variance: if the cat is in the top left-hand corner, we want to draw a box in the top left-hand corner. So if we’re trying to share convolutional computations across 100% of the net, how do we compromise between location invariance and location variance?

R-FCN’s solution: position-sensitive score maps.

Each position-sensitive score map represents one relative position of one object class. For example, one score map might activate wherever it detects the top-right of a cat. Another score map might activate where it sees the bottom-left of a car. You get the point. Essentially, these score maps are convolutional feature maps that have been trained to recognize certain parts of each object.

Now, R-FCN works as follows:

  1. Run a CNN (in this case, ResNet) over the input image
  2. Add a fully convolutional layer to generate a score bank of the aforementioned “position-sensitive score maps.” There should be k²(C+1) score maps, with k² representing the number of relative positions into which to divide an object (e.g. 3² for a 3 by 3 grid) and C+1 representing the number of classes plus the background.
  3. Run a fully convolutional region proposal network (RPN) to generate regions of interest (RoI’s)
  4. For each RoI, divide it into the same k² “bins” or subregions as the score maps
  5. For each bin, check the score bank to see if that bin matches the corresponding position of some object. For example, if I’m on the “upper-left” bin, I will grab the score maps that correspond to the “upper-left” corner of an object and average those values in the RoI region. This process is repeated for each class.
  6. Once each of the k² bins has an “object match” value for each class, average the bins to get a single score per class.
  7. Classify the RoI with a softmax over the remaining C+1 dimensional vector

Altogether, R-FCN looks something like this, with an RPN generating the RoI’s:

6 1

Even with the explanation and the image, you might still be a little confused on how this model works. Honestly, R-FCN is much easier to understand when you can visualize what it’s doing. Here is one such example of an R-FCN in practice, detecting a baby:

 

 

 

 

7

Simply put, R-FCN considers each region proposal, divides it up into sub-regions, and iterates over the sub-regions asking: “does this look like the top-left of a baby?”, “does this look like the top-center of a baby?” “does this look like the top-right of a baby?”, etc. It repeats this for all possible classes. If enough of the sub-regions say “yes, I match up with that part of a baby!”, the RoI gets classified as a baby after a softmax over all the classes.
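As a rough numerical sketch of that bin-by-bin voting (toy NumPy code under simplifying assumptions, not the real R-FCN implementation), position-sensitive pooling for a single RoI might look like this, with k = 3 and C + 1 classes:

import numpy as np

def position_sensitive_score(score_maps, roi, k=3):
    """score_maps: array of shape (k*k, C+1, H, W), one map per (bin, class).
    roi: (y0, x0, y1, x1) in feature-map coordinates.
    Returns a (C+1,) vector of class scores for the RoI."""
    num_classes = score_maps.shape[1]
    y0, x0, y1, x1 = roi
    bin_h = (y1 - y0) / k
    bin_w = (x1 - x0) / k
    scores = np.zeros(num_classes)
    for by in range(k):
        for bx in range(k):
            ys = slice(int(y0 + by * bin_h), int(y0 + (by + 1) * bin_h))
            xs = slice(int(x0 + bx * bin_w), int(x0 + (bx + 1) * bin_w))
            bin_idx = by * k + bx
            # average the matching position-sensitive map inside this bin, per class
            scores += score_maps[bin_idx, :, ys, xs].mean(axis=(1, 2))
    return scores / (k * k)   # average the "object match" values over the k*k bins

maps = np.random.rand(9, 21, 14, 14)            # k²=9 maps for 20 classes + background
print(position_sensitive_score(maps, (2, 2, 11, 11)))  # softmax over this gives class probabilities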

With this setup, R-FCN is able to simultaneously address location variance by proposing different object regions, and location invariance by having each region proposal refer back to the same bank of score maps. These score maps should learn to classify a cat as a cat, regardless of where the cat appears. Best of all, it is fully convolutional, meaning all of the computation is shared throughout the network.

As a result, R-FCN is several times faster than Faster R-CNN, and achieves comparable accuracy.

SSD

Our final model is SSD, which stands for Single-Shot Detector. Like R-FCN, it provides enormous speed gains over Faster R-CNN, but does so in a markedly different manner.

Our first two models performed region proposals and region classifications in two separate steps. First, they used a region proposal network to generate regions of interest; next, they used either fully-connected layers or position-sensitive convolutional layers to classify those regions. SSD does the two in a “single shot,” simultaneously predicting the bounding box and the class as it processes the image.

Concretely, given an input image and a set of ground truth labels, SSD does the following:

  1. Pass the image through a series of convolutional layers, yielding several sets of feature maps at different scales (e.g. 10×10, then 6×6, then 3×3, etc.)
  2. For each location in each of these feature maps, use a 3×3 convolutional filter to evaluate a small set of default bounding boxes. These default bounding boxes are essentially equivalent to Faster R-CNN’s anchor boxes.
  3. For each box, simultaneously predict a) the bounding box offset and b) the class probabilities
  4. During training, match the ground truth box with these predicted boxes based on IoU. The best predicted box will be labeled a “positive,” along with all other boxes that have an IoU with the truth >0.5.

SSD sounds straightforward, but training it has a unique challenge. With the previous two models, the region proposal network ensured that everything we tried to classify had some minimum probability of being an “object.” With SSD, however, we skip that filtering step. We classify and draw bounding boxes from every single position in the image, using multiple different shapes, at several different scales. As a result, we generate a much greater number of bounding boxes than the other models, and nearly all of them are negative examples.

To fix this imbalance, SSD does two things. Firstly, it uses non-maximum suppression to group together highly-overlapping boxes into a single box. In other words, if four boxes of similar shapes, sizes, etc. contain the same dog, NMS would keep the one with the highest confidence and discard the rest. Secondly, the model uses a technique called hard negative mining to balance classes during training. In hard negative mining, only a subset of the negative examples with the highest training loss (i.e. false positives) are used at each iteration of training. SSD keeps a 3:1 ratio of negatives to positives.
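Non-maximum suppression itself is simple to sketch. Here is a minimal, hypothetical NumPy version (not taken from any particular library) that keeps the highest-confidence box and discards any remaining box whose IoU with it exceeds a threshold:

import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop all
    remaining boxes that overlap it by more than iou_threshold."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(int(best))
        order = np.array([i for i in order[1:] if iou(boxes[best], boxes[i]) < iou_threshold])
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]])
scores = np.array([0.9, 0.8, 0.7])
print(non_max_suppression(boxes, scores))   # => [0, 2]: the two overlapping boxes collapse to one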

Its architecture looks like this:

 

8

As I mentioned above, there are “extra feature layers” at the end that scale down in size. These varying-size feature maps help capture objects of different sizes. For example, here is SSD in action:

9

In smaller feature maps (e.g. 4×4), each cell covers a larger region of the image, enabling the model to detect larger objects. Region proposal and classification are performed simultaneously: given p object classes, each bounding box is associated with a (4+p)-dimensional vector that outputs 4 box offset coordinates and p class probabilities. In the last step, softmax is again used to classify the object.
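Concretely, each SSD prediction head is just a 3×3 convolution over one of the feature maps. A minimal Keras-style sketch (illustrative only, with assumed values of k = 6 default boxes per cell and p = 20 classes, following the (4+p) convention above) of a single head:

from keras.layers import Input, Conv2D
from keras.models import Model

k, p = 6, 20                       # default boxes per cell, object classes (assumed values)
fmap = Input(shape=(10, 10, 512))  # one of the multi-scale feature maps

# one 3x3 filter bank predicts, for every cell, k * (4 offsets + p class scores)
preds = Conv2D(k * (4 + p), (3, 3), padding='same')(fmap)

head = Model(inputs=fmap, outputs=preds)
print(head.output_shape)           # (None, 10, 10, 144)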

Ultimately, SSD is not so different from the first two models. It simply skips the “region proposal” step, instead considering every single bounding box in every location of the image simultaneously with its classification. Because SSD does everything in one shot, it is the fastest of the three models, and still performs quite comparably.

Conclusion

Faster R-CNN, R-FCN, and SSD are three of the best and most widely used object detection models out there right now. Other popular models tend to be fairly similar to these three, all relying on deep CNN’s (read: ResNet, Inception, etc.) to do the initial heavy lifting and largely following the same proposal/classification pipeline.

At this point, putting these models to use just requires knowing Tensorflow’s API. Tensorflow has a starter tutorial on using these models here. Give it a try, and happy hacking!

 

 

An intuitive guide to deep network architectures

Author: Joyce Xu

 

 

1

GoogLeNet, 2014

Over the past few years, much of the progress in deep learning for computer vision can be boiled down to just a handful of neural network architectures. Setting aside all the math, the code, and the implementation details, I wanted to explore one simple question: how and why do these models work?

At the time of writing, Keras ships with six of these pre-trained models already built into the library:

  • VGG16
  • VGG19
  • ResNet50
  • Inception v3
  • Xception
  • MobileNet

The VGG networks, along with the earlier AlexNet from 2012, follow the now archetypal layout of basic conv nets: a series of convolutional, max-pooling, and activation layers before some fully-connected classification layers at the end. MobileNet is essentially a streamlined version of the Xception architecture optimized for mobile applications. The remaining three, however, truly redefine the way we look at neural networks.

The rest of this post will focus on the intuition behind the ResNet, Inception, and Xception architectures, and why they have become building blocks for so many subsequent works in computer vision.

ResNet

ResNet was born from a beautifully simple observation: why do very deep nets perform worse as you keep adding layers?

Intuitively, deeper nets should perform no worse than their shallower counterparts, at least at train time (when there is no risk of overfitting). As a thought experiment, let’s say we’ve built a net with n layers that achieves a certain accuracy. At minimum, a net with n+1 layers should be able to achieve the exact same accuracy, if only by copying over the same first n layers and performing an identity mapping for the last layer. Similarly, nets of n+2, n+3, and n+4 layers could all continue performing identity mappings and achieve the same accuracy. In practice, however, these deeper nets almost always degrade in performance.

The authors of ResNet boiled these problems down to a single hypothesis: direct mappings are hard to learn. And they proposed a fix: instead of trying to learn an underlying mapping from x to H(x), learn the difference between the two, or the “residual.” Then, to calculate H(x), we can just add the residual to the input.

Say the residual is F(x)=H(x)-x. Now, instead of trying to learn H(x) directly, our nets are trying to learn F(x)+x.

This gives rise to the famous ResNet (or “residual network”) block you’ve probably seen:

 

2.jpg

ResNet block

 

Each “block” in ResNet consists of a series of layers and a “shortcut” connection adding the input of the block to its output. The “add” operation is performed element-wise, and if the input and output are of different sizes, zero-padding or projections (via 1×1 convolutions) can be used to create matching dimensions.
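A residual block is only a few lines in a framework like Keras. Here is an illustrative sketch (not the exact block from the paper, which also uses batch normalization) with an optional 1×1 projection shortcut when the shapes differ:

from keras import backend as K
from keras.layers import Input, Conv2D, Activation, Add
from keras.models import Model

def residual_block(x, filters, stride=1):
    """Two 3x3 convolutions plus a shortcut; a 1x1 projection matches
    dimensions when the stride or filter count changes."""
    shortcut = x
    y = Conv2D(filters, (3, 3), strides=stride, padding='same', activation='relu')(x)
    y = Conv2D(filters, (3, 3), padding='same')(y)
    if stride != 1 or K.int_shape(x)[-1] != filters:
        shortcut = Conv2D(filters, (1, 1), strides=stride)(x)  # projection shortcut
    y = Add()([y, shortcut])        # F(x) + x
    return Activation('relu')(y)

inputs = Input(shape=(32, 32, 64))
outputs = residual_block(residual_block(inputs, 64), 128, stride=2)
model = Model(inputs=inputs, outputs=outputs)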

If we go back to our thought experiment, this simplifies our construction of identity layers greatly. Intuitively, it’s much easier to learn to push F(x) to 0 and leave the output as x than to learn an identity transformation from scratch. In general, ResNet gives layers a “reference” point — x — to start learning from.

This idea works astoundingly well in practice. Previously, deep neural nets often suffered from the problem of vanishing gradients, in which gradient signals from the error function decreased exponentially as they backpropagated to earlier layers. In essence, by the time the error signals traveled all the way back to the early layers, they were so small that the net couldn’t learn. However, because the gradient signal in ResNets could travel back directly to early layers via shortcut connections, we could suddenly build 50-layer, 101-layer, 152-layer, and even (apparently) 1000+ layer nets that still performed well. At the time, this was a huge leap forward from the previous state-of-the-art, which won the ILSVRC 2014 challenge with 22 layers.

ResNet is one of my personal favorite developments in the neural network world. So many deep learning papers come out with minor improvements from hacking away at the math, the optimizations, and the training process without thought to the underlying task of the model. ResNet fundamentally changed the way we understand neural networks and how they learn.

Fun facts:

  • The 1000+ layer net is open-source! I would not really recommend you try re-training it, but…
  • If you’re feeling functional and a little frisky, I recently ported ResNet50 to the open-source Clojure ML library Cortex. Try it out and see how it compares to Keras!

Inception

If ResNet was all about going deeper, the Inception Family™ is all about going wider. In particular, the authors of Inception were interested in the computational efficiency of training larger nets. In other words: how can we scale up neural nets without increasing computational cost?

The original paper focused on a new building block for deep nets, a block now known as the “Inception module.” At its core, this module is the product of two key insights.

The first insight relates to layer operations. In a traditional conv net, each layer extracts information from the previous layer in order to transform the input data into a more useful representation. However, each layer type extracts a different kind of information. The output of a 5×5 convolutional kernel tells us something different from the output of a 3×3 convolutional kernel, which tells us something different from the output of a max-pooling kernel, and so on and so on. At any given layer, how do we know what transformation provides the most “useful” information?

Insight #1: why not let the model choose?

An Inception module computes multiple different transformations over the same input map in parallel, concatenating their results into a single output. In other words, for each layer, Inception does a 5×5 convolutional transformation, and a 3×3, and a max-pool. And the next layer of the model gets to decide if (and how) to use each piece of information.

3

The increased information density of this model architecture comes with one glaring problem: we’ve drastically increased computational costs. Not only are large (e.g. 5×5) convolutional filters inherently expensive to compute, stacking multiple different filters side by side greatly increases the number of feature maps per layer. And this increase becomes a deadly bottleneck in our model.

Think about it this way. For each additional filter added, we have to convolve over all the input maps to calculate a single output. See the image below: creating one output map from a single filter involves computing over every single map from the previous layer.

4

Let’s say there are M input maps. One additional filter means convolving over M more maps; N additional filters means convolving over N*M more maps. In other words, as the authors note, “any uniform increase in the number of [filters] results in a quadratic increase of computation.” Our naive Inception module just tripled or quadrupled the number of filters. Computationally speaking, this is a Big Bad Thing.

This leads to insight #2: using 1×1 convolutions to perform dimensionality reduction. In order to solve the computational bottleneck, the authors of Inception used 1×1 convolutions to “filter” the depth of the outputs. A 1×1 convolution looks at only one spatial location at a time, but across all of the input channels, so it can combine cross-channel information and compress the feature maps down to a lower depth. For example, using 20 1×1 filters, an input of size 64x64x100 (with 100 feature maps) can be compressed down to 64x64x20. By reducing the number of input maps, the authors of Inception were able to stack different layer transformations in parallel, resulting in nets that were simultaneously deep (many layers) and “wide” (many parallel operations).
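Putting the two insights together, a simplified Inception-style module looks roughly like this in Keras (an illustrative sketch; the filter counts are placeholders, not the ones from the paper):

from keras.layers import Input, Conv2D, MaxPooling2D, Concatenate
from keras.models import Model

def inception_module(x):
    # 1x1 branch
    b1 = Conv2D(64, (1, 1), padding='same', activation='relu')(x)
    # 1x1 reduction followed by a 3x3 convolution
    b2 = Conv2D(48, (1, 1), padding='same', activation='relu')(x)
    b2 = Conv2D(64, (3, 3), padding='same', activation='relu')(b2)
    # 1x1 reduction followed by a 5x5 convolution
    b3 = Conv2D(16, (1, 1), padding='same', activation='relu')(x)
    b3 = Conv2D(32, (5, 5), padding='same', activation='relu')(b3)
    # 3x3 max-pool followed by a 1x1 projection
    b4 = MaxPooling2D((3, 3), strides=1, padding='same')(x)
    b4 = Conv2D(32, (1, 1), padding='same', activation='relu')(b4)
    # the next layer sees all branches side by side and decides what to use
    return Concatenate(axis=-1)([b1, b2, b3, b4])

inputs = Input(shape=(28, 28, 192))
model = Model(inputs=inputs, outputs=inception_module(inputs))
print(model.output_shape)   # (None, 28, 28, 192): 64 + 64 + 32 + 32 concatenated maps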

5

How well did this work? The first version of Inception, dubbed “GoogLeNet,” was the 22-layer winner of the ILSVRC 2014 competition I mentioned earlier. Inception v2 and v3 were developed in a second paper a year later, and improved on the original in several ways — most notably by refactoring larger convolutions into consecutive smaller ones that were easier to learn. In v3, for example, the 5×5 convolution was replaced with 2 consecutive 3×3 convolutions.

Inception rapidly became a defining model architecture. The latest version of Inception, v4, even threw in residual connections within each module, creating an Inception-ResNet hybrid. Most importantly, however, Inception demonstrated the power of well-designed “network-in-network” architectures, adding yet another step to the representational power of neural networks.

Fun facts:

  • The original Inception paper literally cites the “we need to go deeper” internet meme as an inspiration for its name. This must be the first time knowyourmeme.com got listed as the first reference of a Google paper.
  • The second Inception paper (with v2 and v3) was released just one day after the original ResNet paper. December 2015 was a good time for deep learning.

Xception

Xception stands for “extreme inception.” Rather like our previous two architectures, it reframes the way we look at neural nets — conv nets in particular. And, as the name suggests, it takes the principles of Inception to an extreme.

Here’s the hypothesis: “cross-channel correlations and spatial correlations are sufficiently decoupled that it is preferable not to map them jointly.”

What does this mean? Well, in a traditional conv net, convolutional layers seek out correlations across both space and depth. Let’s take another look at our standard convolutional layer:

4

In the image above, the filter simultaneously considers a spatial dimension (each 2×2 colored square) and a cross-channel or “depth” dimension (the stack of four squares). At the input layer of an image, this is equivalent to a convolutional filter looking at a 2×2 patch of pixels across all three RGB channels. Here’s the question: is there any reason we need to consider both the image region and the channels at the same time?

In Inception, we began separating the two slightly. We used 1×1 convolutions to project the original input into several separate, smaller input spaces, and from each of those input spaces we used a different type of filter to transform those smaller 3D blocks of data. Xception takes this one step further. Instead of partitioning input data into several compressed chunks, it maps the spatial correlations for each output channel separately, and then performs a 1×1 depthwise convolution to capture cross-channel correlation.

 

6

The author notes that this is essentially equivalent to an existing operation known as a “depthwise separable convolution,” which consists of a depthwise convolution (a spatial convolution performed independently for each channel) followed by a pointwise convolution (a 1×1 convolution across channels). We can think of this as looking for correlations across a 2D space first, followed by looking for correlations across a 1D space. Intuitively, this 2D + 1D mapping is easier to learn than a full 3D mapping.
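Keras exposes the depthwise separable operation directly. A quick sketch comparing a regular convolution to its separable counterpart (the layer sizes are arbitrary, and the parameter counts in the comments apply only to these illustrative sizes):

from keras.layers import Input, Conv2D, SeparableConv2D
from keras.models import Model

x = Input(shape=(32, 32, 128))

# standard convolution: maps space and depth jointly
regular = Model(x, Conv2D(256, (3, 3), padding='same')(x))

# depthwise separable: per-channel 3x3 spatial convolution, then a 1x1 pointwise convolution across channels
separable = Model(x, SeparableConv2D(256, (3, 3), padding='same')(x))

print(regular.count_params())    # 3*3*128*256 + 256 biases          = 295,168
print(separable.count_params())  # 3*3*128 + 128*256 + 256 biases    = 34,176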

And it works! Xception slightly outperforms Inception v3 on the ImageNet dataset, and vastly outperforms it on a larger image classification dataset with 17,000 classes. Most importantly, it has the same number of model parameters as Inception, implying a greater computational efficiency. Xception is much newer (it came out in April 2017), but as mentioned above, its architecture is already powering Google’s mobile vision applications through MobileNet.

Fun facts:

  • The author of Xception is also the author of Keras. Francois Chollet is a living god.

Moving forward

That’s it for ResNet, Inception, and Xception! I firmly believe in having a strong intuitive understanding of these networks, because they are becoming ubiquitous in research and industry alike. We can even use them in our own applications with something called transfer learning.

Transfer learning is a technique in machine learning in which we apply knowledge from a source domain (e.g. ImageNet) to a target domain that may have significantly fewer data points. In practice, this generally involves initializing a model with pre-trained weights from ResNet, Inception, etc. and either using it as a feature extractor, or fine-tuning the last few layers on a new dataset. With transfer learning, these models can be re-purposed for any related task we want, from object detection for self-driving vehicles to generating captions for video clips.
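In Keras, the typical recipe is only a few lines: load a pre-trained base without its classification head, freeze it, and train a small new head on your own classes. A minimal sketch (the class count and the commented-out training call are placeholders for your own data):

from keras.applications import ResNet50
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

num_classes = 10   # placeholder: number of classes in your target dataset

# pre-trained ImageNet feature extractor, without the final classifier
base = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False        # freeze the pre-trained weights

# new classification head trained on the target domain
features = GlobalAveragePooling2D()(base.output)
outputs = Dense(num_classes, activation='softmax')(features)

model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(x_train, y_train, ...)  # fine-tune on your own data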

To get started with transfer learning, Keras has a wonderful guide to fine-tuning models here. If it sounds interesting to you, check it out — and happy hacking!

 

Introduction to Autoencoders

Author: Nathan Hubens

LinkedIn: https://www.linkedin.com/in/nathan-hubens

Autoencoders (AE) are neural networks that aim to copy their inputs to their outputs. They work by compressing the input into a latent-space representation, and then reconstructing the output from this representation. This kind of network is composed of two parts:

  1. Encoder: This is the part of the network that compresses the input into a latent-space representation. It can be represented by an encoding function h=f(x).
  2. Decoder: This part aims to reconstruct the input from the latent space representation. It can be represented by a decoding function r=g(h).
1 3

Architecture of an Autoencoder

The autoencoder as a whole can thus be described by the function g(f(x)) = r, where we want r to be as close as possible to the original input x.

Why copy the input to the output?

If the only purpose of autoencoders was to copy the input to the output, they would be useless. Indeed, we hope that, by training the autoencoder to copy the input to the output, the latent representation will take on useful properties.

This can be achieved by creating constraints on the copying task. One way to obtain useful features from the autoencoder is to constrain h to have a smaller dimension than x; in this case the autoencoder is called undercomplete. By training an undercomplete representation, we force the autoencoder to learn the most salient features of the training data. If the autoencoder is given too much capacity, it can learn to perform the copying task without extracting any useful information about the distribution of the data. This can also occur if the dimension of the latent representation is the same as the input, and in the overcomplete case, where the dimension of the latent representation is greater than the input. In these cases, even a linear encoder and linear decoder can learn to copy the input to the output without learning anything useful about the data distribution. Ideally, one could train any architecture of autoencoder successfully, choosing the code dimension and the capacity of the encoder and decoder based on the complexity of the distribution to be modeled.


What are autoencoders used for ?

Today, data denoising and dimensionality reduction for data visualization are considered the two main practical applications of autoencoders. With appropriate dimensionality and sparsity constraints, autoencoders can learn data projections that are more interesting than PCA or other basic techniques.

Autoencoders are learned automatically from data examples. This means that it is easy to train specialized instances of the algorithm that will perform well on a specific type of input, and that doing so does not require any new engineering, only the appropriate training data.

However, autoencoders will do a poor job for image compression. As the autoencoder is trained on a given set of data, it will achieve reasonable compression results on data similar to the training set used, but will be a poor general-purpose image compressor. Compression techniques like JPEG will do vastly better.

Autoencoders are trained to preserve as much information as possible when an input is run through the encoder and then the decoder, but are also trained to make the new representation have various nice properties. Different kinds of autoencoders aim to achieve different kinds of properties. We will focus on four types of autoencoders.


Types of autoencoder:

In this article, the four following types of autoencoders will be described:

  1. Vanilla autoencoder
  2. Multilayer autoencoder
  3. Convolutional autoencoder
  4. Regularized autoencoder

In order to illustrate the different types of autoencoder, an example of each has been created, using the Keras framework and the MNIST dataset. The code for each type of autoencoder is available on my GitHub.

Vanilla autoencoder

In its simplest form, the autoencoder is a three-layer net, i.e. a neural net with one hidden layer. The input and output are the same, and we learn how to reconstruct the input, for example using the adam optimizer and the mean squared error loss function.

 

from keras.layers import Input, Dense
from keras.models import Model

input_size = 784
hidden_size = 64
output_size = 784

x = Input(shape=(input_size,))

# Encoder
h = Dense(hidden_size, activation='relu')(x)

# Decoder
r = Dense(output_size, activation='sigmoid')(h)

autoencoder = Model(input=x, output=r)
autoencoder.compile(optimizer='adam', loss='mse')

Here, we see that we have an undercomplete autoencoder, as the hidden layer dimension (64) is smaller than the input (784). This constraint forces our neural net to learn a compressed representation of the data.
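To actually train this autoencoder, the input is also the target. A minimal training sketch on MNIST (assuming the model defined above; the epoch and batch-size values are arbitrary):

from keras.datasets import mnist

(x_train, _), (x_test, _) = mnist.load_data()

# scale to [0, 1] and flatten 28x28 images into 784-dimensional vectors
x_train = x_train.astype('float32').reshape(-1, 784) / 255.
x_test = x_test.astype('float32').reshape(-1, 784) / 255.

# the autoencoder learns to reproduce its own input
autoencoder.fit(x_train, x_train,
                epochs=10,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))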

 


Multilayer autoencoder

If one hidden layer is not enough, we can obviously extend the autoencoder to more hidden layers.

 

input_size = 784
hidden_size = 128
code_size = 64

x = Input(shape=(input_size,))

# Encoder
hidden_1 = Dense(hidden_size, activation='relu')(x)
h = Dense(code_size, activation='relu')(hidden_1)

# Decoder
hidden_2 = Dense(hidden_size, activation='relu')(h)
r = Dense(input_size, activation='sigmoid')(hidden_2)

autoencoder = Model(input=x, output=r)
autoencoder.compile(optimizer='adam', loss='mse')

Now our implementation uses 3 hidden layers instead of just one. Any of the hidden layers can be picked as the feature representation but we will make the network symmetrical and use the middle-most layer.

 


Convolutional autoencoder

We may also ask ourselves: can autoencoders be used with convolutions instead of fully-connected layers?

The answer is yes, and the principle is the same, but using images (3D tensors) instead of flattened 1D vectors. The input image is downsampled to give a latent representation of smaller dimensions and force the autoencoder to learn a compressed version of the images.

 

from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model

x = Input(shape=(28, 28, 1))

# Encoder
conv1_1 = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
pool1 = MaxPooling2D((2, 2), padding='same')(conv1_1)
conv1_2 = Conv2D(8, (3, 3), activation='relu', padding='same')(pool1)
pool2 = MaxPooling2D((2, 2), padding='same')(conv1_2)
conv1_3 = Conv2D(8, (3, 3), activation='relu', padding='same')(pool2)
h = MaxPooling2D((2, 2), padding='same')(conv1_3)


# Decoder
conv2_1 = Conv2D(8, (3, 3), activation='relu', padding='same')(h)
up1 = UpSampling2D((2, 2))(conv2_1)
conv2_2 = Conv2D(8, (3, 3), activation='relu', padding='same')(up1)
up2 = UpSampling2D((2, 2))(conv2_2)
conv2_3 = Conv2D(16, (3, 3), activation='relu')(up2)
up3 = UpSampling2D((2, 2))(conv2_3)
r = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(up3)

autoencoder = Model(input=x, output=r)
autoencoder.compile(optimizer='adam', loss='mse')

Regularized autoencoder

There are other ways we can constrain the reconstruction of an autoencoder than by imposing a hidden layer of smaller dimension than the input. Rather than limiting the model capacity by keeping the encoder and decoder shallow and the code size small, regularized autoencoders use a loss function that encourages the model to have other properties besides the ability to copy its input to its output. In practice, we usually find two types of regularized autoencoder: the sparse autoencoder and the denoising autoencoder.

Sparse autoencoder : Sparse autoencoders are typically used to learn features for another task such as classification. An autoencoder that has been regularized to be sparse must respond to unique statistical features of the dataset it has been trained on, rather than simply acting as an identity function. In this way, training to perform the copying task with a sparsity penalty can yield a model that has learned useful features as a byproduct.

Another way we can constrain the reconstruction of the autoencoder is to impose a constraint on its loss. We could, for example, add a regularization term to the loss function. Doing this will make our autoencoder learn a sparse representation of the data.

 

from keras import regularizers
from keras.layers import Input, Dense
from keras.models import Model

input_size = 784
hidden_size = 64
output_size = 784

x = Input(shape=(input_size,))

# Encoder
h = Dense(hidden_size, activation='relu', activity_regularizer=regularizers.l1(10e-5))(x)

# Decoder
r = Dense(output_size, activation='sigmoid')(h)

autoencoder = Model(input=x, output=r)
autoencoder.compile(optimizer='adam', loss='mse')

Notice that in our hidden layer we added an l1 activity regularizer, which will apply a penalty to the loss function during the optimization phase. As a result, the representation is now sparser compared to the vanilla autoencoder.

 

Denoising autoencoder: Rather than adding a penalty to the loss function, we can obtain an autoencoder that learns something useful by changing the reconstruction error term of the loss function. This can be done by adding some noise to the input image and making the autoencoder learn to remove it. This way, the encoder will extract the most important features and learn a more robust representation of the data.

x = Input(shape=(28, 28, 1))

# Encoder
conv1_1 = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
pool1 = MaxPooling2D((2, 2), padding='same')(conv1_1)
conv1_2 = Conv2D(32, (3, 3), activation='relu', padding='same')(pool1)
h = MaxPooling2D((2, 2), padding='same')(conv1_2)


# Decoder
conv2_1 = Conv2D(32, (3, 3), activation='relu', padding='same')(h)
up1 = UpSampling2D((2, 2))(conv2_1)
conv2_2 = Conv2D(32, (3, 3), activation='relu', padding='same')(up1)
up2 = UpSampling2D((2, 2))(conv2_2)
r = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(up2)

autoencoder = Model(input=x, output=r)
autoencoder.compile(optimizer='adam', loss='mse')
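Training the denoising autoencoder differs only in the inputs: we corrupt the images with Gaussian noise and ask the network to reconstruct the clean originals. A short sketch (noise_factor is an arbitrary choice; the convolutional model above is assumed):

import numpy as np
from keras.datasets import mnist

(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32').reshape(-1, 28, 28, 1) / 255.
x_test = x_test.astype('float32').reshape(-1, 28, 28, 1) / 255.

# corrupt the inputs with Gaussian noise, clipping back to [0, 1]
noise_factor = 0.5
x_train_noisy = np.clip(x_train + noise_factor * np.random.normal(size=x_train.shape), 0., 1.)
x_test_noisy = np.clip(x_test + noise_factor * np.random.normal(size=x_test.shape), 0., 1.)

# noisy images in, clean images out
autoencoder.fit(x_train_noisy, x_train,
                epochs=10,
                batch_size=128,
                validation_data=(x_test_noisy, x_test))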

Summary

In this article, we went through the basic architecture of autoencoders. We also looked at several different types of autoencoders: vanilla, multilayer, convolutional and regularized. Each has different properties depending on the imposed constraints: either a reduced dimension of the hidden layers or another kind of penalty.

Using Deep Learning to Improve FIFA 18 Graphics

Author: Chintan Trivedi

 

 


 

Comparison of Cristiano Ronaldo’s face, with the left one from FIFA 18 and the right one generated by a Deep Neural Network.

Game studios spend millions of dollars and thousands of development hours designing game graphics, trying to make them look as close to reality as possible. While the graphics have looked amazingly realistic in the last few years, it is still easy to distinguish them from the real world. However, with the massive advancements made in image processing using deep neural networks, can we leverage them to improve the graphics while simultaneously reducing the effort required to create them?

Let us try to answer that using the game FIFA 18…

Football (i.e. soccer) being my favorite sport, FIFA is the natural game of choice for all of my deep learning experiments. To find out whether recent developments in deep learning can help answer my question, I focused on improving the player faces in FIFA using the (in?)famous deepfakes algorithm. It is a deep neural network that can be trained to learn and generate extremely realistic human faces. My focus in this project is on recreating the player faces from within the game and improving them to make them look exactly like the actual players.

Note: Here is a great explanation of how the deepfakes algorithm works. Tl;dr version: it can swap the face of anyone in a video with anybody else’s face using Autoencoders and Convolutional Neural Networks.

Gathering training data

Unlike the game developers, I could collect all required data from Google search without having to trouble Ronaldo with any motion-capture fancy dress.

Let us start by looking at one of the best designed faces in FIFA 18, that of Cristiano Ronaldo, and see if we can improve it. To gather the data required for the deepfakes algorithm, I simply recorded the player’s face from the instant replay option in the game. Now, we want to replace this face with the actual face of Ronaldo. For this, I downloaded a bunch of images from Google such that the images clearly show his face from different angles. That’s all that is needed to get us started with the training process of our model.

Model architecture & Training

The deepfakes algorithm involves training of deep neural networks called autoencoders. These networks are used for unsupervised learning and have an encoder that can encode an input to a compact representation called the “encoding”, and a decoder that can use this encoding to reconstruct the original input. This architecture forces the network to learn the underlying distribution of the input rather than simply parroting back the input. For images as our input, we use a convolutional net as our encoder and a deconvolutional net as our decoder. This architecture is trained to minimize the reconstruction error for unsupervised learning.

For our case, we train two autoencoder networks simultaneously. One network learns to recreate the face of Ronaldo from FIFA 18 graphics. The other network learns to recreate the face from actual pictures of Ronaldo. In deepfakes, both networks share the same encoder but are trained with different decoders. Thus, we now have two networks that have learnt what Ronaldo looks like in the game and in real life.
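
The article does not include code for this, but the shared-encoder setup can be sketched roughly as follows; the layer sizes, the 64×64 face-crop size and the loss are illustrative assumptions, not the actual deepfakes implementation:

from keras.layers import Input, Conv2D, Dense, Flatten, Reshape, UpSampling2D
from keras.models import Model

IMG = (64, 64, 3)  # assumed face-crop size

def build_encoder():
    x = Input(shape=IMG)
    h = Conv2D(128, (5, 5), strides=2, activation='relu', padding='same')(x)
    h = Conv2D(256, (5, 5), strides=2, activation='relu', padding='same')(h)
    h = Dense(512)(Flatten()(h))               # the shared "encoding"
    return Model(x, h)

def build_decoder():
    z = Input(shape=(512,))
    h = Dense(16 * 16 * 128, activation='relu')(z)
    h = Reshape((16, 16, 128))(h)
    h = UpSampling2D((2, 2))(h)
    h = Conv2D(64, (5, 5), activation='relu', padding='same')(h)
    h = UpSampling2D((2, 2))(h)
    r = Conv2D(3, (5, 5), activation='sigmoid', padding='same')(h)
    return Model(z, r)

encoder = build_encoder()          # shared by both networks
decoder_game = build_decoder()     # reconstructs FIFA faces
decoder_real = build_decoder()     # reconstructs real photos

x = Input(shape=IMG)
autoencoder_game = Model(x, decoder_game(encoder(x)))
autoencoder_real = Model(x, decoder_real(encoder(x)))
autoencoder_game.compile(optimizer='adam', loss='mae')
autoencoder_real.compile(optimizer='adam', loss='mae')

# Each autoencoder is trained only on its own domain, e.g.:
# autoencoder_game.fit(fifa_faces, fifa_faces, ...)
# autoencoder_real.fit(real_faces, real_faces, ...)

Training each autoencoder only on its own images is what forces the shared encoder to learn a face representation common to both domains.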

First autoencoder network learning from FIFA graphics

Second autoencoder network learning from actual pictures

When training using a pre-trained model on other faces, the total loss goes down from around 0.06 to 0.02 within 4 hours on a GTX 1070. In my case, I continued training on top of the original CageNet model that has been trained to generate Nicolas Cage’s face.

Using the trained models to swap faces

Now comes the fun part. The algorithm swaps faces with a clever trick: the second autoencoder network is fed with the input of the first one. This way, the shared encoder gets the encoding of the FIFA face, but the decoder reconstructs the real face from this encoding. Voila, this setup just converted the face from FIFA into the actual face of Ronaldo.
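
Reusing the hypothetical encoder and decoder_real from the sketch above, the swap amounts to wiring the shared encoder to the real-face decoder and feeding it the FIFA frames:

from keras.layers import Input
from keras.models import Model

game_face = Input(shape=(64, 64, 3))                        # same face-crop size assumed above
face_swapper = Model(game_face, decoder_real(encoder(game_face)))

# swapped = face_swapper.predict(fifa_face_crops)           # FIFA faces in, "real" Ronaldo out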

The second network converting the FIFA face to the real face of Ronaldo

Results

The GIF below shows a quick preview of results from running this algorithm on faces of other players. I think the improvement is astonishing, but maybe I am biased, so you be the judge.

 

[GIF: results of the algorithm on other players' faces]

 

What if you could play “The Journey” mode of the game as yourself instead of playing as Alex Hunter? All you have to do is upload a minute-long video of yourself and download the trained model a few hours later. There you go, you can now play the entire Journey mode as yourself. Now that would be some next-level immersive gaming!

Where it excels and where it needs more work

The biggest advantage I feel we get with this approach is the amazing life-like faces and graphics that are hard to distinguish from the real world. All of this can be achieved with only a few hours of training, compared to years taken by game designers with the current approach. This means game publishers can come out with new titles much faster rather than spending decades in development. This also means that the studios can save millions of dollars that could now be put into hiring decent story-writers.

The glaring limitation so far is that these faces have been generated post facto, like CGI in movies, while games require them to be generated in real time. However, one big difference is that this approach does not require any human intervention to generate results once a model has been trained, and the only thing holding it back is the computation time required to generate the output image. I believe it will not be long before we have lightweight, not-too-deep generative models that can run very fast without compromising output quality, just like we now have YOLO and SSD MobileNets for real-time object detection, something that wasn't possible with earlier models like R-CNNs.

Conclusion

If someone like me, who has no experience in graphics design, can come up with improved faces within just a few hours, I truly believe that if game developers were to invest heavily in this direction, it could change the face of the gaming industry (yes, pun intended) in the not-too-distant future. Now if only anyone from EA Sports were reading this…

 

Building a Deep Neural Network to play FIFA 18

Author: Chintan Trivedi

Linkedin: https://www.linkedin.com/in/chintan-trivedi-78665774/

 

A.I. bots in gaming are usually built by hand-coding a bunch of rules that impart game-intelligence. For the most part, this approach does a fairly good job of making the bot imitate human-like behavior. However, for most games it is still easy to tell apart a bot from an actual human playing. If we want to make these bots behave more human-like, would it help to not build them using hand-coded rules? What if we simply let the bot figure out the game by making it learn from looking at how humans play?

Exploring this would require a game where it is possible to collect such data of humans playing the game ahead of developing the game itself. FIFA is one such game that let me explore this. Being able to play the game and record my in-game actions and decisions allowed me to train an end-to-end Deep Learning based bot without having to hard-code a single rule of the game.

The code for this project along with the trained model can be found here: https://github.com/ChintanTrivedi/DeepGamingAI_FIFA.git

Mechanism for playing the game

The underlying mechanism for such a bot needs to work without access to any of the game's internal code. Conveniently, the premise of this bot is that we do not want to look at any such in-game information anyway. A simple screenshot of the game window is all that is fed into the bot's engine. It processes this visual information and outputs the action it wants to take, which is communicated to the game using a key-press simulation. Rinse and repeat.
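
The article does not ship this loop as code here, but the capture-predict-act cycle might look roughly like this; it assumes the mss and pyautogui libraries, made-up key bindings, and a hypothetical predict_action function wrapping the trained model:

import numpy as np
import mss          # screen capture (assumed dependency)
import pyautogui    # simulated key presses (assumed dependency)

# Assumed key bindings; predict_action(frame) is a hypothetical wrapper around the model
ACTION_KEYS = {'pass': 'a', 'shoot': 's', 'through': 'w', 'cross': 'd'}
GAME_REGION = {'top': 0, 'left': 0, 'width': 1280, 'height': 720}

def play(predict_action):
    with mss.mss() as screen:
        while True:
            # 1. Grab a screenshot of the game window
            frame = np.array(screen.grab(GAME_REGION))[:, :, :3]
            # 2. The bot decides what to do from the pixels alone
            action = predict_action(frame)            # e.g. returns 'shoot'
            # 3. Communicate the decision to the game as a key press
            pyautogui.press(ACTION_KEYS[action])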


Now that we have a framework in place to feed input to the bot and to let its output control the game, we come to the interesting part: learning game intelligence. This is done in two steps: (1) using a convolutional neural network to understand the screenshot image, and (2) using long short-term memory networks to decide the appropriate action based on that understanding of the image.

STEP 1: Training a Convolutional Neural Network (CNN)

CNNs are well known for their ability to detect objects in an image with high accuracy. Add to that fast GPUs and intelligent network architectures and we have a CNN model that can run in real time.


To make the bot understand the input image, I use an extremely lightweight and fast CNN called MobileNet. The feature map extracted from this network represents a high-level understanding of the image, such as where the players and other objects of interest are located on the screen. This feature map is then used with a Single-Shot Multi-Box Detector to detect the players on the pitch along with the ball and the goal.
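
As a rough illustration (not the author's exact code), the MobileNet feature extractor could be instantiated in Keras as below; the input size and preprocessing are assumptions:

import numpy as np
from keras.applications.mobilenet import MobileNet, preprocess_input

# MobileNet without its classification head acts as a pure feature extractor
feature_extractor = MobileNet(input_shape=(224, 224, 3),
                              include_top=False,
                              weights='imagenet')

def extract_features(frame):
    """Turn a 224x224 RGB game frame into a high-level feature map."""
    x = preprocess_input(frame.astype('float32')[np.newaxis, ...])
    return feature_extractor.predict(x)   # shape (1, 7, 7, 1024)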


STEP 2: Training Long Short-Term Memory Networks (LSTM)

 


Now that we have an understanding of the image, we can go ahead and decide the move we want to make. However, we don't want to look at just one frame and take an action; we'd rather look at a short sequence of these images. This is where LSTMs come into the picture, as they are well known for being able to model temporal sequences in data. Consecutive frames are used as time steps in our sequence, and a feature map is extracted for each frame using the CNN model. These are then fed into two LSTM networks simultaneously.

The first LSTM performs the task of learning what movement the player needs to make. Thus, it’s a multi-class classification model. The second LSTM gets the same input and has to decide what action to take out of cross, through, pass and shoot: another multi-class classification model. The outputs from these two classification problems are then converted to key presses to control the actions in the game.
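
A minimal Keras sketch of the two classification heads described above; the sequence length, feature dimension and class counts are assumptions, not the project's actual values:

from keras.layers import Input, LSTM, Dense
from keras.models import Model

SEQ_LEN, FEAT_DIM = 5, 1024      # assumed: 5 consecutive frames, flattened CNN features
N_MOVES, N_ACTIONS = 9, 4        # assumed movement classes; cross/through/pass/shoot

features = Input(shape=(SEQ_LEN, FEAT_DIM))   # one CNN feature vector per frame

# LSTM 1: which movement to make (multi-class classification)
move_out = Dense(N_MOVES, activation='softmax')(LSTM(128)(features))
movement_model = Model(features, move_out)
movement_model.compile(optimizer='adam', loss='categorical_crossentropy')

# LSTM 2: which action to take (cross, through, pass, shoot)
action_out = Dense(N_ACTIONS, activation='softmax')(LSTM(128)(features))
action_model = Model(features, action_out)
action_model.compile(optimizer='adam', loss='categorical_crossentropy')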

These networks have been trained on data collected by manually playing the game and recording the input image and the target key press. One of the few instances where gathering labelled data does not feel like a chore!

Evaluating the bot’s performance

I don’t know what accuracy measure to use to judge the bot’s performance, other than to let it just go out there and play the game. Based on only 400 minutes of training, the bot has learned to make runs towards the opponent’s goal, make forward passes and take shots when it detects the goal. In the beginner mode of FIFA 18, it has already scored 4 goals in about 6 games, 1 more than Paul Pogba had in the 17/18 season at the time of writing.

Video clips of the bot playing against the inbuilt bot can be found on my YouTube channel, with the video embedded below.

Conclusion

My initial impressions on this approach of building game bots are certainly positive. With limited training, the bot has already picked up on basic rules of the game: making movements towards the goal and putting the ball in the back of the net. I believe it can get very close to human level performance with many more hours of training data, something that would be easy for the game developer to collect. Moreover, extending the model training to learn from real world footage of matches played would enable the game developers to make the bot’s behavior much more natural and realistic. Now if only anyone from EA sports was reading this…

 

Long-term memory neural network 1 – Introduction

Author: Daniele D’armiento

Cognitive skills, such as prediction, reckoning, ability to answer questions and to undertake actions, all involve retrieval of previously stored information.

The current challenges in the development of an artificial intelligence lie in being able both to store big data in memory and to retrieve it quickly.

But it is no news that computers are able to store huge amounts of data (today it is estimated that all of the world's data amounts to much more than 1 zettabyte, i.e. >> 10^21 bytes), and there is nothing surprising about the existence of large databases and SQL queries for every need.

Moreover, the human brain does not have memory as stable as silicon memory, but thanks to this very deficiency, our lack of stability and reliability, we can intuitively process big data and retrieve information. In this way we can overcome the so-called “curse of dimensionality”.

No research has yet unveiled the mysteries of the human brain, but nowadays the Deep Learning breakthrough brings us closer to a finer description of what intelligence is.

The field has developed a model (which originates from biological neural networks) that is able to learn different signals encoded in images and sounds, to classify them and to build inner representations, in order to organize big amounts of data and quickly recover information distributed among all the nodes of the network (as opposed to the old style of storing data at precise memory addresses). This is completely automatic, without the need for any sequential instructions or algorithms.

 

In the late 80s, well before Deep Learning arrived, computer speed was very low compared to today's. Low speed means long processing times; this is obvious but not trivial, since nobody would start experiments and simulations that could take such a long time. Yet the key to unveiling that world already existed.

The learning algorithm is, today as yesterday, backpropagation, which, together with gradient descent, allows us to find better and better approximations of the network weights, reaching a minimum of the error on the training data. It requires many update steps and many training samples to learn from. All of this amounts to a massive amount of computation and a lot of time.

This is an example of the error hypersurface in the space of the training parameters: the SGD algorithm searches for the parameters with the minimum error.
Source: https://thoughtsahead.com/2017/01/27/machine-learning-series-introduction-to-machine-learning-linear-regression-and-gradient-descent/
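
Concretely, each gradient descent step updates the weights in the direction that decreases the error: w_{t+1} = w_t - \eta \nabla_w E(w_t), where \eta is the learning rate and E(w) is the error measured on the training data.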

For this reason, old models had to be lightweight and couldn't exploit the big data necessary to obtain better performance, so it was customary in that period to build shallow architectures with fewer parameters and only one or two layers of neurons.

However, a more complex model is necessary to achieve a higher level of abstraction. It is also necessary to store data in complex structures able to preserve a huge number of observed characteristics and, at the same time, able to generalize, which means recognizing those features in objects never observed before.

So, we need to store more data not as simple repetition, but rather as the “Eureka!” light bulb that goes off when we comprehend a new unifying scheme describing many sides of the same entity, sides that previously seemed uncorrelated.

Left: a “Shallow” neural network, a one hidden layer network.  Right: a “Deep” neural network, with many hidden layers.  Source: https://www.quora.com/What-is-the-difference-between-deep-and-shallow-neural-networks

 

If a Deep Learning model is able to recognize objects in images, to understand words spoken by humans, or to answer written questions in a way that makes sense, then it means that the model can grasp the meaning without retrieving it from a database. It synthesizes a concept on its own, the same way we do.

This is possible thanks to the “deep” structure, which allows storing more information than the earlier, pre-2010 models.

GoogLeNet: a neural network used by Google for image recognition.
Source: https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html

Regarding linguistic models for translation, NLP and NLU, as well as conversational models, a big step forward was possible thanks to Deep Learning.

Performance can be measured, but it is easy to realize just by intuition when a certain model gives us a good translation or a decent chatbot for an assistance service.

When it does well, the cause of this intelligent behaviour is something different from a lookup table or a simple, quick algorithm; it needs a long-term memory to link words and phrases distant in time and in semantic space. N-gram statistics will certainly fail to do that.

A neural network with Episodic memory: Dynamic Memory Network (DMN)

Source: https://yerevann.github.io/2016/02/05/implementing-dynamic-memory-networks/

 

There are many types of neural networks for language models, ranging from Recurrent Neural Network (RNN) to Convolutional Neural Network (CNN).

We are going to show a new promising model: the Dynamic Memory Network (DMN).

This model is trained on examples each consisting of several input sentences, a question and an answer, and its strength lies in the so-called Episodic Memory, which can process the sentences in multiple steps, building a context from which it extracts the required information.

 

References

https://blogs.cisco.com/gov/internet-traffic-continues-to-explode-no-end-in-sight

https://thoughtsahead.com/2017/01/27/machine-learning-series-introduction-to-machine-learning-linear-regression-and-gradient-descent/

https://www.quora.com/What-is-the-difference-between-deep-and-shallow-neural-networks

https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html

https://yerevann.github.io/2016/02/05/implementing-dynamic-memory-networks/

Capsule Neural Networks

Author: Matteo Alberti

 

 

CNNs perform very well when the validation set is very close to the training data, but if the images contain rotations, tilts or, more generally, any kind of affine transformation, then CNNs perform poorly. As we have seen, until now the main solution has been to add more layers (in order to improve generalization), where the innermost layers extract simple features like small curves and edges, while higher layers combine the simple features into complex ones.

 

To achieve this computation in a reasonable time, we use a max-pooling operator after each convolutional layer. However, this operator throws away positional data, creating a sort of positional invariance.


 

So the recent advancement of Capsule Networks is to achieve not invariance (a small tolerance to changes) but equivariance (the internal representation adapts consistently with the transformation), replacing the max-pooling operator with a new one: dynamic routing.

 

 

 

Overview

A capsule is a group of neurons. The activity vector of a capsule represents the instantiation parameters when an object (or a part of it) is detected by the network.

The length of this activity vector represents the probability of existence, while the orientation of the vector encodes all the Pose-Matrix information (rotation, translation).

 

Pose-Matrix:

The Pose Matrix draws inspiration from rendering (computer graphics), that is, the construction of an image starting from a hierarchical representation of geometric data. In this case the goal is to build a sort of inverse rendering:

the image is deconstructed on the basis of a hierarchical representation, matching it with the extracted features.


When a capsule at a lower level is active, it makes predictions to higher-level capsules via transformation matrices. If multiple predictions agree, a higher-level capsule becomes active. This result is achieved by replacing max-pooling with an iterative routing-by-agreement mechanism: a lower-level capsule prefers to send its output to the higher-level capsules whose activity vectors have a large scalar product with the prediction coming from the lower-level capsule.

 

 

 

Architecture

 

The simple Capsule Network architecture is very shallow: only two convolutional layers and one fully-connected layer.

 

The first convolutional layer has 256 kernels (9×9, stride 1) with a ReLU activation. This is because we need a spatial reduction and to convert pixel intensities into the activities of local feature detectors, which are used as inputs to the primary capsules.

Primary capsules are the lowest level of multidimensional entities and correspond to the inverse-rendering process.

The second layer, of primary capsules, is a convolutional one: 32 channels of 8-dimensional capsules, where each primary capsule has 8 convolutional units (9×9 kernels with stride 2). Each capsule corresponds to a feature. The primary capsules give a 32×6×6 grid of capsule outputs, each an 8D vector, where the capsules inside each 6×6 grid share their weights.

The final layer, called DigitCaps, produces one 16D capsule per target class (like a softmax layer, but with the neurons replaced by capsules).
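
A rough Keras sketch of these first two layers for 28×28 MNIST inputs (only an illustration; the squash non-linearity and the DigitCaps routing layer described below are omitted, since they are not standard Keras layers):

from keras.layers import Input, Conv2D, Reshape
from keras.models import Model

x = Input(shape=(28, 28, 1))

# Layer 1: standard convolution, 256 kernels of 9x9, stride 1, ReLU  -> 20x20x256
conv1 = Conv2D(256, (9, 9), strides=1, activation='relu')(x)

# Layer 2 (primary capsules): convolution with 32*8 channels, 9x9 kernels, stride 2 -> 6x6x256
primary = Conv2D(32 * 8, (9, 9), strides=2)(conv1)

# Regroup into 32*6*6 = 1152 capsules of dimension 8
primary_caps = Reshape((32 * 6 * 6, 8))(primary)

# The squash non-linearity (next section) and the DigitCaps routing layer would follow here
capsnet_front = Model(x, primary_caps)
capsnet_front.summary()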

 

So the main differences are given by:

 

  • Replace neurons with capsules (and so scalars with vectors)
  • A new activation function (able to work with vectors)
  • Replace max-pooling with dynamic routing

 

 

 

Squash Function

 

As we have seen, we want the length of an activity vector to represent the probability of existence, therefore we use a non-linear “squashing” function that performs a sort of normalization between zero and one:

v_j = \frac{\|s_j\|^2}{1 + \|s_j\|^2} \frac{s_j}{\|s_j\|}

where s_j is the total input to capsule j and v_j is its output. Vectors near zero are shrunk to almost zero, while long vectors are squashed to a length slightly below one; in both cases the function keeps the orientation of the extracted features.
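
A small NumPy sketch of this squashing non-linearity, following the formula above:

import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Map a capsule's raw output s to a vector with norm in [0, 1), preserving its orientation."""
    sq_norm = np.sum(np.square(s), axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)              # ~0 for short vectors, ->1 for long ones
    return scale * s / np.sqrt(sq_norm + eps)

print(np.linalg.norm(squash(np.array([0.1, 0.0]))))   # ~0.0099 (shrunk to almost zero)
print(np.linalg.norm(squash(np.array([10.0, 0.0]))))  # ~0.990  (slightly below one)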

 


 

 

Dynamic Routing

 

Dynamic routing isn't just a smarter max-pooling operator that preserves spatial information; it also gives us the opportunity to preserve the hierarchy of the parts. It is an alternative way to do the forward pass.

 

Dynamic routing is an iterative process that connects capsules with similar activities.

 


In every iteration, each capsule computes a matrix multiplication with the outputs of the lower-level capsules, keeping only those with a higher norm and a similar orientation.

 

 

 

 

 

So s_j is given by: s_j = \sum_i c_{ij} \hat{u}_{j|i}

where \hat{u}_{j|i} is the prediction that the lower-level capsule i makes for the upper-level capsule j (computed via the transformation matrices) and c_{ij} is a non-negative scalar. For each capsule i at the level below, the sum of all the c_{ij} is equal to one (\sum_j c_{ij} = 1), and for each capsule the number of weights is equal to the number of capsules at the upper level.

 

So with dynamic routing we first connect each capsule at one level with all the capsules at the upper level, like a fully-connected layer; then, through the iterative process, we create a routing between capsules, like a sort of intelligent dropout, keeping only the most likely connections.
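
A NumPy sketch of the routing iterations described above; shapes and names are chosen for illustration, with u_hat[i, j] playing the role of the prediction \hat{u}_{j|i} of lower capsule i for upper capsule j:

import numpy as np

def squash(s, axis=-1, eps=1e-8):
    sq_norm = np.sum(np.square(s), axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def routing(u_hat, n_iterations=3):
    """u_hat: predictions of shape (n_lower, n_upper, dim_upper)."""
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))                 # routing logits
    for _ in range(n_iterations):
        c = softmax(b, axis=1)                       # coupling coefficients, sum_j c_ij = 1
        s = (c[:, :, None] * u_hat).sum(axis=0)      # weighted sum over lower capsules
        v = squash(s)                                # output of each upper capsule
        b += (u_hat * v[None, :, :]).sum(axis=-1)    # agreement = scalar product
    return v

# e.g. 1152 primary capsules predicting 10 digit capsules of dimension 16
v = routing(np.random.randn(1152, 10, 16))
print(v.shape)   # (10, 16)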

 

 

 

Training of weights and Loss Function

 

When the forward pass is completed and fixed, we are going to train our weights.

We have two neural networks to do this step:

 

  1. The first network, during the backpropagation step, maximizes the norm of the capsule of the target class (L^i_\mu)
  2. A second network uses an MLP (like an autoencoder) to reconstruct the input image (L^i_p)

 

So the goal of these two networks will be to minimize the following function:

Loss function: L^i = L^i_\mu + \rho L^i_p, where the reconstruction term L^i_p is weighed by a small coefficient \rho (\rho = 0.005)
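
For reference, L^i_\mu in the original CapsNet paper (Sabour et al., 2017) is the margin loss, computed for each digit capsule k as:

L_k = T_k \max(0, m^+ - \|v_k\|)^2 + \lambda (1 - T_k) \max(0, \|v_k\| - m^-)^2

where T_k = 1 when class k is present, m^+ = 0.9, m^- = 0.1 and \lambda = 0.5.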

 

 

 

Reconstruction and Regularization

 

As we have seen in the training of the weights, we use an MLP as a secondary network:

During the training step, we mask out everything except the activity vector of the correct digit capsule (the predicted class); then, since the activity vector contains all the information from the Pose-Matrix, we reconstruct the input image.

 

 

The output of the digit capsule is fed into a decoder (MLP) that models the pixel intensities; we then minimize the sum of squared differences between the decoder output and the input pixel intensities.
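
In the original paper the reconstruction decoder is a small MLP (512, 1024 and 784 units); a Keras sketch for 16D digit capsules and 28×28 images could look like this:

from keras.layers import Input, Dense
from keras.models import Model

masked_capsule = Input(shape=(16,))            # activity vector of the correct digit capsule
h = Dense(512, activation='relu')(masked_capsule)
h = Dense(1024, activation='relu')(h)
reconstruction = Dense(784, activation='sigmoid')(h)   # 28*28 pixel intensities

decoder = Model(masked_capsule, reconstruction)
decoder.compile(optimizer='adam', loss='mse')  # sum of squared differences up to a scaling factor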

 

 

 

 

Benchmarking

 

Until now Capsule Networks, while achieving state-of-the-art results in image classification, have been tested only on MNIST data. But there are some relevant considerations to make, analysing not only the accuracy achieved but also many other factors and properties of the model.

We compare Capsule Networks with two different combinations of parameters:

  • number of routing iterations
  • reconstruction (without reconstruction, the loss function becomes L^i = L^i_\mu)

and one optimized convolutional neural network (Baseline).

 

While Capsule Networks achieve the best results, the most relevant information is given by:

 

                                     Capsule Networks   CNN (Baseline)
Number of Parameters                 8.2M               34.4M
Affine Transformation Robustness     79%                66%

 

The number of parameters is more than 4 times lower, and the equivariance properties of capsules allow us to achieve better results on affine transformations, thanks to improved generalization.

 

Summing up the capsule properties:

 

  • Preserve a hierarchy of parts and spatial information
  • Robustness to affine transformations
  • Need less data thanks to their generalization properties
  • High discriminatory capacity on overlapping objects

 

But Capsule Networks have simply not yet been tested on other datasets (large datasets like ImageNet), and the routing-by-agreement process slows down the training step.