
A Tutorial on Linear Algebra and Geometry (Part 2)

 Author: Paolo Caressa

 

Points, vectors and their algebra

So far we dealt with points as identified with pairs (or triples, for space) of real numbers: as far as Machine Learning is concerned, we are interested in finite sets of points, which represent particular objects to classify or phenomena to correlate as points in a Cartesian space. However, to find out some regular behaviour or pattern for those points, we are often interested in lines, planes and also more complex geometric figures, such as conics or quadrics.

We have already seen, by an example, that some geometric figures may also be represented by single equations or systems of equations. Let us restrict ourselves to lines and planes, to understand in general how to represent them, since those representations are often used in Machine Learning.

 

A line is, intuitively speaking, a set of collinear points: one of Euclid's axioms claims that given two distinct points there is exactly one line passing through them, so that two of its points suffice to identify a line uniquely.

To understand how to do that in the Cartesian plane, let us consider two points P ≠ Q, whose coordinates are P = (x_1, y_1) and Q = (x_2, y_2). Let us define the vector from P to Q as the pair of real numbers

 

\overline{PQ} = (x_2 - x_1, y_2 - y_1)

 

 

Thus a vector is by definition the difference between two points, where by "difference" we mean the coordinate-wise difference.

 

From the IT point of view, we may imagine a point as an array (or list) with two elements (which are numbers). Usually an array A = [x,y] is indexed starting from 0, thus A[0] = x and A[1] = y. Then, if A and B are arrays, the vector \overline{AB} may be represented by the array [B[0] - A[0], B[1] - A[1]].
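For instance, here is a minimal sketch of this representation in Python (names chosen here just for illustration):

P = [0, 1]
Q = [2, -1]

def vector(A, B):
    # vector from point A to point B: the coordinate-wise difference B - A
    return [B[0] - A[0], B[1] - A[1]]

PQ = vector(P, Q)
print(PQ)   # => [2, -2]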

 

 

Why do we call a difference of points a "vector"? Because it is an object which has direction, sense and magnitude, exactly like the vectors used in Physics.

Its direction is just the line passing through P and Q; its sense is from P toward Q (the opposite vector would be \overline{QP}, which runs from Q toward P: what are its coordinates?).

The magnitude is the distance between the points P and Q. Therefore a vector determines a line, and a sense on it: of course there are infinitely many vectors sharing the same direction (the same line).

 

For example, the line passing through P = (0,1) and Q = (2,-1) is determined by the vector \overline{PQ} = (2,-2), but also by the vector (1,-1), which has the same direction but a different magnitude.

In general, for each number a ≠ 0, the vector (-a, a) induces this line. We say that two vectors are parallel if one is a multiple of the other by a non-zero scale factor.

Indeed, another way to determine a line is to pick one of its points P and to provide a vector \overline{v}, so as to be able to express all the line's points as P translated by vectors parallel to \overline{v}.

 

X = P + t \overline{v}, \qquad t \in \mathbb{R}

 

This is called the parametric equation of the line, since it expresses the generic point X on the line via a parameter t which varies through all real numbers.

Notice that we used the operation of the sum between a point and a vector, defined as follows:

 

P + \overline{v} = (x + v_1, y + v_2) \qquad \text{for } P = (x, y), \ \overline{v} = (v_1, v_2)
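Continuing the Python sketch above (still illustrative, not a library API), the sum of a point and a vector lets us enumerate points of the line X = P + t v:

def translate(P, v):
    # sum of a point and a vector: coordinate-wise addition
    return [P[0] + v[0], P[1] + v[1]]

P = [0, 1]
v = [2, -2]

# points of the line X = P + t*v for a few values of the parameter t
points = [translate(P, [t * v[0], t * v[1]]) for t in (-1, 0, 0.5, 1, 2)]
print(points)   # => [[-2, 3], [0, 1], [1.0, 0.0], [2, -1], [4, -3]]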

 

 

Although vectors are useful to describe lines (but also planes and, in higher dimensions, "hyperplanes"), they are interesting in themselves because of their algebraic properties, such as:

  1. Vectors may be added to get new vectors according to the rule

(v_1, v_2) + (w_1, w_2) = (v_1 + w_1, v_2 + w_2)

 

 

  2. Vectors may be multiplied by a number to get a new vector, according to the rule a (v_1, v_2) = (a v_1, a v_2); these two operations satisfy the usual algebraic properties (commutativity and associativity of the sum, distributivity of the product over the sum, and so on).

By a vector space, we mean a set of elements equipped with two operations, a sum between elements and a multiplication by a number, which satisfy the previous properties shared by vectors in the Cartesian plane defined as differences of points.

Actually, in Machine Learning, one always uses finite-dimensional vector spaces whose vectors are expressed in coordinates, thus vectors are identified with n-tuples of numbers (their coordinates). This is a major element of confusion for beginners since both points and vectors in the Cartesian space are represented as a pair of numbers.

 


However, even if they are represented in the same way, points and vectors are conceptually different objects. A point identifies a single location in space, while a vector identifies a displacement along a certain direction, with a given sense and magnitude. Points and vectors may be put in one-to-one correspondence as follows: to a point P we associate the vector \overline{OP}, which starts from the origin O and ends in P; to a vector \overline{v}, instead, we may associate the point O + \overline{v}.

 

 

This distinction between points and vectors is often overlooked, but it is nevertheless important, also because it helps in understanding applications of the theory: for example, in the following picture some words of a document corpus are represented, mapped by means of a Machine Learning algorithm to points in the plane:

 

[Figure: words of a document corpus mapped to points of the plane]

 

These are points representing words. What about vectors in this case?

Let us take the vector which displaces "pizza" to "Italy" and let us apply it to the point "sushi": we get the point "Japan" (up to a certain approximation). We infer that this vector represents the answer to a question: "given a national dish, which is the corresponding nation?".

In this case, it is clear that points and vectors represent distinct concepts.
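As a toy Python sketch of this idea (the 2-D coordinates below are invented for illustration, not taken from a real embedding):

import numpy as np

pizza = np.array([1.0, 2.0])    # hypothetical embedding of "pizza"
italy = np.array([4.0, 3.0])    # hypothetical embedding of "Italy"
sushi = np.array([1.5, 0.5])    # hypothetical embedding of "sushi"

dish_to_nation = italy - pizza  # the "national dish -> nation" vector
print(sushi + dish_to_nation)   # lands near the point for "Japan"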

 

 

Another fundamental quantity attached to a pair of vectors \overline{v} and \overline{w} is the angle \theta between them, related to the scalar (dot) product \overline{v} \cdot \overline{w} = v_1 w_1 + v_2 w_2 and to the vectors' lengths by

cos\theta = \frac{\overline{v} \cdot \overline{w}}{\|\overline{v}\| \, \|\overline{w}\|}

 

By applying the “arccosine” function, it is possible to compute this angle starting from the scalar product between two vectors and from their lengths: the geometrical meaning of angle for this quantity is explained via the trigonometric interpretation of the cosine function.

 

 

Recall that cos\theta is a number greater than or equal to -1 and less than or equal to 1, such that:

  • If the two vectors have the same direction and the same sense, then the cosine is equal to 1.
  • If the two vectors have the same direction but opposite sense, then the cosine is equal to -1.
  • If the cosine is equal to zero, the two vectors are said to be orthogonal.

 


Therefore, while the distance measures the nearness of two points (the nearer they are, the closer their distance is to zero), the cosine measures the similarity between the directions and senses of the two vectors: the more the two vectors are aligned along the same line, the closer the absolute value of their cosine is to one.

This cosine similarity is often employed in Machine Learning to classify objects in vector spaces.
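A minimal numpy sketch of the cosine similarity (assuming both vectors are non-zero):

import numpy as np

def cosine_similarity(v, w):
    # cos(theta) = (v . w) / (|v| |w|), a number in [-1, 1]
    return np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))

print(cosine_similarity([1, 0], [2, 0]))    # =>  1.0  same direction and sense
print(cosine_similarity([1, 0], [-3, 0]))   # => -1.0  opposite sense
print(cosine_similarity([1, 0], [0, 5]))    # =>  0.0  orthogonal vectors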

 

The N-dimensional Cartesian space

So far, in this crash course on Cartesian geometry and linear algebra, we restricted ourselves to the cases of dimension N = 2 and N = 3, so as to be able to draw pictures and to develop the concepts in a somewhat familiar environment: but in real-life applications, the dimension of vector spaces may be quite high.

In those cases, we have to give up geometric intuition, although all the formulas and concepts developed so far generalize trivially to higher dimensions.

Indeed such formulas depend on sums and differences of point coordinates and vector components: whether those coordinates number 2 or 20,000 makes no real difference. Better still, all the theory is easily and efficiently implemented on a computer, which can deal with points and vectors in high-dimensional spaces without problems, since they are represented as arrays of numbers.

For example, let us consider the concept of a hyperplane: in the plane it coincides with the concept of a line, and in space with the concept of a plane.

Let us fix a dimension N and let us consider the Cartesian space R^N.

A line in this space is, as in dimension 2, determined by a pair of points, or by a point and a vector: the parametric equation is just the same as in the dimension 2 case.

If N > 2 we may also consider parametric equations of the following form:

 

X = P + a \overline{v} + b \overline{w}

In this case, we have two parameters which vary independently, so that, intuitively, the set of points X which satisfy this equation, when a and b vary over all real numbers, corresponds to the set of pairs (a, b), and thus to a plane. In particular, it is a two-dimensional object.

 

Actually this is not always true: for example if  \overline{v}= \overline{w} then the parametric equation becomes

X = P + a \overline{v} + b \overline{v} = P + (a + b) \overline{v}

which actually describes a line.

Therefore, if we write a parametric equation with many parameters, the dimension of the set of points described by the equation depends on the relations between the vectors which appear in it. In the previous case, if the vectors \overline{v} and \overline{w} are parallel, then the equation represents a line and not a plane.
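In numpy terms, the dimension of the object described by such an equation is the rank of the matrix whose rows are the direction vectors; a quick sketch:

import numpy as np

v = np.array([1.0, 2.0, 0.0])
w = np.array([2.0, 4.0, 0.0])   # parallel to v

# rank 1: the equation X = P + a*v + b*w describes a line, not a plane
print(np.linalg.matrix_rank(np.vstack([v, w])))   # => 1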

If in general, in the N-dimensional space, we write a parametric equation with N – 1 parameters, we get:

 

X = P + a_1 \overline{v}_1 + a_2 \overline{v}_2 + \dots + a_{N-1} \overline{v}_{N-1}

 

If the vectors \overline{v}_1, ..., \overline{v}_{N-1} are linearly independent (none of them can be obtained from the others by sums and multiplications by numbers), this equation describes a hyperplane: an (N-1)-dimensional object inside the N-dimensional space.

Matrices and their algebra

A major feature of linear algebra is the efficiency and universality of its numerical methods. Actually, it suffices to implement a single algorithm (or one of its variants), namely Gauss’ elimination, to be able to do practically everything in an effective way (solving equations, checking linear independence, etc.). These algorithms are available in each standard numerical computation library, such as Python’s numpy.linalg.
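For example, solving a small linear system with numpy.linalg, which relies on exactly this kind of elimination-based factorization under the hood:

import numpy as np

# solve the system 2x + y = 5, x - y = 1
A = np.array([[2.0, 1.0],
              [1.0, -1.0]])
b = np.array([5.0, 1.0])

x = np.linalg.solve(A, b)   # LU factorization, i.e. Gaussian elimination
print(x)                    # => [2. 1.]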

To close this tutorial (already far too long), it is worth introducing the key notion which is involved in each of those algorithms, and which is also crucial in the conceptual development of linear algebra: the concept of a matrix.

A matrix is just a table of numbers, and it may be considered as a bidimensional array. More formally, an n×m matrix is a table of numbers which are singly addressed by means of two indexes, i and j, where the first addresses the row and the second the column. At the crossing of row i and column j there is the number a_{ij} pointed to by those indexes (notice that in mathematics indexes usually run from 1, not from 0 as in computer science).

The tabular representation of a matrix is as follows:

A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1m} \\ a_{21} & a_{22} & \dots & a_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nm} \end{pmatrix}

A matrix such that n = m is said to be a square matrix.

In practice, a matrix is just a vector of length nm whose elements are displayed by rows rather than as a single sequence. However, this change of notation is fundamental to the use of these objects.

In particular, matrices enrich vector algebra with a new operation, a multiplication. In the first place let us notice that we may add matrices and multiply them by a single number to get again matrices of the same type:

(A + B)_{ij} = a_{ij} + b_{ij} \qquad (c A)_{ij} = c a_{ij}

 

Therefore, n×m matrices do form a vector space of dimension nm.

Recall that a vector may be multiplied by a number, to get a new vector and that two vectors may be multiplied to get a number (via the dot product). But we do not know how to multiply, say, a vector by itself to get a new vector.

If we write vectors as matrices, we can actually multiply them: indeed we may distinguish two kinds of vectors when written in matrix form, row vectors and column vectors. A row vector is a sequence of numbers written side by side; we have already met them, for example (1, 2, 0).

 

A column vector is a vector written from top to bottom as in

 

\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}

 

At first sight this is just a matter of notation but, if we interpret a vector as a particular kind of matrix, a row vector is a 1×N matrix, while a column vector is an N×1 matrix.

Now, given an n×m matrix A and an m×r matrix B, we may multiply A by B to get a new n×r matrix. The entry of the matrix AB addressed by indexes i and j is defined as:

 

(AB)_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj}

 

Notice that this is the dot product of the row vector given by the i-th row in A by the column vector given by the j-th column in B.

Example: let us multiply a 2×3 matrix times a 3×2 matrix (the number of columns of the first factor must match the number of rows of the second):

 

 

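For instance, in numpy (with entries chosen here just for illustration):

import numpy as np

A = np.array([[1, 2, 0],    # 2x3 matrix
              [0, 1, 3]])
B = np.array([[1, 0],       # 3x2 matrix
              [2, 1],
              [0, 4]])

print(A @ B)
# [[ 5  2]
#  [ 2 13]]
# e.g. the entry at (0, 0) is 1*1 + 2*2 + 0*0 = 5: the dot product
# of the first row of A with the first column of B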

 

Now we come back to vectors: we may multiply a row vector times a column vector to get a 1×1 matrix (which is just a number) and this is the dot product. But we can also multiply an N×1 column vector times a 1×N row vector as matrices and get an N×N matrix, as in:

 

\begin{pmatrix} x_1 \\ \vdots \\ x_N \end{pmatrix} \begin{pmatrix} y_1 & \dots & y_N \end{pmatrix} = \begin{pmatrix} x_1 y_1 & \dots & x_1 y_N \\ \vdots & \ddots & \vdots \\ x_N y_1 & \dots & x_N y_N \end{pmatrix}

 

However, in this way we multiply two vectors belonging to N-dimensional spaces and we get a vector in a vector space with a different dimension, either 1 or N².

The identity matrix is the square matrix whose entries are zero except for the ones on the diagonal, which are 1 (the diagonal elements of a matrix are those with row index equal to column index: a_{ii}). For example, the 3×3 identity matrix is

 

I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}

 

As the name suggests, on multiplying a matrix A times the identity matrix we still get A. Moreover, the matrix product is both associative and distributive with respect to the matrix sum.

However, matrix algebra displays a particular and interesting feature: the matrix product is not commutative in general. Thus AB is generally different from BA (indeed BA may well be meaningless: for an n×m matrix A and an m×r matrix B, the product BA is defined only when r = n).

For example:

A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}: \qquad AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \neq \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = BA

 

 

Another typical operation is the multiplication of an n×m matrix times a column vector with m components: the result is a column vector with n components.
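A numpy sketch of this matrix-vector product:

import numpy as np

A = np.array([[1, 2, 0],    # n x m matrix with n = 2, m = 3
              [0, 1, 3]])
v = np.array([1, 1, 2])     # column vector with m = 3 components

print(A @ v)                # => [3 7], a column vector with n = 2 components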

Functional programming for deep learning

Author: Joyce Xu

Before I started my most recent job at ThinkTopic, the concepts of “functional programming” and “machine learning” belonged to two different worlds entirely. One was a programming paradigm surging in popularity as the world turned towards simplicity, composability, and immutability to maintain complex scaling applications; the other was a tool to teach computers to autocomplete doodles and make music. Where was the overlap?

The more I worked with the two, the more I began realizing that the overlap is both practical and theoretical. Firstly, machine learning is not a stand-alone endeavor; it needs to be rapidly incorporated into complex scaling applications in industry. Secondly, machine learning — and deep learning in particular — is functional by design. Given the right ecosystem, there are several compelling reasons to perform deep learning in an entirely functional manner:

  • Deep learning models are compositional. Functional programming is all about composing chains of higher-order functions to operate over simple data structures. Neural nets are designed the same way, chaining together function transformations from one layer to the next to operate over a simple matrix of input data. In fact, the entire process of deep learning can be viewed as optimizing a set of composed functions, meaning the models themselves are intrinsically functional.
  • Deep learning components are immutable. When functions operate over the input data, the data is not changed; a new set of values is output and passed on. Furthermore, when weights are updated, they do not need to be "mutated" — they can just be replaced by a new value. In theory, the updates to the weights can be applied in any order (i.e. they are not dependent on one another), so there is no need to keep track of a sequential, mutable state.
  • Functional programming offers easy parallelism. Most importantly, functions that are pure and composable are easy to parallelize. Parallelism means more speed and more compute power. Functional programming gives us concurrency and parallelism at essentially no cost, making it much easier to work with large, distributed models in deep learning.

There are many theories and perspectives regarding the combination of functional programming and deep learning, from mathematical arguments to practical overviews, but sometimes it’s most convincing (and useful) just to see it in practice. Here at ThinkTopic, we’ve been developing an open-source machine learning library called Cortex. For the rest of this post, I will introduce some ideas behind functional programming and put them to use in a Cortex deep learning model for anomaly detection.

Clojure Basics

Before we continue on our Cortex tutorial, I want to introduce some basics of Clojure. Clojure is a functional programming language that’s really good at two things: concurrency and data processing. Fortunately for us, both of those things are incredibly useful for machine learning. In fact, one of the primary reasons we use Clojure for machine learning is the fact that day-to-day work in preparing datasets for training (data manipulation, processing, etc.) can easily outweigh the work of implementing the algorithms, especially when we have a solid library such as Cortex for learning. Using Clojure and .edn (instead of C++ and protobuf), we can gain leverage and velocity on ML projects.

For a more in-depth introduction to the language, take a look at the community guide here.

On with the basics: Clojure code is made up of a bunch of expressions that are evaluated at run-time. These expressions are wrapped in parentheses, and are typically treated as function calls.

(+ 2 3)          ; => 5
(if false 1 0)   ; => 0

There are 4 basic collection data structures: vectors, lists, hash-maps, and sets. Commas are treated as whitespace, so they are typically omitted.

[1 2 3]            ; vector (ordered)
'(1 2 3)           ; list (ordered)
{:a 1 :b 2 :c 3}   ; hashmap or map (unordered)
#{1 2 3}           ; set (unordered, unique values)

The single quote in front of the list simply prevents it from being evaluated as an expression.

Clojure also comes with many, many built-in functions to operate over these data structures. Part of the beauty of Clojure is that it was designed to have many functions for very few data types, as opposed to having a few specialized functions for each of many data types. Being an FP language, Clojure supports higher-order functions, meaning functions can be passed around as arguments to other functions.

(count [:a :b :c])           ; => 3

(range 5)                    ; => (0 1 2 3 4)

(take 2 (drop 5 (range 10))) ; => (5 6)

(:b {:a 1 :b 2 :c 3})        ; use keyword as function => 2

(map inc [1 2 3])            ; map and increment => (2 3 4)

(filter even? (range 5))     ; filter collection based off predicate => (0 2 4)

(reduce + [1 2 3 4])         ; apply + to first two elements, then apply + to that result and the 3rd element, and so forth => 10

Of course, we can also write our own functions in Clojure, using defn. Clojure function definitions follow the form (defn fn-name [params*] expressions), and they always return the value of the last expression in the body.

(defn add2
  [x]
  (+ x 2))

(add2 5)             ; => 7

let expressions create and bind variables within the lexical scope of the "let". That is, in the expression (let [a 4] (...)), the variable "a" takes on a value of 4 inside (and only inside) the inner parentheses. These variables are called "locals."

(defn square-and-add
  [a b]
  (let [a-squared (* a a)
        b-squared (* b b)]
    (+ a-squared b-squared)))

(square-and-add 3 4)       ; => 25

That’s it for the basics! Now that we’ve learned some Clojure, let’s put the fun in functional programming and get back to some ML.

Cortex

Cortex is written in Clojure, and is currently one of the largest and fastest-growing machine learning libraries that uses a functional programming language. The rest of this post will walk through how to build a state-of-the-art classification model in Cortex, and the functional programming paradigms and data augmentation techniques required to do so.

Data Preprocessing

Our dataset is going to be the credit card fraud detection data provided by Kaggle here. It turns out this dataset is incredibly imbalanced, containing only 492 positive fraud cases out of 284,807. That’s 0.172%. This is going to cause problems for us later, but first let’s just take a look at the data and see how the model does.

In order to ensure anonymity of personal data, all the original features except “time” and “amount” have already been transformed to PCA components (where each entry represents a new variable that contains the most relevant information from the raw data). A little data exploration will show that the first “time” variable is fairly uninformative, so we’ll drop that as we’re reading in the data. Here is what our initial code looks like:

 

(ns fraud-detection.core
  (:require [clojure.java.io :as io]
            [clojure.string :as string]
            [clojure.data.csv :as csv]
            [clojure.core.matrix :as mat]
            [clojure.core.matrix.stats :as matstats]
            [cortex.nn.layers :as layers]
            [cortex.nn.network :as network]
            [cortex.nn.execute :as execute]
            [cortex.optimize.adadelta :as adadelta]
            [cortex.optimize.adam :as adam]
            [cortex.metrics :as metrics]
            [cortex.util :as util]
            [cortex.experiment.util :as experiment-util]
            [cortex.experiment.train :as experiment-train]))

(def orig-data-file "resources/creditcard.csv")
(def log-file "training.log")
(def network-file "trained-network.nippy")

;; Read input csv and create a vector of maps {:data [...] :label [...]},
;; where each map represents one training instance in the data
(defonce create-dataset
  (memoize
   (fn []
     (let [credit-data (with-open [infile (io/reader orig-data-file)]
                         (rest (doall (csv/read-csv infile))))
           data (mapv #(mapv read-string %)
                      (map #(drop 1 %) (map drop-last credit-data))) ; drop time and label
           labels (mapv #(util/idx->one-hot (read-string %) 2)
                        (map last credit-data))
           dataset (mapv (fn [d l] {:data d :label l}) data labels)]
       dataset))))

 

Cortex neural nets expect input data in the form of maps, such that each map represents a single labeled data point. For example, a classification dataset could look like [{:data [12 10 38] :label "cat"} {:data [20 39 3] :label "dog"} ... ]. In our create-dataset function, we read in the csv data file, designate all but the last column to be the "data" (or features), and designate the last column to be the labels. In the process, we turn the labels into one-hot vectors (e.g. [0 1 0 0]) based on the classification class, because the last softmax layer of our neural net returns a vector of class probabilities, not the actual label. Finally, we zip these two variables together into a vector of maps and return it as the dataset.

Model Description

Creating a model in Cortex is fairly straightforward. First, we’re going to define a map of hyper-parameters to be used later during training. Then, to define a model, we simply string the layers together:

 

(def params
  {:test-ds-size 50000            ;; total = 284807, test-ds ~= 17.5%
   :optimizer    (adam/adam)      ;; alternately, (adadelta/adadelta)
   :batch-size   100
   :epoch-count  50
   :epoch-size   200000})

(def network-description
  [(layers/input (count (:data (first (create-dataset)))) 1 1 :id :data) ; width, height, channels, args
   (layers/linear->relu 20) ; num-output & args
   (layers/dropout 0.9)
   (layers/linear->relu 10)
   (layers/linear 2)
   (layers/softmax :id :label)])

network-description is a vector of neural network layers. Our model consists of:

  • an input layer
  • a fully-connected (linear) layer with the ReLU activation function
  • a dropout layer
  • another fully-connected ReLU layer
  • an output layer of size 2 that is passed through the softmax function.

In both the first and the last layers, we need to specify an :id. This id refers to the key in the data map that our network should look at. (Recall that the data map looks like {:data […] :label […]}). For our input layer, we pass in the :data id to tell the model to grab the training data for its forward passes. In our final network layer, we provide :label as the :id, so the model can use the true label to calculate our error with.

Training and Evaluation

Here's where it gets a little more difficult. The train function itself is actually not so complicated — Cortex provides a nice, high-level call for training, so all we have to do is pass in our parameters (the network, training and testing dataset, etc.). The only caveat is that the system expects an effectively "infinite" dataset for training, but Cortex provides a function (infinite-class-balanced-dataset) to help us transform it.

(defn train
  "Trains network for :epoch-count number of epochs"
  []
  (let [network (network/linear-network network-description)
        [train-orig test-ds] (get-train-test-dataset)
        train-ds (experiment-util/infinite-class-balanced-dataset
                  train-orig
                  :class-key :label
                  :epoch-size (:epoch-size params))]
    (experiment-train/train-n network train-ds test-ds
                              :batch-size (:batch-size params)
                              :epoch-count (:epoch-count params)
                              :optimizer (:optimizer params)
                              :test-fn f1-test-fn)))

The complicated part is the f1-test-fn. Here's the thing: during training, the train-n function expects to be provided with a :test-fn that evaluates how well the model is performing and determines whether or not it should be saved as the "best network." There is a default test function that evaluates cross-entropy loss, but this loss value is not so easy to interpret, and it doesn't suit our imbalanced dataset very well. To get around this problem, we're going to write our own test function.

But how are we going to test the performance of the model? The standard metric in classification tasks is accuracy, but in a dataset as imbalanced as ours, accuracy is a fairly useless metric. Because positive (fraudulent) examples account for just 0.172% of our dataset, even a model that exclusively predicts negative examples would achieve 99.828% accuracy. 99.828% is a pretty darn good accuracy, but if Amazon really used this model, we may as well all turn to a life of crime and credit card fraud.

Thankfully, Amazon does not use this kind of model, and neither shall we. A much more telling set of metrics is precision, recall, and the F1 (or more generally F-beta) score.

In layman's terms, precision asks the question: "of all the examples I guessed were positive, what proportion were actually positive?" and recall asks the question: "of all the examples that were actually positive, what proportion did I correctly guess as positive?"

The F-beta score (a generalization of the traditional F1 score) is a weighted average of precision and recall, also measured on a scale of 0 to 1:

F-beta = (1 + beta²) * (precision * recall) / (beta² * precision + recall)

When beta = 1, we get the standard F1 measure of 2 * (precision * recall) / (precision + recall). In general, beta represents how many times more important recall should be than precision. For our fraud detection model, we’ll use the F1 score as our high score to track, but we’ll log the precision and recall scores as well to check the balance. This is our f1-test-fn:

 

(defn f-beta
  "F-beta score, default uses F1"
  ([precision recall] (f-beta precision recall 1))
  ([precision recall beta]
   (let [beta-squared (* beta beta)]
     (* (+ 1 beta-squared)
        (try                      ;; catch divide-by-0 errors
          (/ (* precision recall)
             (+ (* beta-squared precision) recall))
          (catch ArithmeticException e
            0))))))

(defn f1-test-fn
  "Test function that takes in two map arguments, global info and local epoch info.
  Compares F1 score of current network to that of the previous network,
  and returns map:
  {:best-network? boolean
   :network (assoc new-network :evaluation-score-to-compare)}"
  [;; global arguments
   {:keys [batch-size context]}
   ;; per-epoch arguments
   {:keys [new-network old-network test-ds]}]
  (let [batch-size (long batch-size)
        test-results (execute/run new-network test-ds
                                  :batch-size batch-size
                                  :loss-outputs? true
                                  :context context)
        ;; test metrics
        test-actual (mapv #(vec->label [0.0 1.0] %) (map :label test-ds))
        test-pred (mapv #(vec->label [0.0 1.0] % [1 0.9]) (map :label test-results))
        precision (metrics/precision test-actual test-pred)
        recall (metrics/recall test-actual test-pred)
        f-beta (f-beta precision recall)
        ;; if current f-beta is higher than the old network's, current is best network
        best-network? (or (nil? (get old-network :cv-score))
                          (> f-beta (get old-network :cv-score)))
        updated-network (assoc new-network :cv-score f-beta)
        epoch (get new-network :epoch-count)]
    (experiment-train/save-network updated-network network-file)
    (log (str "Epoch: " epoch "\n"
              "Precision: " precision "\n"
              "Recall: " recall "\n"
              "F1: " f-beta "\n\n"))
    {:best-network? best-network?
     :network updated-network}))

The function runs the current network on the test set, calculates the F1 score, and updates/saves the network accordingly. It also prints out our evaluation metrics at each epoch. If we run (train) in the REPL now, we get a high score that looks something like this:

Epoch: 30
Precision: 0.2515923566878981
Recall: 0.9186046511627907
F1: 0.395

Haha. That’s pretty embarrassingly bad.

Data Augmentation

Here’s the problem. Remember how I said our highly imbalanced dataset was going to cause issues for us later? The model currently does not have enough positive examples to learn from. When we call experiment-util/infinite-class-balanced-dataset in our train function, we’re actually creating hundreds of copies of each positive training instance to balance out the dataset. As a result, the model is effectively memorizing those feature values and not actually learning the distinction between the classes.

One way around this problem is through data augmentation, in which we generate additional, artificial data based on the examples we already have. In order to create realistic positive training examples, we are going to add random amounts of noise to the feature vectors of each of our existing positive examples. The amount of noise we add will be dependent on the variance of each feature across the positive class, such that features with a large variance will be augmented with a large amount of noise, and vice versa for features with small variances.

Here is our code for data augmentation:

(defonce get-scaled-variances
  (memoize
   (fn []
     (let [{positives true negatives false} (group-by #(= (:label %) [0.0 1.0]) (create-dataset))
           pos-data (mat/matrix (map #(:data %) positives))
           variances (mat/matrix (map #(matstats/variance %) (mat/columns pos-data)))
           scaled-vars (mat/mul (/ 5000 (mat/length variances)) variances)]
       scaled-vars))))

(defn add-rand-variance
  "Given vector v, add random vector based off the variance of each feature"
  [v scaled-vars]
  (let [randv (map #(- (* 2 (rand %)) %) scaled-vars)]
    (mapv + v randv)))

(defn augment-train-ds
  "Takes train dataset and augments positive examples to reach 50/50 balance"
  [orig-train]
  (let [{train-pos true train-neg false} (group-by #(= (:label %) [0.0 1.0]) orig-train)
        pos-data (map #(:data %) train-pos)
        num-augments (- (count train-neg) (count train-pos))
        augments-per-sample (int (/ num-augments (count train-pos)))
        augmented-data (apply concat
                              (repeatedly augments-per-sample
                                          #(mapv (fn [p] (add-rand-variance p (get-scaled-variances)))
                                                 pos-data)))
        augmented-ds (mapv (fn [d] {:data d :label [0 1]}) augmented-data)]
    (shuffle (concat orig-train augmented-ds))))

augment-train-ds takes our original train dataset, calculates the number of augmentations that have to be made to reach a 50/50 class balance, and applies those augmentations to our existing samples by adding a random noise vector (add-rand-variance) based on the allowed variance (get-scaled-variances). In the end, we concatenate the augmented examples back into the original dataset and return the balanced dataset.

During training, the model will be seeing an unrealistically large amount of positive examples, while the test set will still be only 0.172% positives. As a result, while the model may be able to learn the differences between the two classes better, it will over-predict positive examples during testing. In order to fix this, we can require a higher threshold of certainty to predict “positive” during testing. In other words, instead of requiring the model to be at least 50% certain that an example is positive in order to classify it as such, we can require it to be at least 70% certain. After some testing, I found the optimal value to be set at 90%. The code for this can be found in the vec->label function in the source code, and is called on line 31 of the f1-test-fn.
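The thresholding amounts to something like the following Python sketch (an illustration of the idea only, not Cortex's actual vec->label implementation):

def vec_to_label(probs, threshold=0.9):
    # predict "positive" only if the positive-class probability
    # clears the threshold; otherwise predict "negative"
    negative_prob, positive_prob = probs
    return 1 if positive_prob >= threshold else 0

print(vec_to_label([0.20, 0.80]))   # => 0  (not certain enough)
print(vec_to_label([0.05, 0.95]))   # => 1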

Using the new, augmented dataset for training, our high scores look something like this:

Epoch: 25
Precision: 0.8658536585365854
Recall: 0.8255813953488372
F1: 0.8452380952380953

Much better!

Conclusion

As always, the model can still be improved. Here are a few ideas for next steps:

  • Are all the PCA features informative? Take a look at the distribution of values for positive and negative examples across the features, and drop any features that do not help distinguish between the two classes.
  • Are there other neural net architectures, activation functions, etc. that perform better?
  • Are there different data augmentation techniques that would perform better?
  • How does model performance in Cortex compare to Keras/Tensorflow/Theano/Caffe?

The source code for the project can be found in its entirety here. I encourage you to try some of these next steps, test out new datasets, and explore different network architectures (we have a great image classification example for reference on conv nets). Cortex is pushing towards its 1.0 release, so if you have any thoughts, recommendations, or feedback, be sure to let us know. Happy hacking!

 

Deep Learning Italia – Rome – July 2018

1) Francesco Pugliese – Co-Founder Deep Learning Italia | Researcher at ISTAT
"A popular introduction to Neural Networks and Deep Learning."
Abstract: An introductory description of the artificial neuron model and its comparison with the biological neuron; the history of neural networks and how we arrived at today's Deep Learning; detractors vs supporters of neural networks and the victory of the supporters (Hinton, LeCun, Bengio). The successes of Deep Learning, and the rout of traditional Machine Learning, in fields such as Computer Vision, Natural Language Processing and games (Go, Chess, etc.).

2) Matteo Testi – Founder Deep Learning Italia
"Introduction to the Deep Learning Italia platform"
Abstract: An overview of the Deep Learning Italia community and of the tools developed so far and made available to it. An introduction to the features of the website www.deeplearningitalia.com: the tutorials, the question & answer section, the references, and the future developments.

3) Ayadi Ala Eddine – Data Scientist at InstaDeep UK – AI research intern at the University of Padua working on deep reinforcement learning, and a Kaggle expert with more than 3 years of experience in data science, machine learning and statistics gained by working on real-life problems; passionate about deploying predictive models and deep learning techniques.
"Generative Adversarial Networks – Tensorflow to build GANs"
Abstract: GANs have been one of the most interesting developments in deep learning and machine learning recently. Through an innovative combination of computational graphs and game theory, we will show how two models fighting against each other are able to co-train and generate new samples. Finally, we will end with a demo showing some of the cool things people have done using GANs, along with links to some of the important resources for getting deeper into these techniques.

 

https://www.eventbrite.it/e/biglietti-meetup-aperitech-di-deep-learning-italia-46881039451

Deep Learning for Object Detection: A Comprehensive Review

Author: Joyce Xu

 

 

 


 

With the rise of autonomous vehicles, smart video surveillance, facial detection and various people counting applications, fast and accurate object detection systems are rising in demand. These systems involve not only recognizing and classifying every object in an image, but localizing each one by drawing the appropriate bounding box around it. This makes object detection a significantly harder task than its traditional computer vision predecessor, image classification.

Fortunately, however, the most successful approaches to object detection are currently extensions of image classification models. A few months ago, Google released a new object detection API for Tensorflow. With this release came the pre-built architectures and weights for a few specific models:

  • SSD with MobileNets
  • SSD with Inception V2
  • R-FCN with ResNet 101
  • Faster R-CNN with ResNet 101
  • Faster R-CNN with Inception ResNet V2

In my last blog post, I covered the intuition behind the three base network architectures listed above: MobileNets, Inception, and ResNet. This time around, I want to do the same for Tensorflow’s object detection models: Faster R-CNN, R-FCN, and SSD. By the end of this post, we will hopefully have gained an understanding of how deep learning is applied to object detection, and how these object detection models both inspire and diverge from one another.

Faster R-CNN

Faster R-CNN is now a canonical model for deep learning-based object detection. It helped inspire many detection and segmentation models that came after it, including the two others we’re going to examine today. Unfortunately, we can’t really begin to understand Faster R-CNN without understanding its own predecessors, R-CNN and Fast R-CNN, so let’s take a quick dive into its ancestry.

R-CNN

R-CNN is the grand-daddy of Faster R-CNN. In other words, R-CNN really kicked things off.

R-CNN, or Region-based Convolutional Neural Network, consisted of 3 simple steps:

  1. Scan the input image for possible objects using an algorithm called Selective Search, generating ~2000 region proposals
  2. Run a convolutional neural net (CNN) on top of each of these region proposals
  3. Take the output of each CNN and feed it into a) an SVM to classify the region and b) a linear regressor to tighten the bounding box of the object, if such an object exists.

These 3 steps are illustrated in the image below:

[Figure: the three steps of R-CNN]

In other words, we first propose regions, then extract features, and then classify those regions based on their features. In essence, we have turned object detection into an image classification problem. R-CNN was very intuitive, but very slow.

Fast R-CNN

R-CNN's immediate descendant was Fast R-CNN. Fast R-CNN resembled the original in many ways, but improved on its detection speed through two main augmentations:

  1. Performing feature extraction over the image before proposing regions, thus only running one CNN over the entire image instead of 2000 CNN's over 2000 overlapping regions
  2. Replacing the SVM with a softmax layer, thus extending the neural network for predictions instead of creating a new model

The new model looked something like this:

 

[Figure: Fast R-CNN architecture]

As we can see from the image, we are now generating region proposals based on the last feature map of the network, not from the original image itself. As a result, we can train just one CNN for the entire image.

In addition, instead of training many different SVM’s to classify each object class, there is a single softmax layer that outputs the class probabilities directly. Now we only have one neural net to train, as opposed to one neural net and many SVM’s.

Fast R-CNN performed much better in terms of speed. There was just one big bottleneck remaining: the selective search algorithm for generating region proposals.

Faster R-CNN

At this point, we’re back to our original target: Faster R-CNN. The main insight of Faster R-CNN was to replace the slow selective search algorithm with a fast neural net. Specifically, it introduced the region proposal network (RPN).

Here’s how the RPN worked:

  • At the last layer of an initial CNN, a 3×3 sliding window moves across the feature map and maps it to a lower dimension (e.g. 256-d)
  • For each sliding-window location, it generates multiple possible regions based on k fixed-ratio anchor boxes (default bounding boxes)
  • Each region proposal consists of a) an “objectness” score for that region and b) 4 coordinates representing the bounding box of the region

In other words, we look at each location in our last feature map and consider k different boxes centered around it: a tall box, a wide box, a large box, etc. For each of those boxes, we output whether or not we think it contains an object, and what the coordinates for that box are. This is what it looks like at one sliding window location:

[Figure: the RPN at one sliding-window location, outputting 2k scores and 4k coordinates]

The 2k scores represent the softmax probability of each of the k bounding boxes being an "object." Notice that although the RPN outputs bounding box coordinates, it does not try to classify any potential objects: its sole job is still proposing object regions. If an anchor box has an "objectness" score above a certain threshold, that box's coordinates get passed forward as a region proposal.

Once we have our region proposals, we feed them straight into what is essentially a Fast R-CNN. We add a pooling layer, some fully-connected layers, and finally a softmax classification layer and bounding box regressor. In a sense, Faster R-CNN = RPN + Fast R-CNN.

 

[Figure: Faster R-CNN = RPN + Fast R-CNN]

Altogether, Faster R-CNN achieved much better speeds and a state-of-the-art accuracy. It is worth noting that although future models did a lot to increase detection speeds, few models managed to outperform Faster R-CNN by a significant margin. In other words, Faster R-CNN may not be the simplest or fastest method for object detection, but it is still one of the best performing. Case in point, Tensorflow’s Faster R-CNN with Inception ResNet is their slowest but most accurate model.

At the end of the day, Faster R-CNN may look complicated, but its core design is the same as the original R-CNN: hypothesize object regions and then classify them. This is now the predominant pipeline for many object detection models, including our next one.

R-FCN

Remember how Fast R-CNN improved on the original’s detection speed by sharing a single CNN computation across all region proposals? That kind of thinking was also the motivation behind R-FCN: increase speed by maximizing shared computation.

R-FCN, or Region-based Fully Convolutional Net, shares 100% of the computations across every single output. Being fully convolutional, it ran into a unique problem in model design.

On the one hand, when performing classification of an object, we want to learn location invariance in a model: regardless of where the cat appears in the image, we want to classify it as a cat. On the other hand, when performing detection of the object, we want to learn location variance: if the cat is in the top left-hand corner, we want to draw a box in the top left-hand corner. So if we’re trying to share convolutional computations across 100% of the net, how do we compromise between location invariance and location variance?

R-FCN’s solution: position-sensitive score maps.

Each position-sensitive score map represents one relative position of one object class. For example, one score map might activate wherever it detects the top-right of a cat. Another score map might activate where it sees the bottom-left of a car. You get the point. Essentially, these score maps are convolutional feature maps that have been trained to recognize certain parts of each object.

Now, R-FCN works as follows:

  1. Run a CNN (in this case, ResNet) over the input image
  2. Add a fully convolutional layer to generate a score bank of the aforementioned "position-sensitive score maps." There should be k²(C+1) score maps, with k² representing the number of relative positions to divide an object (e.g. 3² for a 3 by 3 grid) and C+1 representing the number of classes plus the background.
  3. Run a fully convolutional region proposal network (RPN) to generate regions of interest (RoI’s)
  4. For each RoI, divide it into the same k² “bins” or subregions as the score maps
  5. For each bin, check the score bank to see if that bin matches the corresponding position of some object. For example, if I’m on the “upper-left” bin, I will grab the score maps that correspond to the “upper-left” corner of an object and average those values in the RoI region. This process is repeated for each class.
  6. Once each of the k² bins has an “object match” value for each class, average the bins to get a single score per class.
  7. Classify the RoI with a softmax over the remaining C+1 dimensional vector
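To make steps 4-6 concrete, here is an illustrative numpy sketch of the position-sensitive scoring of a single RoI (shapes and names are invented for this example; it is not the paper's implementation):

import numpy as np

def rfcn_roi_scores(score_maps, roi, k, num_classes):
    # score_maps: shape (k*k*(C+1), H, W), one map per (bin, class) pair
    # roi: (row0, col0, row1, col1) in feature-map coordinates,
    #      assumed to span at least k pixels per side
    r0, c0, r1, c1 = roi
    C1 = num_classes + 1                           # classes + background
    rows = np.linspace(r0, r1, k + 1).astype(int)  # bin boundaries
    cols = np.linspace(c0, c1, k + 1).astype(int)
    votes = np.zeros((k * k, C1))
    for i in range(k):
        for j in range(k):
            b = i * k + j
            for c in range(C1):
                m = score_maps[b * C1 + c]         # the map for (bin, class)
                votes[b, c] = m[rows[i]:rows[i + 1], cols[j]:cols[j + 1]].mean()
    scores = votes.mean(axis=0)                    # average over the k*k bins
    e = np.exp(scores - scores.max())
    return e / e.sum()                             # softmax over C+1 classes

# toy usage: k = 3, 2 classes (+ background), 24x24 feature maps
maps = np.random.rand(3 * 3 * 3, 24, 24)
print(rfcn_roi_scores(maps, (3, 3, 21, 21), k=3, num_classes=2))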

Altogether, R-FCN looks something like this, with an RPN generating the RoI’s:

[Figure: R-FCN architecture, with an RPN generating the RoIs]

Even with the explanation and the image, you might still be a little confused on how this model works. Honestly, R-FCN is much easier to understand when you can visualize what it’s doing. Here is one such example of an R-FCN in practice, detecting a baby:

 

 

 

 

[Figure: an R-FCN detecting a baby]

Simply put, R-FCN considers each region proposal, divides it up into sub-regions, and iterates over the sub-regions asking: “does this look like the top-left of a baby?”, “does this look like the top-center of a baby?” “does this look like the top-right of a baby?”, etc. It repeats this for all possible classes. If enough of the sub-regions say “yes, I match up with that part of a baby!”, the RoI gets classified as a baby after a softmax over all the classes.

With this setup, R-FCN is able to simultaneously address location variance by proposing different object regions, and location invariance by having each region proposal refer back to the same bank of score maps. These score maps should learn to classify a cat as a cat, regardless of where the cat appears. Best of all, it is fully convolutional, meaning all of the computation is shared throughout the network.

As a result, R-FCN is several times faster than Faster R-CNN, and achieves comparable accuracy.

SSD

Our final model is SSD, which stands for Single-Shot Detector. Like R-FCN, it provides enormous speed gains over Faster R-CNN, but does so in a markedly different manner.

Our first two models performed region proposals and region classifications in two separate steps. First, they used a region proposal network to generate regions of interest; next, they used either fully-connected layers or position-sensitive convolutional layers to classify those regions. SSD does the two in a “single shot,” simultaneously predicting the bounding box and the class as it processes the image.

Concretely, given an input image and a set of ground truth labels, SSD does the following:

  1. Pass the image through a series of convolutional layers, yielding several sets of feature maps at different scales (e.g. 10×10, then 6×6, then 3×3, etc.)
  2. For each location in each of these feature maps, use a 3×3 convolutional filter to evaluate a small set of default bounding boxes. These default bounding boxes are essentially equivalent to Faster R-CNN's anchor boxes.
  3. For each box, simultaneously predict a) the bounding box offset and b) the class probabilities
  4. During training, match the ground truth box with these predicted boxes based on IoU. The best predicted box will be labeled a “positive,” along with all other boxes that have an IoU with the truth >0.5.
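Here is a minimal sketch of the IoU computation used for that matching, with boxes given as (x0, y0, x1, y1):

def iou(box_a, box_b):
    # intersection-over-union of two axis-aligned boxes
    x0 = max(box_a[0], box_b[0])
    y0 = max(box_a[1], box_b[1])
    x1 = min(box_a[2], box_b[2])
    y1 = min(box_a[3], box_b[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # => 0.142..., below the 0.5 cutoff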

SSD sounds straightforward, but training it has a unique challenge. With the previous two models, the region proposal network ensured that everything we tried to classify had some minimum probability of being an "object." With SSD, however, we skip that filtering step. We classify and draw bounding boxes from every single position in the image, using multiple different shapes, at several different scales. As a result, we generate a much greater number of bounding boxes than the other models, and nearly all of them are negative examples.

To fix this imbalance, SSD does two things. Firstly, it uses non-maximum suppression to group together highly-overlapping boxes into a single box. In other words, if four boxes of similar shapes, sizes, etc. contain the same dog, NMS would keep the one with the highest confidence and discard the rest. Secondly, the model uses a technique called hard negative mining to balance classes during training. In hard negative mining, only a subset of the negative examples with the highest training loss (i.e. false positives) are used at each iteration of training. SSD keeps a 3:1 ratio of negatives to positives.
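Greedy non-maximum suppression can be sketched in a few lines on top of the iou helper above (a simplified illustration, not a production implementation):

def nms(boxes, scores, iou_threshold=0.5):
    # keep the highest-scoring box, drop boxes that overlap it too much,
    # then repeat on the remainder
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))   # => [0, 2]: the two overlapping boxes collapse into one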

Its architecture looks like this:

 

[Figure: SSD architecture]

As I mentioned above, there are "extra feature layers" at the end that scale down in size. These varying-size feature maps help capture objects of different sizes. For example, here is SSD in action:

[Figure: SSD detections across feature maps of different scales]

In smaller feature maps (e.g. 4×4), each cell covers a larger region of the image, enabling them to detect larger objects. Region proposal and classification are performed simultaneously: given p object classes, each bounding box is associated with a (4+p)-dimensional vector that outputs 4 box offset coordinates and p class probabilities. In the last step, softmax is again used to classify the object.

Ultimately, SSD is not so different from the first two models. It simply skips the “region proposal” step, instead considering every single bounding box in every location of the image simultaneously with its classification. Because SSD does everything in one shot, it is the fastest of the three models, and still performs quite comparably.

Conclusion

Faster R-CNN, R-FCN, and SSD are three of the best and most widely used object detection models out there right now. Other popular models tend to be fairly similar to these three, all relying on deep CNN’s (read: ResNet, Inception, etc.) to do the initial heavy lifting and largely following the same proposal/classification pipeline.

At this point, putting these models to use just requires knowing Tensorflow’s API. Tensorflow has a starter tutorial on using these models here. Give it a try, and happy hacking!

 

 


The GTX 745 and the tensorflow – gpu installation on Windows

Author: Eleonora Bernasconi

 

NVIDIA GeForce GTX 745 Graphics Card specifications

Specifications: https://www.geforce.com/hardware/desktop-gpus/geforce-gtx-745-oem/specifications

CUDA Cores: 384

Base Clock (MHz): 1033

Memory Clock: 1.8 Gbps

Standard Memory Config: 4 GB

Memory Interface: DDR3

Memory Bandwidth (GB/sec): 28.8

 

Figure 01 – nvidia-smi for GPU monitoring

Open the command prompt and insert:

cd C:\Program Files\NVIDIA Corporation\NVSMI

nvidia-smi

N.B. The GPU utilization ranges between 92% and 94%; in the Windows Task Manager it shows as 70%.

Installing TensorFlow with GPU on Windows 10

Requirements

Python 3.5

Nvidia CUDA GPU. Make sure you do have a CUDA-capable NVIDIA GPU on your system.

Setting up the Nvidia GPU card

Install CUDA Toolkit 8.0 and cuDNN v5.1.

Download and install CUDA Toolkit

Toolkit 8.0 https://developer.nvidia.com/cuda-downloads

Example installation directory: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0

Download and install cuDNN

Install cuDNN version 5.1 for Windows 10: https://developer.nvidia.com/cudnn

Extract the cuDNN files and copy them into the Toolkit directory.

Environment variables

After installing the CUDA Toolkit, make sure that CUDA_HOME is set in the environment variables; otherwise add it manually.

Figure 02 – Environment variables CUDA_HOME, part 01

 

Figure 03 – Environment variables CUDA_HOME, part 02

Install Anaconda

Download : https://www.anaconda.com/download/

Create a new environment with the name tensorflow-gpu and the python version 3.5.2

conda create -n tensorflow-gpu python=3.5.2

N.B. If you find that you have incompatible versions, run this command to resolve the problem:

conda install -c conda-forge tensorflow-gpu

Anaconda will automatically install the required versions of CUDA, cuDNN and other packages.

Figure 04 – conda install -c conda-forge tensorflow-gpu

activate tensorflow-gpu

Figure 05 – activate tensorflow-gpu

 

Install tensorFlow

pip install tensorflow-gpu

Figure 06 – pip install tensorflow-gpu

Now you are done: you have successfully installed TensorFlow with GPU support!

Remember to run the command activate tensorflow-gpu to get into GPU mode!

Test GPU

python

import tensorflow as tf

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

 

Figure 07 – test GPU
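To confirm that operations actually land on the GPU, you can run a small computation in that session; with log_device_placement=True the console shows the device chosen for each op (e.g. device:GPU:0). A quick sketch using the same TensorFlow 1.x API as above:

import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
c = tf.matmul(a, b)

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))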

 

Test on CIFAR-10 with 10 epochs

Average Training Time per epoch: 150 sec

Total time: 25 min

Figure 08 – Test on CIFAR-10 with 10 epochs