## Manifold-based tools: ISOMAP algorithm

**Author:** Matteo Alberti

With this tutorial we open a miniseries dedicated to manifold-based dimensionality-reduction tools. Let's start by understanding what a manifold is and when it matters, without going deep into the underlying mathematics.

*"A manifold is a mathematical space that on a small scale resembles the Euclidean space of a specific dimension."*

So why and when do we need manifolds?

The answer is simple: *when linear methods fail*.

**Example:**

In linear methods (like PCA or MDS), data can only be summarized by a linear combination of features; this means that they cannot discover, for example, one-dimensional structures embedded nonlinearly in the data.

Let me show this with a figure:

**Linear methods** cannot discover more complex structures in the data, and by reducing with them we lose the geometry of the data (and information).

**Manifold-based** tools, on the other hand, succeed:

- ISOMAP
- Local Linear Embedding (LLE)

This is because our goal is to "preserve local geometry".

This is essential in many applications: in computer vision it helps with pose estimation, data compression, image de-noising, and missing-data interpolation.
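To make the contrast concrete, here is a minimal sketch (assuming scikit-learn is installed) comparing a linear projection (PCA) with a manifold-based one (Isomap) on scikit-learn's S-curve dataset; the dataset and parameter values are illustrative choices, not from the original text:

```python
from sklearn.datasets import make_s_curve
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# 3-D points lying on a 2-D manifold (the "S" sheet)
X, color = make_s_curve(n_samples=500, random_state=0)

# Linear: projects onto a plane, so the folds of the S overlap
X_pca = PCA(n_components=2).fit_transform(X)

# Manifold-based: unrolls the S, preserving local geometry
X_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

print(X_pca.shape, X_iso.shape)  # (500, 2) (500, 2)
```

Plotting both embeddings colored by `color` makes the difference visible: PCA superimposes distant parts of the sheet, Isomap keeps them apart.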

**ISOMAP**

So let's start with the first tool: Isometric feature mapping (ISOMAP).

This algorithm has **three main steps**:

- First, determine which points are neighbors on the manifold, based on (Euclidean) distance. For each point we connect either all points within a fixed radius (where we have to choose the radius) or, as in k-NN (the k-nearest-neighbors algorithm), its k nearest neighbors (where we have to choose k).

All neighborhood relations are represented as a weighted graph.

- In the second step, we estimate the geodesic distance between all pairs of points. Since we are working on a manifold, the shortest distance is given by the shortest path in the graph (computed, for example, with Dijkstra's algorithm, also used in routing/navigation).
- In the last step, we apply MDS (multidimensional scaling) to the matrix of graph distances in order to construct an embedding of the data in a d-dimensional (Euclidean) space that preserves the manifold's geometry. The coordinate vectors in the Euclidean space are chosen to minimize the cost function E = ||τ(D_G) − τ(D_Y)||_{L²}

where D_Y is the matrix of Euclidean distances in the embedding, D_G the matrix of graph distances, and ||·||_{L²} the L² matrix norm (the square root of the sum of squared matrix elements). **τ** is an operator that converts distances to inner products (which supports efficient optimization).

The global minimum is achieved by setting the Euclidean-space coordinates to the top **d** eigenvectors of the matrix τ(D_G).
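The three steps above can be sketched from scratch (a didactic helper, not the scikit-learn implementation; it assumes scikit-learn and SciPy are installed, and the dataset is again the illustrative S-curve):

```python
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def isomap_sketch(X, n_neighbors=10, n_components=2):
    # Step 1: weighted neighborhood graph (Euclidean distances to the k nearest neighbors)
    graph = kneighbors_graph(X, n_neighbors, mode='distance')
    # Step 2: geodesic distances = shortest paths in the graph (Dijkstra)
    D = shortest_path(graph, method='D', directed=False)
    # Step 3: classical MDS on the graph distances;
    # tau(D) = -H D^2 H / 2 converts distances to inner products (double centering)
    n = D.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    tau = -0.5 * H @ (D ** 2) @ H
    eigvals, eigvecs = np.linalg.eigh(tau)
    idx = np.argsort(eigvals)[::-1][:n_components]  # top d eigenvectors
    # clip tiny negative eigenvalues caused by numerical noise before the sqrt
    return eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0))

X, _ = make_s_curve(n_samples=300, random_state=0)
Y = isomap_sketch(X)
print(Y.shape)  # (300, 2)
```

The sketch assumes the neighborhood graph is connected; on a disconnected graph the shortest-path matrix contains infinities and the embedding is undefined.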

Let me implement it:

First of all, we have to import the packages (NumPy, Matplotlib, and scikit-learn's manifold module):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn import manifold
```

Isomap Class:

```python
OUR_DATA = manifold.Isomap(n_neighbors=N, n_components=C, eigen_solver='auto',
                           tol=0, max_iter=None, path_method='auto',
                           neighbors_algorithm='auto', n_jobs=1)
```

Let's describe the main parameters:

- n_neighbors: must be an integer; the number of neighbors (as in k-NN)
- n_components: must be an integer; the number of coordinates for the manifold, usually 2
- path_method: 'auto' is the best choice, but you can also set 'FW' for Floyd–Warshall or 'D' for Dijkstra's algorithm

If we want to plot our result:

```python
from time import time

t0 = time()
# here OUR_DATA is the (n_samples, n_features) data matrix
ISOMAP_DATA = manifold.Isomap(n_neighbors=10, n_components=2).fit_transform(OUR_DATA)
t1 = time()
print("%s: %.2g sec" % ('ISOMAP', t1 - t0))

plt.scatter(ISOMAP_DATA[:, 0], ISOMAP_DATA[:, 1])
plt.title("ISOMAP embedding (%.2g sec)" % (t1 - t0))
plt.show()
```

Some considerations:

ISOMAP is a strong tool, but some considerations are in order. We preserve the local structure (the local geometry of the data) at the expense of two factors:

- Sensitive to noise
- Few free parameters

This is because the behavior of the algorithm depends almost entirely on the choice of the radius (or of k). This means that just a few outliers can break the mapping.

For these reasons, ISOMAP is best suited to cases where we already have an idea of the structure of the data, also because it is computationally expensive (though less than entropy-based models).
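The "outlier breaks the mapping" effect can be illustrated with a tiny hypothetical example (assuming scikit-learn and SciPy): points on a ring form a 1-D manifold, and a single outlier at the center short-circuits the neighborhood graph, collapsing the geodesic distances.

```python
import numpy as np
from sklearn.neighbors import radius_neighbors_graph
from scipy.sparse.csgraph import shortest_path

theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
ring = np.c_[np.cos(theta), np.sin(theta)]   # points on a circle: a 1-D manifold

def geodesic(X, radius=1.05):
    # neighborhood graph + shortest paths, as in ISOMAP's first two steps
    g = radius_neighbors_graph(X, radius, mode='distance')
    return shortest_path(g, directed=False)

# geodesic distance between two opposite points on the ring
d_clean = geodesic(ring)[0, 30]
# the same distance after adding one outlier at the center: the outlier falls
# within the radius of every ring point and short-circuits the graph
d_noisy = geodesic(np.vstack([ring, [[0.0, 0.0]]]))[0, 30]
print(d_clean, d_noisy)
```

A single extra point cuts the geodesic from roughly half the circumference down to a straight hop through the center, so the recovered geometry is no longer that of a ring.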

**References**

- http://scikit-learn.org/stable/index.html (scikit-learn official website)
- http://web.mit.edu/cocosci/Papers/sci_reprint.pdf (the original ISOMAP paper)

## Tensor Building

**Author:** Andrea Mercuri

The fundamental type in PyTorch is the tensor, just as in the other deep learning frameworks. Adopting tensors to express the operations of a neural network serves a two-pronged purpose: tensor calculus provides a very compact formalism, and it makes parallelizing the computation on the GPU very easy.

Tensors are generally allocated either in the computer's RAM and processed by the CPU, or in the graphics card's RAM and processed by the GPU; this second format is called the CUDA format. Below is a list of all the tensor types supported by PyTorch.

| Type | CPU tensor | GPU tensor |
|---|---|---|
| 32-bit floating point | torch.FloatTensor | torch.cuda.FloatTensor |
| 64-bit floating point | torch.DoubleTensor | torch.cuda.DoubleTensor |
| 16-bit floating point | torch.HalfTensor | torch.cuda.HalfTensor |
| 8-bit integer (unsigned) | torch.ByteTensor | torch.cuda.ByteTensor |
| 8-bit integer (signed) | torch.CharTensor | torch.cuda.CharTensor |
| 16-bit integer (signed) | torch.ShortTensor | torch.cuda.ShortTensor |
| 32-bit integer (signed) | torch.IntTensor | torch.cuda.IntTensor |
| 64-bit integer (signed) | torch.LongTensor | torch.cuda.LongTensor |

In order to use them, we first import PyTorch:

```python
import torch
```

We can create an empty tensor by means of the constructor provided for each of the types listed above:

```python
x = torch.FloatTensor(3, 2)
print(x)
```

```
1.00000e-25 *
  9.9872  0.0000
  9.9872  0.0000
  0.0000  0.0000
[torch.FloatTensor of size 3x2]
```
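As a quick sanity check of the type table above, each CPU constructor produces a tensor with the corresponding numeric dtype (a small sketch; it assumes a CPU-only PyTorch install is enough):

```python
import torch

# the legacy constructors map onto these dtypes
print(torch.FloatTensor(2, 2).dtype)   # torch.float32
print(torch.DoubleTensor(2, 2).dtype)  # torch.float64
print(torch.ByteTensor(2, 2).dtype)    # torch.uint8
print(torch.LongTensor(2, 2).dtype)    # torch.int64
```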

This tensor is created in the main RAM. If we wish to create a tensor within the GPU, we need to use a CUDA type:

```python
x = torch.cuda.FloatTensor(3, 2)
print(x)
```

```
nan nan
nan nan
nan nan
[torch.cuda.FloatTensor of size 3x2 (GPU 0)]
```

In this case, the tensor is created on the first available GPU. The GPUs on the computer are numbered with integers starting from 0.

We can generate tensors from Python lists:

```python
torch.FloatTensor([[1,2,3],[4,5,6]])
```

```
1 2 3
4 5 6
[torch.FloatTensor of size 2x3]
```

or from numpy arrays:

```python
import numpy as np

x_np = np.array([1,2,3,4], dtype=np.float32)
x = torch.FloatTensor(x_np)
```

We get the same result using the **from_numpy** function:

```python
x = torch.from_numpy(x_np)
```

It's important to realize that the numpy array and the PyTorch tensor share the same underlying data. So if we modify one of them, the other changes as well:

```python
x[0] = 0
print(x_np)
```

```
[ 0., 2., 3., 4.]
```

```python
print(x)
```

```
 0
 2
 3
 4
[torch.FloatTensor of size 4]
```

We can create tensors from other tensors:

```python
y = torch.FloatTensor(x)
print(y)
```

```
 0
 2
 3
 4
[torch.FloatTensor of size 4]
```

Again, the new tensor shares the data with the original tensor.

We are able to create tensors of zeros:

```python
torch.zeros(3, 2)
```

```
 0  0
 0  0
 0  0
[torch.FloatTensor of size 3x2]
```

Furthermore, we can build tensors made of pseudo-random numbers from a certain statistical distribution, for example, a uniform distribution on the [0,1] interval:

```python
torch.rand(2, 3)
```

```
 0.1256  0.0406  0.2072
 0.2479  0.0515  0.0930
[torch.FloatTensor of size 2x3]
```

Every tensor lives either in main memory or in the memory of a video card, as we choose. Two tensors can be operands of the same operation only if they reside in the same memory; in that case the resulting tensor also lives in the same memory space. Conversely, if we try to combine (for example, by summing them) a tensor in main RAM with a tensor on a video card (or two tensors on two different video cards), we get an exception:

```python
xcpu = torch.FloatTensor(3, 2)
xgpu = torch.cuda.FloatTensor(3, 2)
xcpu + xgpu
```

```
TypeError                                 Traceback (most recent call last)
----> 1 xcpu + xgpu
...
```
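Since the mismatch can only be triggered when a GPU is present, a portable sketch guards the CUDA part with `torch.cuda.is_available()` (recent PyTorch versions raise `RuntimeError` rather than `TypeError` here, so the sketch catches both):

```python
import torch

# the constructor leaves memory uninitialized, so zero it explicitly
xcpu = torch.FloatTensor(3, 2).zero_()

if torch.cuda.is_available():
    xgpu = xcpu.cuda()
    try:
        xcpu + xgpu                      # operands live in different memories
    except (TypeError, RuntimeError) as err:
        print("mixed-device sum failed:", type(err).__name__)
else:
    print((xcpu + xcpu).shape)           # same-memory operands are fine
```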

In case we intend to copy a tensor *x* onto the first GPU, we make use of the **cuda** method:

```python
y = x.cuda(device=0)
```

If the tensor is already located on the first GPU, we get the original tensor back.

Instead, we can use the **cpu** method to get a copy of the tensor *x* in main RAM:

```python
y = x.cpu()
```

We can convert a tensor to another type by passing that type as a parameter to the **type** method:

```python
y = x.type(torch.ByteTensor)
```

We get the same result by calling a conversion method:

```python
y = x.byte()
```

If we want to change the type and copy the tensor onto the **GPU** simultaneously, we have to pass a **CUDA** type to the **type** method:

```python
y = x.type(torch.cuda.ByteTensor)
```

or we can write:

```python
y = x.byte().cuda()
```
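A small CPU-only check (so it runs without a GPU) that the two conversion routes agree; the values are arbitrary, and note that the float-to-byte conversion truncates:

```python
import torch

x = torch.FloatTensor([1.5, 2.5, 3.5])
a = x.type(torch.ByteTensor)   # explicit target type
b = x.byte()                   # shortcut conversion method
print(a.type(), b.type())      # both report torch.ByteTensor
```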

To make the second GPU the "current" one, we use **set_device**:

```python
torch.cuda.set_device(1)
```

Hence, if we write:

```python
torch.cuda.current_device()
```

it returns 1, meaning the second GPU (no longer the first one) is now the current one. If we now call the **cuda** method on a tensor, the system returns a copy of that tensor laying on the second GPU rather than the first. We can also use a context manager to temporarily change the current GPU.

For instance, if we write:

```python
with torch.cuda.device(1):
    x1 = torch.cuda.FloatTensor(2, 3)
x2 = torch.cuda.FloatTensor(2, 3)
```

If the initial current GPU is the one with index 0, then x1 is created on the second GPU (index 1) and x2 on the first one (index 0).

All the functionalities related to tensor creation mentioned here come from the packages **torch** and **torch.cuda**, and from the class **torch.Tensor**.

In the next tutorials, we will continue with the tensor exploration.

**References**

- http://pytorch.org/ (PyTorch official site)
- http://pytorch.org/docs/master/ (documentation)
- http://pytorch.org/tutorials/ (tutorials)
- https://github.com/pytorch (GitHub repository)