# 3D Point Clouds

Information about our spatial environment is used in a large number of applications. When it comes to capturing 3D shape information, point clouds are a popular data representation. Point clouds can be generated by dense image matching (DIM), laser scanners, or infrared sensors such as the Microsoft Kinect. Although rich information can be derived from 3D point clouds, that information can be hard to handle: we have to deal with large amounts of data, varying properties (e.g. point density), and unclear spatial relations between different datasets. Here we learn how to handle point clouds and how to extract information from them.

## Preprocessing

[notebook] **Rotation of 3D Points**

The main goal of this exercise is to learn how to rotate 3D (laser scan) points.
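As a minimal sketch of the idea (not the notebook's code), a rotation about the z-axis can be applied to a set of points with a single matrix product:

```python
import numpy as np

def rotation_z(angle):
    """Rotation matrix about the z-axis (angle in radians)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

# two example points, one per row
points = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 2.0]])

# rotate all points at once: (N,3) @ (3,3).T
rotated = points @ rotation_z(np.pi / 2).T
```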

[notebook] **Similarity Transformation estimation**

In this exercise the main topic is the estimation of a transformation between two point clouds.
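With known point correspondences, a similarity transformation (scale, rotation, translation) can be estimated in closed form via SVD (the Umeyama/Procrustes approach). The following is a sketch of that standard method, not necessarily the notebook's exact formulation:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate s, R, t with dst ≈ s * R @ src + t from
    corresponding points (rows of src and dst), least squares."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    H = dst_c.T @ src_c                         # cross-covariance (3x3)
    U, S, Vt = np.linalg.svd(H)
    # correction matrix to avoid reflections (det(R) must be +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```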

[notebook] **Iterative Closest Point (ICP)**

Estimation of transformation parameters without known correspondences. We are given two point sets, *left* and *right*, and our goal is to estimate correspondences between the points as well as the transformation between the sets.
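A minimal rigid ICP loop (rotation and translation only, no scale) alternates between matching each point to its nearest neighbour and solving the resulting alignment by SVD. This brute-force sketch illustrates the principle for small clouds; it is not the notebook's implementation:

```python
import numpy as np

def icp(left, right, iterations=20):
    """Minimal rigid ICP sketch: match each left point to its nearest
    right point, solve the rigid alignment (Kabsch), repeat."""
    R_total, t_total = np.eye(3), np.zeros(3)
    src = left.copy()
    for _ in range(iterations):
        # nearest neighbours by brute force (fine for small clouds)
        d = np.linalg.norm(src[:, None, :] - right[None, :, :], axis=2)
        matched = right[np.argmin(d, axis=1)]
        # Kabsch: best rigid transform mapping src onto matched
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (matched - mu_m).T @ (src - mu_s)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        R = U @ D @ Vt
        t = mu_m - R @ mu_s
        src = src @ R.T + t                   # apply current estimate
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

ICP only converges to the correct alignment when the initial misalignment is small enough that most nearest-neighbour matches are correct.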

[notebook] **Point Cloud to Depth Image**

A significant disadvantage of point clouds is the unclear neighborhood relation between points. One way of recovering the neighborhood information is to project a 3D scene to a 2D raster.
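One common way to do this is a pinhole projection with a z-buffer: each point is projected onto a pixel grid, and each pixel keeps the depth of the nearest point. The focal length and image size below are illustrative values, not taken from the exercise:

```python
import numpy as np

def points_to_depth_image(points, f=100.0, width=64, height=64):
    """Project 3D points (camera frame, z forward) onto a pixel grid;
    each pixel stores the depth of the nearest projected point."""
    depth = np.full((height, width), np.inf)
    z = points[:, 2]
    valid = z > 0                                   # points in front of camera
    u = np.round(f * points[valid, 0] / z[valid] + width / 2).astype(int)
    v = np.round(f * points[valid, 1] / z[valid] + height / 2).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], z[valid][inside]):
        depth[vi, ui] = min(depth[vi, ui], zi)      # z-buffer: keep nearest
    return depth
```

In the resulting raster, neighbouring pixels correspond to neighbouring viewing directions, so 2D image operations recover a neighbourhood structure for the points.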

## Segmentation

[notebook] **Principal Component Analysis (PCA)**

Principal Component Analysis (PCA) provides useful information about the point cloud, such as the dominant directions and local shape of point neighborhoods.
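For example, the eigenvalues of a neighbourhood's covariance matrix describe its local shape: one dominant eigenvalue indicates a linear structure, two a planar one, and three similar eigenvalues a scattered/volumetric one. A small sketch of this idea:

```python
import numpy as np

def pca_features(neighborhood):
    """Eigenvalues of the neighbourhood covariance, sorted descending.
    Their ratios characterise local shape (linear/planar/scattered)."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    return np.linalg.eigvalsh(cov)[::-1]   # eigvalsh sorts ascending
```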

[notebook] **Random Sample Consensus (RANSAC)**

RANSAC example for plane parameter estimation.
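The core RANSAC loop for a plane can be sketched as follows: repeatedly fit a plane to three random points and keep the model supported by the most inliers. The threshold and iteration count below are illustrative choices:

```python
import numpy as np

def ransac_plane(points, iterations=200, threshold=0.05, rng=None):
    """RANSAC sketch: fit a plane n·x + d = 0 to random 3-point samples,
    keep the model with the most inliers (distance below threshold)."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```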

[notebook] **RANSAC Optimization & Depth Image to Point Cloud**

Extension of the RANSAC example for plane parameter estimation, plus the conversion of a depth image to 3D points.
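The depth-to-points conversion is the inverse pinhole projection: each valid pixel is back-projected along its viewing ray using its stored depth. The focal length and principal point (image centre) below are illustrative assumptions:

```python
import numpy as np

def depth_to_points(depth, f=100.0):
    """Back-project a depth image to 3D points (inverse pinhole model,
    principal point assumed at the image centre)."""
    h, w = depth.shape
    v, u = np.indices(depth.shape)          # pixel coordinates
    valid = np.isfinite(depth)              # skip empty pixels
    z = depth[valid]
    x = (u[valid] - w / 2) * z / f
    y = (v[valid] - h / 2) * z / f
    return np.stack([x, y, z], axis=1)
```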

## Classification

[notebook] **Random Forest**

Random Forests use several decision trees to classify objects based on specific features. In this exercise you should determine 3D features for every pixel of a depth image generated by a Microsoft Kinect v2.
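The classification step can be sketched with scikit-learn's `RandomForestClassifier`. The features below are synthetic stand-ins, not the exercise's actual per-pixel 3D features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# synthetic per-pixel features: class 0 clusters near 0, class 1 near 2
X = np.vstack([rng.normal(0.0, 0.2, (100, 3)),
               rng.normal(2.0, 0.2, (100, 3))])
y = np.repeat([0, 1], 100)

# an ensemble of decision trees; each tree votes on the class
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])
```

In the exercise, `X` would instead hold the 3D features computed for every pixel of the Kinect depth image, and `y` the corresponding object labels.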

[slides] **Deep Learning With Point Clouds**

## Practical

If you are looking for an easy way to work on the tasks on your own machine, the simplest solution is to use our 3D point environment, which is based on a Docker image. Here is a tutorial for Docker newcomers.