# Point cloud projection

This page is available for practice as an interactive Jupyter notebook. A significant disadvantage of point clouds is the unclear neighborhood relation between points. One way to recover the neighborhood information is to project the 3D scene onto a 2D raster.

## 3D to 2D

We are given $cx$, $cy$, the coordinates of the image center, and $f$, the focal length. Based on the geometry of the sensor sketched below, we can compute the real-world coordinates $x_w,y_w,z_w \in \mathbb{R}$ from the discrete depth-image coordinates $x_b,y_b \in \mathbb{N}$ and the distance $d \in \mathbb{R}$ measured for the pixel $(x_b,y_b)$.

$\LARGE y_w=d$, $\LARGE x_w=\frac{(x_b-cx)y_w}{f}$ and $\LARGE z_w=\frac{(y_b-cy)y_w}{f}$

We already solved the problem in the 2D-to-3D direction earlier. Let's use these equations to develop a solution for the opposite direction, 3D to 2D.

1. Determine the equation for the projection
2. Implement the projection `world_to_depth`.
3. Load `depthImage.png`, transform it into a 3D cloud, then transform it back to a 2D raster.
4. Generate a raster which shows the scene from a slightly different angle, like 20° different.
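For step 1, solving the equations above for the pixel coordinates gives a sketch of the projection:

$\LARGE d=y_w$, $\LARGE x_b=\frac{x_w f}{y_w}+cx$ and $\LARGE y_b=\frac{z_w f}{y_w}+cy$

Note that `depth_to_world` below additionally flips the sign of $z_w$, because image rows grow downwards; with that convention the last equation becomes $y_b=cy-\frac{z_w f}{y_w}$.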

```python
import numpy as np
from math import cos, sin, pi

def depth_to_world(f, cx, cy, img):
    """ Converts depth raster to 3D coordinates
    f: Focal distance
    (cx, cy): center point
    img: 2D raster with depth values stored at each cell

    return: numpy array of 3d points
    """
    points = []
    # np.ndenumerate yields the (row, column) index, i.e. (y_b, x_b)
    for (y_b, x_b), d in np.ndenumerate(img):
        y_w = d
        x_w = ((float(x_b) - cx) * y_w) / f
        # image rows grow downwards, so the sign of z_w is flipped
        z_w = -((float(y_b) - cy) * y_w) / f

        points.append(np.array([x_w, y_w, z_w]))

    return np.asarray(points)
```

```python
def world_to_depth(f, cx, cy, points):
    """ Converts 3D coordinates to a depth raster
    f: Focal distance
    (cx, cy): center point
    points: numpy array of 3d points

    return: 2D raster (numpy) with depth values stored at each cell
    """
```

```python
from pylab import cm, imshow, imread

img = imread('depthImage.png')
```
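For step 4, one option (a sketch; `rotate_about_z` is a helper name introduced here) is to rotate the cloud about the vertical $z$ axis before projecting it back to a raster:

```python
import numpy as np
from math import cos, sin, radians

def rotate_about_z(points, angle_deg):
    """Rotates a point cloud about the vertical (z) axis by angle_deg degrees."""
    a = radians(angle_deg)
    R = np.array([[cos(a), -sin(a), 0.0],
                  [sin(a),  cos(a), 0.0],
                  [0.0,     0.0,    1.0]])
    # apply R to every row vector: (N, 3) @ (3, 3) -> (N, 3)
    return np.asarray(points) @ R.T
```

Feeding e.g. `rotate_about_z(points, 20.0)` into the 3D-to-2D projection produces the raster from a 20° different angle; pixels that receive no projected point stay empty, which is expected for a view from a new angle.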