In this tutorial you will learn about depth sensors and their usage, and how to interpret their data on different operating systems (Linux and Windows) using OpenNI. I am aware that there are standard tutorials on the OpenNI website, but I would like to describe a method that makes the installation smooth.
A robot and its sensors are like two sides of a coin. A sensor is a device that builds the bridge between a robot and the real world; a robot without sensors cannot perform well in the real world. As robotics developed through the ages, there was a need to make a robot understand the world, so that it could act in a relevant way.
Various sensors began to be used on machines, and sensors for vision developed as well, starting with cameras. Cameras ranging from ordinary webcams to HD cameras were mounted on robots to help them see the world more clearly. This led to the field called “Computer Vision”, which many experts say is revolutionizing robotics.
The data received from a normal camera is only two-dimensional: what we are dealing with are pixels. With pixel data alone, we can see where a desired object lies in the image frame, but not how far away it is from the camera. Hence 3D cameras were developed, which provide a third coordinate for each pixel.
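To see why an ordinary camera cannot recover distance, here is a minimal sketch of the pinhole projection model. The intrinsics fx, fy, cx, cy are assumed example values typical of a VGA depth camera, not numbers from any specific device.

```python
# Assumed example intrinsics: focal lengths and principal point
# of a hypothetical 640x480 pinhole camera.
fx, fy, cx, cy = 525.0, 525.0, 320.0, 240.0

def project(x, y, z):
    """Project a 3D point (in metres) to pixel coordinates (u, v)."""
    return (fx * x / z + cx, fy * y / z + cy)

# Two different points lying on the same ray from the camera...
near = project(0.1, 0.2, 1.0)
far  = project(0.2, 0.4, 2.0)
# ...land on exactly the same pixel, so a plain 2D image
# cannot tell a near object from a far one along that ray.
print(near == far)
```

This is precisely the ambiguity a depth sensor resolves: it measures z directly for each pixel instead of losing it in the projection.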
A 3D camera is a vision device that gives the x, y, z coordinates of every pixel in an image. One can also call this data a “point cloud”, but we will talk about point clouds and their potential applications in future tutorials. One of the best-known 3D cameras is the Kinect from Microsoft. This link should give a basic perspective on what 3D cameras can be used for.
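As a rough sketch of how such a per-pixel x, y, z cloud arises, the depth image can be back-projected pixel by pixel. The 2x2 depth map and the intrinsics below are made-up illustration values; a real sensor such as the Xtion delivers a 640x480 map, typically in millimetres.

```python
# Assumed example intrinsics of a hypothetical pinhole depth camera.
fx, fy, cx, cy = 525.0, 525.0, 320.0, 240.0

# Toy 2x2 depth image in metres (real frames are 640x480).
depth = [[1.0, 1.2],
         [0.9, 1.1]]

def depth_to_points(depth):
    """Back-project each pixel (u, v) with depth z to a 3D point (x, y, z)."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:          # zero depth means "no reading" -- skip it
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

cloud = depth_to_points(depth)
print(len(cloud))  # one 3D point per valid depth pixel
```

Collecting one such (x, y, z) triple per pixel is exactly what "point cloud" means; libraries like PCL then operate on these point sets directly.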
The 3D camera used in my tutorials is the Asus Xtion Pro. This product has the same capabilities as the Kinect, but its small size, and the fact that it is powered over USB alone, make it more of a choice.
The software I am using here for the demo is OpenNI (Open Natural Interaction). It is an open-source software framework designed for natural interaction with devices such as the Asus Xtion, the Kinect, and also audio-recognition devices. There is also an open-source library called the Point Cloud Library (PCL), designed specifically for interpreting 3D data (point clouds). We will be using that library in the following tutorials.
The following steps get the Asus Xtion Pro working on Windows 7 or XP using OpenNI (preferably Windows 7). Please follow exactly what I do in the following video. Even if you are working on a 64-bit computer, it is advisable to download the 32-bit versions for a problem-free installation.
Video
So you have downloaded three files.
Step 1: Run the Asus Xtion firmware installer and let the installation complete.
Step 2: Next, double-click the file named “openni-win32-1.5.2” and complete the installation.
Step 3: Finally, double-click the file named “nite-win32-1.5.2.21” and finish the installation.
You should have successfully installed the three downloaded MSI files by now. Connect the Asus Xtion Pro to the computer and the device will be recognized. Now go to C:\Program Files (x86)\OpenNI\Samples\Bin\Debug and open NiUserSelection.exe. Here is a video of what the program does.
Video