Getting Image and Depth Data from Kinect V2 into OpenVR

Last Date Modified: 04/23/18

Motivation and Audience

This tutorial's motivation is to show readers how to get image and depth data from a Microsoft Kinect V2 into the HTC Vive's HMD. Readers of this tutorial are assumed to have the following background and interests:
*Knowledge and experience with the OpenVR API
*Knowledge and experience with ROS 1 & 2
*Knowledge and experience with C++

The rest of this tutorial is presented as follows:

Parts Lists and Sources

These are the parts and their sources needed to complete this tutorial.

Part Name/Description | Vendor | Vendor URL | Qty | Notes
HTC Vive Kit | Vive or Fry's | https://www.vive.com/us/product/ or http://www.frys.com/product/8987273 | 1 | The kit comes with all the necessary hardware: 1 Vive HMD (head-mounted display), 2 Lighthouse base stations, 2 controllers, and the required cables and power supplies.
A computer with at least a GTX 1060 or equivalent | Amazon | https://www.amazon.com/ | 1 | Please reference the recommended computer specs for any computer you plan to use. The lab already has a computer dedicated to Vive research.
Tripod mounts for Lighthouse base stations | Amazon or build in-house | https://www.amazon.com/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=tripod+for+camera | 2 | The base stations have a standard camera mounting hole on the bottom, so any standard camera tripod should work. You can also build a mount from scrap 2×4 wood.
A computer with Ubuntu 16.04 | Amazon | https://www.amazon.com/ | 1 | The Alienware 17 laptop in the lab is dual-boot and should work well.
Kinect for Xbox One | Amazon | link | 1 | You will also need a converter plate to mount it on HUBO and an adapter to interface with a computer.
Kinect Adapter for Windows 10 | Amazon | link | 1 | You will need a USB 3.0 port to interface with the adapter.
Plate to mount the Kinect on HUBO | 3D print | No link | 1 | You will need to 3D print a plate to mount the Kinect on HUBO's upper body.

Installing OpenCV and Kinect V2 package

First, install the following packages on the computer running Ubuntu 16.04.



Make sure to complete ROS tutorials 1 through 4.





Note: Although the installation instructions target Ubuntu 14.04, they work just as well on 16.04.
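One of the dependencies used by the code below is libfreenect2; the CMakeLists.txt later in this tutorial looks for it in $HOME/freenect2. As a hedged sketch (assuming a standard from-source build; install its libusb/turbojpeg prerequisites per the libfreenect2 README first):

```shell
# Sketch: build libfreenect2 from source and install it to $HOME/freenect2,
# the location the CMakeLists.txt below expects via its HINTS argument.
cd ~
git clone https://github.com/OpenKinect/libfreenect2.git
cd libfreenect2
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/freenect2
make
make install
```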

OpenCV Code

This tutorial will not cover the OpenCV code in detail; it only covers how to integrate the different pieces. Please refer to the OpenCV tutorials for more information.
1. Go to your catkin workspace (catkin_ws) and create a new package (this tutorial uses the name kinect2hubo) to house the OpenCV code.
2. In the src directory make a main.cpp file and add the following code to it:

//! [headers]
#include <iostream>
#include <stdio.h>
#include <iomanip>
#include <time.h>
#include <signal.h>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
 
#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/frame_listener_impl.h>
#include <libfreenect2/registration.h>
#include <libfreenect2/packet_pipeline.h>
#include <libfreenect2/logger.h>
 
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <geometry_msgs/TwistStamped.h>
 
//! [headers]
 
 
bool protonect_shutdown = false; // Whether the running application should shut down.
 
void sigint_handler(int s)
{
  protonect_shutdown = true;
}
 
void HSVOnly(int, int, int, int, int, int);
void Location();
 
cv::Mat imgCircles;
cv::Mat imgThresholded;
cv::Mat rgbd_flip;
cv::Mat test;
cv::Mat rgbmat_flip;
 
int iLowH = 22;
int iHighH = 34;
int iLowS = 38;
int iHighS = 244;
int iLowV = 160;
int iHighV = 255;
int posX;
int posY;
int posY_test;
int cenX;
int cenY;
float hand_x;
float hand_y;
float hand_z;
float dd;
int something = 0;
 
using namespace cv;
using namespace std;
 
void hand_fk(const geometry_msgs::TwistStamped::ConstPtr& ts)
{
 
	hand_x = 1000*ts->twist.linear.x;
	hand_y = 1000*ts->twist.linear.y;
	hand_z = 1000*ts->twist.linear.z;
 
 
}
 
 
int main(int argc, char** argv)
{
 
    ros::init(argc, argv, "image_converter");
    ros::NodeHandle nh;
    image_transport::ImageTransport it(nh);
    image_transport::Publisher pub = it.advertise("test/test", 1);
    image_transport::Publisher rgb_pub = it.advertise("test/rgb", 1);
    ros::Subscriber sub = nh.subscribe("/fk", 10, hand_fk);
 
 
    std::cout << "Streaming from Kinect One sensor!" << std::endl;
 
    //! [context]
    libfreenect2::Freenect2 freenect2;
    libfreenect2::Freenect2Device *dev = 0;
    libfreenect2::PacketPipeline *pipeline = 0;
    //! [context]
 
    //! [discovery]
    if(freenect2.enumerateDevices() == 0)
    {
        std::cout << "no device connected!" << std::endl;
        return -1;
    }
 
    string serial = freenect2.getDefaultDeviceSerialNumber();
 
    std::cout << "SERIAL: " << serial << std::endl;
 
    if(pipeline)
    {
        //! [open]
        dev = freenect2.openDevice(serial, pipeline);
        //! [open]
    } else {
        dev = freenect2.openDevice(serial);
    }
 
    if(dev == 0)
    {
        std::cout << "failure opening device!" << std::endl;
        return -1;
    }
 
    signal(SIGINT, sigint_handler);
    protonect_shutdown = false;
 
    //! [listeners]
    libfreenect2::SyncMultiFrameListener listener(libfreenect2::Frame::Color |
                                                  libfreenect2::Frame::Depth |
                                                  libfreenect2::Frame::Ir);
    libfreenect2::FrameMap frames;
 
    dev->setColorFrameListener(&listener);
    dev->setIrAndDepthFrameListener(&listener);
    //! [listeners]
 
    //! [start]
    dev->start();
 
    std::cout << "device serial: " << dev->getSerialNumber() << std::endl;
    std::cout << "device firmware: " << dev->getFirmwareVersion() << std::endl;
    //! [start]
 
    //! [registration setup]
    libfreenect2::Registration* registration = new libfreenect2::Registration(dev->getIrCameraParams(), dev->getColorCameraParams());
    libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4), depth2rgb(1920, 1080 + 2, 4);
    //! [registration setup]
 
    Mat rgbmat, depthmat, depthmatUndistorted, irmat, rgbd, rgbd2;
 
 
    //! [loop start]
    while(!protonect_shutdown)
    {
	ros::spinOnce();
 
        listener.waitForNewFrame(frames);
        libfreenect2::Frame *rgb = frames[libfreenect2::Frame::Color];
        libfreenect2::Frame *ir = frames[libfreenect2::Frame::Ir];
        libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];
        //! [loop start]
 
	//! [converting libfreenect2 to opencv mat]
        cv::Mat(rgb->height, rgb->width, CV_8UC4, rgb->data).copyTo(rgbmat);
        cv::Mat(ir->height, ir->width, CV_32FC1, ir->data).copyTo(irmat);
        cv::Mat(depth->height, depth->width, CV_32FC1, depth->data).copyTo(depthmat);
	cv::flip(depthmat,test,1);
	cv::flip(rgbmat,rgbmat_flip,1);
	//! [converting libfreenect2 to opencv mat]
 
        //! [registration]
        registration->apply(rgb, depth, &undistorted, &registered, true, &depth2rgb);
        //! [registration]
 
	//! [converting libfreenect2 to opencv mat]
        cv::Mat(registered.height, registered.width, CV_8UC4, registered.data).copyTo(rgbd);
	cv::flip(rgbd,rgbd_flip, 1);
	//! [converting libfreenect2 to opencv mat]
 
	//! [finding yellow objects]
        HSVOnly(iLowH, iHighH, iLowS, iHighS, iLowV, iHighV);
        Location();
	//! [finding yellow objects]
 
	int posY_comp = posY - 150; 
 
	if(something <= 10)
	{
	posY_test = posY;
	}	
 
 
 
 
 
 
// Walk the depth image vertically around the detected centroid, using
// `something` as a small state machine, until a usable depth value dd is
// found (dd == 0 marks an invalid depth pixel).
while(something < 1)
    {
	dd = test.at<float>(Point(posX,posY_test));
	printf("x: %i, y: %i, d: %f\n",posX, posY_test, dd);
 
	if(something == 0 && dd != 0 && dd < 10000)
	{
		posY_test = posY_test - 1;
 
	}
	if(something == 0 && dd == 0)
	{
		something = 1;
		posY_test = posY_test + 7;
	}
	if(something == 1 && dd == 0)
	{
		posY_test = posY_test - 1;
	}
	if(something == 1 && dd != 0 )
	{
		something = 2;
	}
	if(something == 2 && dd < 10000 && dd != 0)
	{
		posY_test = posY_test - 1;
 
	}
	if(something == 2 && dd == 0)
	{
		something = 3;
	}
	if(something == 3 && dd == 0)
	{
		something = 11;
		posY_test = posY_test + 4;
		dd = test.at<float>(Point(posX,posY_test));
		printf("x: %i, y: %i, d: %f\n",posX, posY_test, dd);
	}
	if(dd > 10000)
	{
		something = 11;
	}
 
 
    }
 
	//! [adding distance data to image]
	dd = test.at<float>(Point(posX,posY_test));
	float dd_comp = dd+160;
	float diff = dd_comp - hand_x;
	cv::Point org;
	cv::Point rgb_org;
        org.x = posX-100;
        org.y = posY_comp-50;
	rgb_org.x = 3.5*posX;
	rgb_org.y = posY_test;
        char str[200];
 
        sprintf(str, "%.1fmm",diff);
        cv::putText( rgbd_flip, str, org, 0, 1, Scalar(255,255,0), 2, 8);
        cv::putText( rgbmat_flip, str, rgb_org, 0, 1, Scalar(255,255,0), 2, 8);
	//! [adding distance data to image]
 
 
	sensor_msgs::ImagePtr msg = cv_bridge::CvImage(std_msgs::Header(), "bgra8", rgbd_flip).toImageMsg();
	sensor_msgs::ImagePtr rgb_msg = cv_bridge::CvImage(std_msgs::Header(), "bgra8", rgbmat_flip).toImageMsg();
        pub.publish(msg);
        rgb_pub.publish(rgb_msg);
 
	 if (posX >= 0 && posY >= 0)
        {
	    cenX = 2*posX;
	    cenY = 2*posY_test;
	    //cenY = 2*posY_comp;
            //Draw a red circle encompassing object
            circle(rgbd_flip, Point(cenX, cenY), 25, Scalar(0, 0, 255), 1, 8, 1);
            circle(rgbd_flip, Point(cenX, cenY), 26, Scalar(0, 0, 255), 1, 8, 1);
            circle(rgbd_flip, Point(cenX, cenY), 27, Scalar(0, 0, 255), 1, 8, 1);
            circle(rgbd_flip, Point(cenX, cenY), 24, Scalar(0, 0, 255), 1, 8, 1);
            circle(rgbd_flip, Point(cenX, cenY), 23, Scalar(0, 0, 255), 1, 8, 1);
 
        }
 
	cv::imshow("rgb", rgbmat_flip);
 
        cv::imshow("registered", rgbd_flip);
	cv::imshow("threshold" , imgThresholded);
 
	something = 0;
 
 
        int key = cv::waitKey(1);
        protonect_shutdown = protonect_shutdown || (key > 0 && ((key & 0xFF) == 27)); // shutdown on escape
 
    //! [loop end]
        listener.release(frames);
    }
    //! [loop end]
 
    //! [stop]
    dev->stop();
    dev->close();
    //! [stop]
 
    delete registration;
 
    std::cout << "Streaming Ends!" << std::endl;
    return 0;
}
 
 
void HSVOnly(int iLowH, int iHighH, int iLowS, int iHighS, int iLowV, int iHighV)
{
    cv::Mat imgHSV;
 
    //Convert the captured frame from BGR to HSV
    cvtColor(rgbd_flip, imgHSV, COLOR_BGR2HSV);
 
    //Threshold the image
    inRange(imgHSV, Scalar(iLowH, iLowS, iLowV), Scalar(iHighH, iHighS, iHighV), imgThresholded);
 
    //morphological opening (remove small objects from the foreground)
    erode(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)) );
    dilate( imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)) );
 
}
 
 
void Location()
{
    imgCircles = Mat::zeros( rgbd_flip.size(), CV_8UC4 );
    Moments oMoments = moments(imgThresholded);
 
    double dM01 = oMoments.m01;
    double dM10 = oMoments.m10;
    double dArea = oMoments.m00;
 
    if (dArea > 10000)
    {
        //calculate the position of the ball
        posX = dM10 / dArea;
        posY = dM01 / dArea;
 
    }
 
 
}

This code takes the data from the Kinect and finds the distance between the sensor and a yellow object. Then, using the FK (forward kinematics) data from Hubo's right hand, it displays the distance between the yellow object and Hubo's right hand above the detected object. Running the code is explained later on.
Then make a CMakeLists.txt file in the package and add the following code into it:

cmake_minimum_required(VERSION 2.8.3)
project(kinect2hubo)
 
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DBT_USE_DOUBLE_PRECISION -Wall")
# Unused warnings
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wuninitialized -Winit-self -Wunused-function -Wunused-label -Wunused-variable -Wunused-but-set-variable -Wunused-but-set-parameter")
# Additional warnings
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Warray-bounds -Wtype-limits -Wreturn-type -Wsequence-point -Wparentheses -Wmissing-braces -Wchar-subscripts -Wswitch -Wwrite-strings -Wenum-compare -Wempty-body -Wlogical-op")
 
# Check for c++11 support
INCLUDE(CheckCXXCompilerFlag)
CHECK_CXX_COMPILER_FLAG("-std=c++11" COMPILER_SUPPORTS_CXX11)
CHECK_CXX_COMPILER_FLAG("-std=c++0x" COMPILER_SUPPORTS_CXX0X)
IF(COMPILER_SUPPORTS_CXX11)
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
ELSEIF(COMPILER_SUPPORTS_CXX0X)
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x")
ELSE()
  MESSAGE(FATAL_ERROR "The compiler ${CMAKE_CXX_COMPILER} has no C++11 support. Please use a different C++ compiler.")
ENDIF()
 
find_package(catkin REQUIRED COMPONENTS
  cv_bridge
  geometry_msgs
  message_runtime
  roscpp
  sensor_msgs
  std_msgs
  image_transport
)
find_package(freenect2 REQUIRED HINTS "$ENV{HOME}/freenect2") #for interfacing with kinect v2
find_package(OpenCV REQUIRED)
 
include_directories(
  ${catkin_INCLUDE_DIRS}
  ${OpenCV_INCLUDE_DIRS}
  ${freenect2_INCLUDE_DIRS}
)
 
add_executable(${PROJECT_NAME}_node src/main.cpp)
 
target_link_libraries(${PROJECT_NAME}_node
  ${catkin_LIBRARIES}
  ${OpenCV_LIBRARIES}
  ${freenect2_LIBRARY}
)

Then make a package.xml file and add the following code:

<?xml version="1.0"?>
<package format="2">
  <name>kinect2hubo</name>
  <version>0.0.0</version>
  <description>The kinect2hubo package</description>
 
  <maintainer email="[email protected]">Keitaro Nishimura</maintainer>
 
  <license>TODO</license>
 
  <buildtool_depend>catkin</buildtool_depend>
  <build_depend>cv_bridge</build_depend>
  <build_depend>geometry_msgs</build_depend>
  <build_depend>roscpp</build_depend>
  <build_depend>sensor_msgs</build_depend>
  <build_depend>std_msgs</build_depend>
  <build_depend>image_transport</build_depend>
  <build_export_depend>cv_bridge</build_export_depend>
  <build_export_depend>geometry_msgs</build_export_depend>
  <build_export_depend>roscpp</build_export_depend>
  <build_export_depend>sensor_msgs</build_export_depend>
  <build_export_depend>std_msgs</build_export_depend>
  <build_export_depend>image_transport</build_export_depend>
  <exec_depend>cv_bridge</exec_depend>
  <exec_depend>geometry_msgs</exec_depend>
  <exec_depend>message_runtime</exec_depend>
  <exec_depend>roscpp</exec_depend>
  <exec_depend>sensor_msgs</exec_depend>
  <exec_depend>std_msgs</exec_depend>
  <exec_depend>image_transport</exec_depend>
 
</package>

Then you should be ready to build the package using catkin_make.
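The build-and-run cycle looks roughly like this (the node name kinect2hubo_node comes from the add_executable line in the CMakeLists.txt above; running it requires a roscore and a connected Kinect V2):

```shell
# Build the workspace and source the resulting environment
cd ~/catkin_ws
catkin_make
source devel/setup.bash
# Run the node built from src/main.cpp
rosrun kinect2hubo kinect2hubo_node
```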

Importing Video Feed into HMD


To insert the video feed into the HMD, we will use the UniversalVROverlay package on the Windows 10 computer running the HTC Vive. Please download the source code from GitHub and build it to create the exe file.

The following steps show how to stream the video feed from the OpenCV code into the Vive HMD.

1. Connect all computers to the same router

2. Run the iai_kinect2 code

3. Run the OpenCV code

4. Run the web video server

5. View the stream on the Vive PC through a web browser

6. Use UniversalVROverlay to feed the stream into the HMD
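As a hedged sketch, steps 2 through 4 on the Ubuntu machine might look like the following (each command blocks, so run each in its own terminal; this assumes iai_kinect2 and web_video_server are installed, and uses web_video_server's default port and stream URL format):

```shell
# Step 2: start the iai_kinect2 bridge
roslaunch kinect2_bridge kinect2_bridge.launch
# Step 3: run the OpenCV node built earlier
rosrun kinect2hubo kinect2hubo_node
# Step 4: start the web video server
rosrun web_video_server web_video_server
# Step 5: on the Vive PC, open a browser to
#   http://<ubuntu-machine-ip>:8080/stream?topic=/test/rgb
```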
