Detecting Lines Using Hough Line Transform

Introduction

In this tutorial we will learn how to detect lines in an image, and look at applications of line detection. Line detection is one of the most basic algorithms in computer vision. It has a variety of applications, such as pattern recognition, object comparison, lane detection, and SLAM; line detection is involved at a basic level in all of these algorithms.

Types of Line detection

OpenCV offers three types of line detection:

1) Progressive Probabilistic Hough Transform (PPHT)
2) Standard Hough Transform (SHT)
3) Multi-Scale Hough Transform

Hough Line Transform

The idea behind this mechanism is that any point in a binary image could be part of some set of possible lines. If each line is parameterized by a slope 'a' and an intercept 'b', then a point in the original image is transformed into a locus of points in the (a,b) plane corresponding to all lines passing through that point. If we convert every non-zero pixel in the input image into such a set of points and sum over all such contributions, then lines that appear in the input image will appear as local maxima in the output. Since each point's contributions are summed, the (a,b) plane is called the accumulator plane. (In practice OpenCV parameterizes each line by the angle theta and distance rho of its normal rather than slope and intercept, since the slope of a near-vertical line is unbounded.) This is essentially the concept behind the Standard Hough Transform.

Probabilistic hough transform

This method is a variation of the standard Hough transform. It returns the extent of the lines (segment endpoints) rather than just their orientation. The reason it is called "probabilistic" is that it accumulates only a fraction of the points in the accumulator plane, not all of them (source: Learning OpenCV).

Trying out Probabilistic Hough transform through the drone's eye

1. Go to the folder named “src” in the package “roscv” (which you would have created in the previous tutorials).
2. Create a file called “plines.cpp”.
3. Paste the following code in it and save the file.

plines.cpp
#include <ros/ros.h>
#include <stdio.h>
#include <iostream>
#include "std_msgs/String.h"
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>     //make sure to include the relevant headerfiles
#include <opencv2/highgui/highgui.hpp>
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <cv_bridge/CvBridge.h>
#include <cvaux.h>
#include <math.h>
#include <cxcore.h>
#include "turtlesim/Velocity.h"
 
/* Here is a simple program which demonstrates the use of ROS and OpenCV to do image manipulations on video streams
published as image topics by a robot's monocular camera; here the device used is an AR.Drone (quad-rotor). */
using namespace std;
using namespace cv;
namespace enc = sensor_msgs::image_encodings;
 
static const char WINDOW[] = "Image window";
 
float prevVelocity_angular ,prevVelocity_linear ,newVelocity_angular ,newVelocity_linear ;
float derive_angular, derive_linear, dt = 0.5;
float horizontalcount;
 
class ImageConverter
{
  ros::NodeHandle nh_;
  ros::NodeHandle n;
 ros::Publisher pub ;
  ros::Publisher tog;
  image_transport::ImageTransport it_;    
  image_transport::Subscriber image_sub_; //image subscriber 
  image_transport::Publisher image_pub_; //image publisher(we subscribe to ardrone image_raw)
  std_msgs::String msg;
public:
  ImageConverter()
    : it_(nh_)
  {
 
 
      image_sub_ = it_.subscribe("/ardrone/image_raw", 1, &ImageConverter::imageCb, this);
      image_pub_= it_.advertise("/arcv/Image",1);
 
 
  }
 
  ~ImageConverter()
  {
    cv::destroyWindow(WINDOW);
  }
 
  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {
     int i;
     sensor_msgs::CvBridge bridge;//we need this object bridge for facilitating conversion from ros-img to opencv
   IplImage* img = bridge.imgMsgToCv(msg,"rgb8");  //image being converted from ros to opencv using cvbridge
 
 
 
 
       CvMemStorage* storage = cvCreateMemStorage(0);
 
        CvSeq* lines = 0;
 
 
 
        // per-frame buffers (grayscale, edge map, and a 3-channel copy of the edges)
        IplImage* gray_out  = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );
        IplImage* canny_out = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );
        IplImage* gray_out1 = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 3 );
 
 
 
      cvCvtColor( img, gray_out, CV_RGB2GRAY );        // convert the frame to grayscale
      cvCanny( gray_out, canny_out, 50, 125, 3 );      // edge map used as the Hough input
      cvCvtColor( canny_out, gray_out1, CV_GRAY2BGR );
      // rho resolution = 1 px, theta resolution = 1 degree, accumulator threshold = 80 votes,
      // minimum segment length = 50 px, maximum gap within a segment = 10 px
      lines = cvHoughLines2( canny_out, storage, CV_HOUGH_PROBABILISTIC, 1, CV_PI/180, 80, 50, 10 );
        for( i = 0; i < lines->total; i++ )
        {
             // each sequence element holds the two endpoints of one detected segment
             CvPoint* line = (CvPoint*)cvGetSeqElem(lines,i);
             cvLine( img, line[0], line[1], CV_RGB(0,255,0), 1, 8 );
        }
 
      cvShowImage( "Detected lines", img );            // segments drawn onto the camera frame
      cvShowImage( "Canny edge output", canny_out );
      cvShowImage( "Grayscale", gray_out );
 
      cv::waitKey(3);
   sensor_msgs::ImagePtr out = sensor_msgs::CvBridge::cvToImgMsg(img, "rgb8");//image converted from opencv to ros for publishing
   image_pub_.publish(out);
 cvClearMemStorage(storage);
 cvReleaseMemStorage(&storage);   // free the line sequence's storage each frame
 cvReleaseImage(&gray_out);       // release per-frame buffers to avoid leaking memory
 cvReleaseImage(&canny_out);
 cvReleaseImage(&gray_out1);
}
 
};
 
int main(int argc, char** argv)
{
  ros::init(argc, argv, "image_converter");
  ImageConverter ic;
  ros::spin();
  return 0;
}

Make changes in the CMakeLists.txt

Step 1: Open up the file named “CMakeLists.txt” and search for the lines beginning with “rosbuild_add_executable”. There, add the file you created, “plines.cpp”. Save and close the file.

Step 2: Open up a terminal and enter the following commands one after the other:

roscd roscv
cmake .
rosmake
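For reference, the added CMakeLists.txt line might look like this (the target name plines is an assumption; use whatever name you choose, since it is what you later pass to rosrun):

```cmake
# rosbuild: build plines.cpp into an executable named "plines"
rosbuild_add_executable(plines src/plines.cpp)
```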

Test the Code

1. Make sure you are connected to the ardrone.
2. Enter the following commands, each in a separate terminal:

roscore
rosrun ardrone_brown ardrone_driver
rosrun roscv plines

Here is a picture of line detection using the probabilistic Hough transform:


Standard Hough Transform

Likewise, let's implement the standard Hough line transform.

1. Go to the folder named “src” in the package “roscv” (which you would have created in the previous tutorials).
2. Create a file called “slines.cpp”.
3. Paste the following code in it and save the file.

slines.cpp
#include <ros/ros.h>
#include <stdio.h>
#include <iostream>
#include "std_msgs/String.h"
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>     //make sure to include the relevant headerfiles
#include <opencv2/highgui/highgui.hpp>
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <cv_bridge/CvBridge.h>
#include <cvaux.h>
#include <math.h>
#include <cxcore.h>
#include "turtlesim/Velocity.h"
 
/* Here is a simple program which demonstrates the use of ROS and OpenCV to do image manipulations on video streams
published as image topics by a robot's monocular camera; here the device used is an AR.Drone (quad-rotor). */
using namespace std;
using namespace cv;
namespace enc = sensor_msgs::image_encodings;
 
 
static const char WINDOW[] = "Image window";
 
class ImageConverter
{
  ros::NodeHandle nh_;
  ros::NodeHandle n;
 ros::Publisher pub ;
  ros::Publisher tog;
 
  image_transport::ImageTransport it_;    
  image_transport::Subscriber image_sub_; //image subscriber 
  image_transport::Publisher image_pub_; //image publisher(we subscribe to ardrone image_raw)
  std_msgs::String msg;
public:
  ImageConverter()
    : it_(nh_)
  {
 
 
      pub = n.advertise<turtlesim::Velocity>("/drocanny/vanishing_points", 500);
      image_sub_ = it_.subscribe("/ardrone/image_raw", 1, &ImageConverter::imageCb, this);
      image_pub_= it_.advertise("/arcv/Image",1);
 
 
  }
 
  ~ImageConverter()
  {
    cv::destroyWindow(WINDOW);
  }
 
  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {
 
     sensor_msgs::CvBridge bridge;//we need this object bridge for facilitating conversion from ros-img to opencv
   IplImage* img = bridge.imgMsgToCv(msg,"rgb8");  //image being converted from ros to opencv using cvbridge
  // cvCircle( img,cvPoint(50, 50), 10, CV_RGB(255,200,0)); //drawing circle on the video
   turtlesim::Velocity velMsg;
 
 
 
       CvMemStorage* storage = cvCreateMemStorage(0);
 
        CvSeq* lines = 0;
       int i,c,d;
 
 
        IplImage* out1      = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 3 );  // buffer for the smoothed frame
        IplImage* gray_out  = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );
        IplImage* canny_out = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );
        IplImage* gray_out1 = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 3 );
 
        cvSmooth( img, out1, CV_GAUSSIAN, 11, 11 );    // Gaussian blur to suppress noise before edge detection
        cvCvtColor( out1, gray_out, CV_RGB2GRAY );     // feed the smoothed frame into the pipeline
        cvCanny( gray_out, canny_out, 50, 125, 3 );
        cvCvtColor( canny_out, gray_out1, CV_GRAY2BGR );
 
 
     // rho resolution = 1 px, theta resolution = 1 degree, accumulator threshold = 80 votes
     lines = cvHoughLines2( canny_out, storage, CV_HOUGH_STANDARD, 1, CV_PI/180, 80, 0, 0 );
 
   for( i = 0; i < MIN(lines->total, 100); i++ )
        {
            float* line = (float*)cvGetSeqElem(lines, i);
            float rho   = line[0];   // distance from the origin to the line
            float theta = line[1];   // angle of the line's normal
            CvPoint pt1, pt2;
            double a = cos(theta), b = sin(theta);
            double x0 = a*rho, y0 = b*rho;   // foot of the normal on the line
            // extend 1000 px in both directions along the line for drawing
            pt1.x = cvRound(x0 + 1000*(-b));
            pt1.y = cvRound(y0 + 1000*(a));
            pt2.x = cvRound(x0 - 1000*(-b));
            pt2.y = cvRound(y0 - 1000*(a));
            cvLine( img, pt1, pt2, CV_RGB(255,0,0), 1, 8 );
        }
   cvShowImage( "Detected lines", img );         // lines drawn onto the camera frame
   cvShowImage( "Canny edge output", canny_out );
   cvShowImage( "Grayscale", gray_out );
   cv::waitKey(2);

   cvClearMemStorage(storage);
   cvReleaseMemStorage(&storage);   // free the line sequence's storage each frame
   cvReleaseImage(&out1);           // release per-frame buffers to avoid leaking memory
   cvReleaseImage(&gray_out);
   cvReleaseImage(&canny_out);
   cvReleaseImage(&gray_out1);
  }
 
};
 
int main(int argc, char** argv)
{
  ros::init(argc, argv, "image_converter");
  ImageConverter ic;
  ros::spin();
  return 0;
}

Make changes in the CMakeLists.txt

Step 1: Open up the file named “CMakeLists.txt” and search for the lines beginning with “rosbuild_add_executable”. There, add the file you created, “slines.cpp”. Save and close the file.

Step 2: As before, rebuild the package by running cmake . followed by rosmake in the package directory.

Test the Code

1. Make sure you are connected to the ardrone.
2. Enter the following commands, each in a separate terminal:

roscore
rosrun ardrone_brown ardrone_driver
rosrun roscv slines

Here is a picture of line detection using the standard Hough transform:


Conclusion

In this tutorial we wrote the code for the standard Hough transform as well as the probabilistic Hough transform. But one also has to know how to make use of the extracted lines: in the next two tutorials I have designed basic navigation algorithms based on the probabilistic and the standard Hough transform, which show an application of Hough line detection.

ardrone_line_transform.txt · Last modified: 2016/11/06 02:33 by dwallace