Obstacle Avoidance Using Vision

Introduction

In this tutorial we will try out a different methodology for avoiding obstacles while continuing navigation along a given pathway. The work presented here is research work and not a finished end product, but it provides a good base for autonomous navigation of quad-rotors.

Obstacle avoidance in unmanned aerial vehicles

Obstacle or collision avoidance is a vital factor for aerial vehicles. A slight disturbance is more than enough to derail the vehicle from its course. There are many ways in the robotics world to avoid an obstacle and continue with the course. I have proposed a very simple method which could be altered and further enhanced in future work.

Idea involved

Detecting obstacles primarily through vision is a challenging task, but it greatly reduces the need for extra sensors. What is needed is a notification once the drone is near a wall. The notification in my approach is a circle projected onto the wall using a flashlight. Here are pictures of the flashlights I used.

f1.jpg

f2.jpg

Here is a picture of the AR.Drone with the flashlights mounted on it.

f3.jpg
f4.jpg

The reason for the second flashlight is balance: placing a flashlight on the front alone would disturb the drone's balance greatly. Another important note is that each flashlight I used weighs exactly 7.8 g.

Algorithm involved

In this experiment I have implemented line following along with obstacle avoidance, which means I use both the bottom and the front camera alternately for vision. I have given a tutorial on toggling between cameras; please use this link before following this tutorial.

I switch the drone's view every 0.25 seconds. Whenever the drone is in bottom-camera mode it does line following, and when it is in front-camera mode it does circle detection. The drone does not simply assume there is an obstacle whenever a circle is detected; the circle must be detected within the prescribed (x,y) domain. The threshold values of the Hough circle detection are set to identify circles (obstacles) within 4 to 5 ft.
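As a rough illustration of the timing only, the alternation boils down to publishing an Empty message to the /ardrone/togglecam topic every 0.25 seconds, which is what the autonomous.py script later in this tutorial does inside its autonomous-mode loop. The sketch below (the node name toggle_sketch is made up for this example) leaves out the line-following and obstacle-handling logic:

#!/usr/bin/env python
# Minimal sketch of the camera-alternation idea only; the real control loop
# (autonomous.py below) also reads the detector outputs and publishes velocity commands.
import rospy, time
from std_msgs.msg import Empty

rospy.init_node('toggle_sketch')    # node name invented for this sketch
toggle_pub = rospy.Publisher('/ardrone/togglecam', Empty)

while not rospy.is_shutdown():
    toggle_pub.publish(Empty())     # switch to the bottom camera: line following runs on this view
    time.sleep(0.25)
    toggle_pub.publish(Empty())     # switch to the front camera: circle (obstacle) detection runs on this view
    time.sleep(0.25)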

Creating the circle detection package

1. Open up a terminal and enter the commands one after the other:

roscreate-pkg obsdet sensor_msgs opencv2 cv_bridge roscpp std_msgs image_transport turtlesim ardrone_brown std_srvs
roscd obsdet

Go to the src folder and create a file called obsdet.cpp

2. Paste the following code into the created file and save it.

obsdet.cpp
#include <ros/ros.h>
#include <stdio.h>
#include <iostream>
#include "std_msgs/String.h"
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>     //make sure to include the relevant headerfiles
#include <opencv2/highgui/highgui.hpp>
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <cv_bridge/CvBridge.h>
#include <cvaux.h>
#include<math.h>
#include <cxcore.h>
#include "turtlesim/Velocity.h"
#include <turtlesim/Pose.h>
#include <highgui.h>
 
/* This is a simple program which demonstrates the use of ROS and OpenCV to do image
   manipulation on video streams published as image topics by the monocular camera of a
   robot; here the device used is an AR.Drone (quad-rotor). */
 
using namespace std;
using namespace cv;
namespace enc = sensor_msgs::image_encodings;
static double r=0,g=0,b=0;
static const char WINDOW[] = "Image window";
 
class walldetect
{
  ros::NodeHandle nh_;
  ros::NodeHandle n;
 ros::Publisher pub ;
 
  image_transport::ImageTransport it_;    
  image_transport::Subscriber image_sub_; //image subscriber 
  image_transport::Publisher image_pub_; //image publisher(we subscribe to ardrone image_raw)
  std_msgs::String msg;
public:
 walldetect()
    : it_(nh_)
  {
 
      pub= n.advertise<turtlesim::Pose>("/drone/walldis", 500);
      image_sub_ = it_.subscribe("/ardrone/image_raw", 1, &walldetect::imageCb, this);
     image_pub_= it_.advertise("/arcv/Image",1);
 
 
  }
 
  ~walldetect()
  {
    cv::destroyWindow(WINDOW);
  }
 
  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {
 
     sensor_msgs::CvBridge bridge;//we need this object bridge for facilitating conversion from ros-img to opencv
 
 
	IplImage* img = bridge.imgMsgToCv(msg,"bgr8");  //image being converted from ros to opencv using cvbridge
 
 
 
 
 
    turtlesim::Pose dist;   // message used to report whether an obstacle (circle) is seen
    int i;

    // single-channel image for the grayscale copy of the frame
    IplImage* gray_out = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );

    CvMemStorage* storage = cvCreateMemStorage(0);
    cvCvtColor(img, gray_out, CV_BGR2GRAY);
    cvSmooth(gray_out, gray_out, CV_GAUSSIAN, 9, 9);   // blur to suppress noise before the Hough transform

    // detect circles; the parameters are tuned so that the projected flashlight
    // spot is picked up roughly 4 to 5 ft before the wall
    CvSeq* circles = cvHoughCircles(gray_out, storage,
        CV_HOUGH_GRADIENT, 2, 1200, 1, 1, 0, 80);
 
    for (i = 0; i < circles->total; i++)
    {
        float* p = (float*)cvGetSeqElem( circles, i );
        printf("\n%d,%d", cvRound(p[0]), cvRound(p[1]));

        // the circle only counts as an obstacle when its centre falls in the
        // prescribed window around the middle of the frame
        if(cvRound(p[0]) >= 140 && cvRound(p[0]) <= 160)
        {
            if(cvRound(p[1]) >= 90 && cvRound(p[1]) <= 110)
            {
                printf("\nCIRCLE FOUND");
                dist.x = 1;              // obstacle ahead
                pub.publish(dist);
            }
        }
        else
        {
            dist.x = 0;                  // centre outside the prescribed region
            pub.publish(dist);
        }

        // draw the detected centre (filled dot) and the circle outline for visualisation
        cvCircle( img, cvPoint(cvRound(p[0]),cvRound(p[1])), 3, CV_RGB(0,255,0), -1, 8, 0 );
        cvCircle( img, cvPoint(cvRound(p[0]),cvRound(p[1])), cvRound(p[2]), CV_RGB(0,255,0), 3, 8, 0 );
    }
 
 
cvShowImage( "img",img);
cvShowImage( "grayall",gray_out);
  cvWaitKey(2);   
 
}
};
 
 
 
 
int main(int argc, char** argv)
{
  ros::init(argc, argv, "walldetect");
  walldetect ic;
  ros::spin();
 
  return 0;
}

Make changes in the CMakeLists.txt

Step 1: Open the file named “CMakeLists.txt” and search for the line beginning with “rosbuild_add_executable”. Add the file you created, “obsdet.cpp”, to it. Save and close the file.
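For reference, assuming you kept the file in the package's src folder, the added line should look roughly like this (the target name obsdet matches the rosrun command used later in this tutorial):

rosbuild_add_executable(obsdet src/obsdet.cpp)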

Compile to make the driver file

Open up a terminal and enter the commands one after the other:

roscd obsdet

cmake .
rosmake

Make sure you get “0” failures while making the package.

Add the line following module

In this part you will add the line following module.

1. Open up a terminal and enter:

roscreate-pkg linedet sensor_msgs opencv2 cv_bridge roscpp std_msgs image_transport turtlesim ardrone_brown std_srvs
roscd linedet

Go to the src folder and create a file called linedet.cpp

2. Paste the following code into the created file and save it.

linedet.cpp
#include <ros/ros.h>
#include <stdio.h>
#include <iostream>
#include "std_msgs/String.h"
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>     //make sure to include the relevant headerfiles
#include <opencv2/highgui/highgui.hpp>
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <cv_bridge/CvBridge.h>
#include <cvaux.h>
#include<math.h>
#include <cxcore.h>
#include "turtlesim/Velocity.h"
 
/* This is a simple program which demonstrates the use of ROS and OpenCV to do image
   manipulation on video streams published as image topics by the monocular camera of a
   robot; here the device used is an AR.Drone (quad-rotor). */
using namespace std;
using namespace cv;
namespace enc = sensor_msgs::image_encodings;
static const char WINDOW[] = "Image window";
 
float prevVelocity_angular ,prevVelocity_linear ,newVelocity_angular ,newVelocity_linear ;
float derive_angular, derive_linear, dt = 0.5;
float horizontalcount;
 
class ImageConverter
{
  ros::NodeHandle nh_;
  ros::NodeHandle n;
 ros::Publisher pub ;
  ros::Publisher tog;
  image_transport::ImageTransport it_;    
  image_transport::Subscriber image_sub_; //image subscriber 
  image_transport::Publisher image_pub_; //image publisher(we subscribe to ardrone image_raw)
  std_msgs::String msg;
public:
  ImageConverter()
    : it_(nh_)
  {
      pub= n.advertise<turtlesim::Velocity>("/drocanny/vanishing_points", 500);//
      image_sub_ = it_.subscribe("/ardrone/image_raw", 1, &ImageConverter::imageCb, this);
      image_pub_= it_.advertise("/arcv/Image",1);    
  }
 
  ~ImageConverter()
  {
    cv::destroyWindow(WINDOW);
  }
 
  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {
 
     sensor_msgs::CvBridge bridge;//we need this object bridge for facilitating conversion from ros-img to opencv
   IplImage* img = bridge.imgMsgToCv(msg,"rgb8");  //image being converted from ros to opencv using cvbridge
   turtlesim::Velocity velMsg;
 CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* lines = 0;
    int i;
    float xv, yv;   // x and y components of each detected line segment
    float buf;      // x coordinate of the segment's midpoint

    // restrict processing to the top-left 170x140 region of the bottom-camera image
    cvSetImageROI(img, cvRect(0, 0, 170, 140));

    IplImage* out1      = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 3 );  // smoothed colour copy used for visualisation
    IplImage* gray_out  = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );  // grayscale input to the Canny detector
    IplImage* canny_out = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );  // Canny edge map fed to the Hough transform
    IplImage* gray_out1 = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 3 );  // edge map converted back to colour for display
 
        cvSmooth( img, out1, CV_GAUSSIAN, 11, 11 );
 
      cvCvtColor(out1 , gray_out, CV_RGB2GRAY);
        cvCanny( gray_out, canny_out, 50, 125, 3 );
      cvCvtColor(canny_out ,gray_out1, CV_GRAY2BGR);
 
 
 
       lines = cvHoughLines2( canny_out, storage, CV_HOUGH_PROBABILISTIC, 1, CV_PI/180, 80,50, 10 );
        for( i = 0; i < lines->total; i++ )
        {
            CvPoint* line = (CvPoint*)cvGetSeqElem(lines,i);

            // draw the detected segment on both visualisation images
            cvLine( out1, line[0], line[1], CV_RGB(0,255,0), 1, 8 );
            cvLine( gray_out1, line[0], line[1], CV_RGB(0,255,0), 2, 8 );

            // orientation of the line segment, in degrees, published as the "linear" field
            xv = line[0].x - line[1].x;
            yv = line[0].y - line[1].y;
            velMsg.linear = atan2(xv,yv)*180 /3.14159265;
            if(velMsg.linear < -90)
            {
                velMsg.linear = velMsg.linear + 180;
            }

            // lateral offset of the segment's midpoint from the centre column of the
            // ROI (x = 85), published as the "angular" field; small or very large
            // offsets are zeroed out
            buf = (line[0].x + line[1].x)/2;
            if(abs(85-buf) <= 15)
            {
                velMsg.angular = 0;
            }
            else
            {
                velMsg.angular = (85 - (line[0].x + line[1].x)/2);
            }
            if(abs(velMsg.angular) > 50)
            {
                velMsg.angular = 0;
            }

            printf("\nX::Y::X1::Y1::%d:%d:%d:%d",line[0].x,line[0].y,line[1].x,line[1].y);
            pub.publish(velMsg);

            cvSeqRemove(lines,i);   // drop the processed segment from the sequence
        }
        cvShowImage( "OUT1", out1 );//lines projected onto the real picture
       cvShowImage( "GRAY_OUT1", gray_out1 );
       cv::waitKey(3);
   sensor_msgs::ImagePtr out = sensor_msgs::CvBridge::cvToImgMsg(img, "rgb8");//image converted from opencv to ros for publishing
   image_pub_.publish(out);
 cvClearMemStorage(storage);
cvReleaseMemStorage(&storage);
}
 
};
 
int main(int argc, char** argv)
{
  ros::init(argc, argv, "image_converter");
  ImageConverter ic;
  ros::spin();
  return 0;
}

Make changes in the CMakeLists.txt

Step 1: Open the file named “CMakeLists.txt” and search for the line beginning with “rosbuild_add_executable”. Add the file you created, “linedet.cpp”, to it. Save and close the file.
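As with the obstacle-detection package, and assuming the file sits in the package's src folder, the added line should look roughly like this (the target name linedet matches the rosrun command used later in this tutorial):

rosbuild_add_executable(linedet src/linedet.cpp)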

Compile to make the driver file

Open up a terminal and enter the commands one after the other:

roscd linedet
cmake .
rosmake

Make sure you get “0” failures while making the package.

Control module

In order to continue with this part, make sure you have finished this tutorial first.

Step 1: Open up a new terminal and enter:

roscd drone_teleop

Step 2: Go to the folder named “bin”, create a file called “autonomous.py”, and add the following code to it:

autonomous.py
#!/usr/bin/env python
import roslib; roslib.load_manifest('drone_teleop')
import rospy
import rosbag
from geometry_msgs.msg import Twist
from std_msgs.msg import Empty
from std_msgs.msg import Float64
from turtlesim.msg import Velocity
from turtlesim.msg import Pose
from sensor_msgs.msg import Joy
#from joy.msg import Joy
 
import sys, select, termios, tty
import time
msg = """
Reading from the keyboard  and Publishing to Twist!
  ---------------------------
up/down:       move forward/backward
left/right:    move left/right
w/s:           increase/decrease altitude
a/d:           turn left/right
1 / 2:           takeoff / land
4:             reset (toggle emergency state)
3:              toggle_cam_view(newly added)
6:		HOVER (right side top trigger)
q:              QUIT Auto
 
 
anything else: stop
 
please don't have caps lock on.                      
CTRL+c to quit                           
"""     #control portal for the user                            
xpos=0
ypos=0
xdis=0
joy_axes=[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
joy_buttons= [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
move_bindings = {
		68:('linear', 'y', 0.1), #left
		67:('linear', 'y', -0.1), #right
		65:('linear', 'x', 0.1), #forward
		66:('linear', 'x', -0.1), #back
		'w':('linear', 'z', 0.1),
		's':('linear', 'z', -0.1),
		'a':('angular', 'z', 1),
		'd':('angular', 'z', -1),
	       }
def callback(RecMsg):
 
    global xpos #displacement from the bottom line to be followed
    global ypos		
    global xdis	
    xpos = RecMsg.linear
    ypos = RecMsg.angular
 
    global i
def callback1(laserp):
 
    global xdis	
    xdis=  laserp.x	
 
def callback2(joystick_cmd):
	global joy_axes
	global joy_buttons
	joy_axes = joystick_cmd.axes
	joy_buttons=joystick_cmd.buttons
 
def turnleft():
    twist.angular.z=-1
    pub.publish(twist)	
    time.sleep(1)	
    twist.angular.z=0
    pub.publish(twist)		
 
def turnright():
    hover()				
    twist.angular.z=1
    pub.publish(twist)	
    time.sleep(1)	
    twist.angular.z=0
    pub.publish(twist)	
def getKey():
	tty.setraw(sys.stdin.fileno())
	select.select([sys.stdin], [], [], 0)
	key = sys.stdin.read(1)
	termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)
	return key
def hover():
	twist.linear.x = 0; twist.linear.y = 0; twist.linear.z = 0
	twist.angular.x = 0; twist.angular.y = 0; twist.angular.z = 0
	pub.publish(twist)
def motion():
	twist.linear.x = 0.1*joy_axes[5] #forward(+) and backward(-)(pitch) 
	twist.linear.y = 0.1*joy_axes[2] #lateral movement::LEFT(1):RIGHT(-1)(ROLL)
	twist.linear.z = 0.1*joy_axes[3] #altitude movement:UP(+):DOWN(-)
	twist.angular.x = 0 
	twist.angular.y = 0 
	twist.angular.z = 0.1*joy_axes[4]#angular::left(+)::right(-)	
	pub.publish(twist)
 
if __name__=="__main__":
    	settings = termios.tcgetattr(sys.stdin)
	print msg
 
	auto = True
	pub = rospy.Publisher('cmd_vel', Twist)
	land_pub = rospy.Publisher('/ardrone/land', Empty)
	reset_pub = rospy.Publisher('/ardrone/reset', Empty)
        toggle_pub=rospy.Publisher('/ardrone/togglecam', Empty)
	takeoff_pub =rospy.Publisher('/ardrone/takeoff', Empty)
	rospy.Subscriber('/drocanny/vanishing_points',Velocity,callback)
	rospy.Subscriber('/drone/walldis',Pose,callback1)
	rospy.Subscriber('/joy',Joy,callback2)
        twist = Twist()
	rospy.init_node('drone_teleop')
 
 
	try:
		while(True):
                        i=0
 
			#key = 	getKey()
                        buttonvalue=  joy_buttons
			if buttonvalue[1] == 1:
				land_pub.publish(Empty())
				time.sleep(0.25)
			if buttonvalue[3] == 1:
				reset_pub.publish(Empty())
				time.sleep(1.5)
				land_pub.publish(Empty())
				time.sleep(0.25)
			if buttonvalue[0] == 1:				
				takeoff_pub.publish(Empty())
				time.sleep(0.25)
			if buttonvalue[2] == 1:
		     		toggle_pub.publish(Empty())
				time.sleep(0.25)
			if buttonvalue[7]==1:
				time.sleep(0.25)
				while(True):
					buttonvalue= joy_buttons
			     		toggle_pub.publish(Empty())					
					time.sleep(0.25)#switch to bottom view				
					print 'autonomous node'									
					i=i+1
					time.sleep(0.10)		
                                        twist.linear.y=0.0011*(ypos)
                                        if(i<10):
						twist.linear.x=0.1	
					else:
						twist.linear.x=0.0
					pub.publish(twist)
					if(i>=60):
						i=0
					if(buttonvalue[7]==1):
						print 'quit'
						break
					toggle_pub.publish(Empty())					
					time.sleep(0.25)#switch to front view
					if(xdis==1):
				  		print 'yup'
						land_pub.publish(Empty())
						time.sleep(1)
				time.sleep(0.25)	
			if buttonvalue[5] == 1:
				hover()
 
			# takeoff and landing
 
			motion()
			#pub.publish(twist)
 
	except Exception as e:
		print e
		print repr(e)
 
	finally:
		twist = Twist()
		twist.linear.x = 0; twist.linear.y = 0; twist.linear.z = 0
		twist.angular.x = 0; twist.angular.y = 0; twist.angular.z = 0
		pub.publish(twist)
 
    		termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)

Step 3: Go to the bin folder in drone_teleop, right-click on the file “autonomous.py”, and in the Permissions tab check the box “Allow executing file as a program”.
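Alternatively, you can set the execute permission from a terminal:

roscd drone_teleop/bin
chmod +x autonomous.py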

Setting up the environment

Step 1: The test was done in an environment like the one in the picture below.

setup.jpg

This is just an example of an ideal environment. The code I have given is still at a developmental stage, and research on it is ongoing.

Step 2: Make sure you have done the following things before you fly.

1. Interface the drone with the laptop.
2. Connect the joystick (Logitech dual controller) to the computer.

Test the drone

Enter the following commands each in a new terminal:

roscore
rosrun ardrone_brown ardrone_driver
rosrun joy joy_node
rosrun obsdet obsdet
rosrun linedet linedet
rosrun drone_teleop autonomous.py

With the environment set up and the line laid on the floor, press button 1 on the joystick to take off. Place the drone directly above the line and hit button 8. The drone is now in autonomous mode and your controls won't have any effect; it should be following the line. To regain control of the drone, hit button 8 again and it will be back in manual mode.

I repeat that the work done here is research-based and not a solid end-user product, but it is certainly capable of being improved upon to carry out more autonomous aerial-vehicle surveillance operations.

Video

