vrkin — last modified 2018/04/24 11:26 by keitaronishimura
\\
   *[[vrkin#Parts Lists and Sources|Parts Lists and Sources]]
   *[[vrkin#Installing OpenCV and Kinect V2 package|Installing OpenCV and Kinect V2 package]]
   *[[vrkin#OpenCV Code|OpenCV Code]]
   *[[vrkin#Importing Video Feed into HMD|Importing Video Feed into HMD]]
  
|Tripod mounts for lighthouse base stations| Amazon or build in-house | https://www.amazon.com/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=tripod+for+camera | 2| Since the base stations have a standard camera mounting hole on the bottom, any standard camera tripod should work. You can also use scrap pieces of 2x4 wood to create your own mount for them.|
|A computer with Ubuntu 16.04| Amazon | https://www.amazon.com/|1| The Alienware 17 laptop in the lab is dual boot and should work well |
|Kinect for Xbox One| Amazon | [[https://www.amazon.com/Xbox-One-Kinect-Sensor/dp/B00INAX3Q2/ref=pd_lpo_vtph_bs_t_1?_encoding=UTF8&psc=1&refRID=9NTNNSPE7081Z41C1TZE|link]] | 1 | You will also need a converter plate to mount it on HUBO and an adapter to interface with a computer |
|Kinect Adapter for Windows 10| Amazon | [[https://www.amazon.com/Kinect-Adapter-Windows-Xbox-One-Generic/dp/B07CC3WRVF/ref=pd_sbs_63_3?_encoding=UTF8&pd_rd_i=B07CC3WRVF&pd_rd_r=GC9Q2CQVCY2NBHN0PA4R&pd_rd_w=6TgUi&pd_rd_wg=W2K2F&psc=1&refRID=GC9Q2CQVCY2NBHN0PA4R|link]] | 1 | You will need a USB 3.0 port to interface with the adapter |
|Plate to Mount Kinect on HUBO| 3D print | No link | 1 | You will need to 3D print a converter plate to mount the Kinect on Hubo's upper body |
==== Installing OpenCV and Kinect V2 package ====
First, you will need to install the following packages on the computer running Ubuntu 16.04.
\\
   *[[http://wiki.ros.org/kinetic/Installation/Ubuntu|Install ROS Kinetic]]
\\
{{ vive_tut:opencv:2-3.png?400 }}
\\
<fc red>
Make sure to complete [[http://wiki.ros.org/ROS/Tutorials|ROS tutorials 1~4]]
{{ vive_tut:opencv:2-4.png?400 }}
</fc>
   *[[http://wiki.ros.org/web_video_server|Install the web video server package]]
\\
{{ vive_tut:opencv:2-6.png?400 }}
\\
   *[[https://docs.opencv.org/master/d7/d9f/tutorial_linux_install.html|Install OpenCV]]
\\
{{ vive_tut:opencv:2-2.png?400 }}
   *[[https://github.com/code-iai/iai_kinect2#install|Install Kinect V2 package]]
\\
{{ vive_tut:opencv:2-1.png?400 }}
<fc red>
Note: Though the installation instructions are written for Ubuntu 14.04, they work just fine on 16.04
</fc>
==== OpenCV Code ====
This tutorial will not cover the OpenCV code and how it works in detail; it only covers how to integrate the different components. Please refer to the [[https://docs.opencv.org/3.4.0/d9/df8/tutorial_root.html|OpenCV tutorials]] for more information.
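The detection in the code below rests on HSV thresholding: a pixel counts as "yellow" when its hue, saturation, and value each fall inside a fixed band, which is what OpenCV's ''inRange'' call does. As a minimal standalone sketch, here is that per-pixel test with the same bounds the tutorial code uses (the helper name ''inHsvRange'' is made up for illustration):
<code cpp>
// Standalone sketch of the per-pixel band test performed by cv::inRange.
// The bounds match the iLowH/iHighH etc. values set in main.cpp below.
bool inHsvRange(int h, int s, int v)
{
    return h >= 22  && h <= 34    // hue band for yellow (OpenCV hue runs 0~179)
        && s >= 38  && s <= 244   // saturation band
        && v >= 160 && v <= 255;  // value (brightness) band
}
</code>
''inRange'' applies exactly this test to every pixel, producing the binary mask that ''Location()'' later measures.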
\\
1. Go to your catkin workspace (catkin_ws) and create a new package to house the OpenCV code.
\\
2. In the package's src directory, make a main.cpp file and add the following code to it:
<code cpp>
//! [headers]
#include <iostream>
#include <stdio.h>
#include <iomanip>
#include <time.h>
#include <signal.h>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/frame_listener_impl.h>
#include <libfreenect2/registration.h>
#include <libfreenect2/packet_pipeline.h>
#include <libfreenect2/logger.h>

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <geometry_msgs/TwistStamped.h>
//! [headers]

bool protonect_shutdown = false; // Whether the running application should shut down.

void sigint_handler(int s)
{
    protonect_shutdown = true;
}

void HSVOnly(int, int, int, int, int, int);
void Location();

cv::Mat imgCircles;
cv::Mat imgThresholded;
cv::Mat rgbd_flip;
cv::Mat test;
cv::Mat rgbmat_flip;

// HSV bounds for detecting a yellow object
int iLowH = 22;
int iHighH = 34;
int iLowS = 38;
int iHighS = 244;
int iLowV = 160;
int iHighV = 255;
int posX;
int posY;
int posY_test;
int cenX;
int cenY;
float hand_x;
float hand_y;
float hand_z;
float dd;
int something = 0; // state variable for the depth-lookup search in main

using namespace cv;
using namespace std;

// Callback for Hubo's right-hand FK data (converted from m to mm)
void hand_fk(const geometry_msgs::TwistStamped::ConstPtr& ts)
{
    hand_x = 1000*ts->twist.linear.x;
    hand_y = 1000*ts->twist.linear.y;
    hand_z = 1000*ts->twist.linear.z;
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "image_converter");
    ros::NodeHandle nh;
    image_transport::ImageTransport it(nh);
    image_transport::Publisher pub = it.advertise("test/test", 1);
    image_transport::Publisher rgb_pub = it.advertise("test/rgb", 1);
    ros::Subscriber sub = nh.subscribe("/fk", 10, hand_fk);

    std::cout << "Streaming from Kinect One sensor!" << std::endl;

    //! [context]
    libfreenect2::Freenect2 freenect2;
    libfreenect2::Freenect2Device *dev = 0;
    libfreenect2::PacketPipeline *pipeline = 0;
    //! [context]

    //! [discovery]
    if(freenect2.enumerateDevices() == 0)
    {
        std::cout << "no device connected!" << std::endl;
        return -1;
    }

    string serial = freenect2.getDefaultDeviceSerialNumber();

    std::cout << "SERIAL: " << serial << std::endl;

    if(pipeline)
    {
        //! [open]
        dev = freenect2.openDevice(serial, pipeline);
        //! [open]
    } else {
        dev = freenect2.openDevice(serial);
    }

    if(dev == 0)
    {
        std::cout << "failure opening device!" << std::endl;
        return -1;
    }

    signal(SIGINT, sigint_handler);
    protonect_shutdown = false;

    //! [listeners]
    libfreenect2::SyncMultiFrameListener listener(libfreenect2::Frame::Color |
                                                  libfreenect2::Frame::Depth |
                                                  libfreenect2::Frame::Ir);
    libfreenect2::FrameMap frames;

    dev->setColorFrameListener(&listener);
    dev->setIrAndDepthFrameListener(&listener);
    //! [listeners]

    //! [start]
    dev->start();

    std::cout << "device serial: " << dev->getSerialNumber() << std::endl;
    std::cout << "device firmware: " << dev->getFirmwareVersion() << std::endl;
    //! [start]

    //! [registration setup]
    libfreenect2::Registration* registration = new libfreenect2::Registration(dev->getIrCameraParams(), dev->getColorCameraParams());
    libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4), depth2rgb(1920, 1080 + 2, 4);
    //! [registration setup]

    Mat rgbmat, depthmat, depthmatUndistorted, irmat, rgbd, rgbd2;

    //! [loop start]
    while(!protonect_shutdown)
    {
        ros::spinOnce();

        listener.waitForNewFrame(frames);
        libfreenect2::Frame *rgb = frames[libfreenect2::Frame::Color];
        libfreenect2::Frame *ir = frames[libfreenect2::Frame::Ir];
        libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];
        //! [loop start]

        //! [converting libfreenect2 to opencv mat]
        cv::Mat(rgb->height, rgb->width, CV_8UC4, rgb->data).copyTo(rgbmat);
        cv::Mat(ir->height, ir->width, CV_32FC1, ir->data).copyTo(irmat);
        cv::Mat(depth->height, depth->width, CV_32FC1, depth->data).copyTo(depthmat);
        cv::flip(depthmat, test, 1);
        cv::flip(rgbmat, rgbmat_flip, 1);
        //! [converting libfreenect2 to opencv mat]

        //! [registration]
        registration->apply(rgb, depth, &undistorted, &registered, true, &depth2rgb);
        //! [registration]

        //! [converting libfreenect2 to opencv mat]
        cv::Mat(registered.height, registered.width, CV_8UC4, registered.data).copyTo(rgbd);
        cv::flip(rgbd, rgbd_flip, 1);
        //! [converting libfreenect2 to opencv mat]

        //! [finding yellow objects]
        HSVOnly(iLowH, iHighH, iLowS, iHighS, iLowV, iHighV);
        Location();
        //! [finding yellow objects]

        int posY_comp = posY - 150;

        if(something <= 10)
        {
            posY_test = posY;
        }

        // Scan upward from the detected centroid in the depth image until a
        // usable depth reading for the object is found
        while(something < 1)
        {
            dd = test.at<float>(Point(posX, posY_test));
            printf("x: %i, y: %i, d: %f\n", posX, posY_test, dd);

            if(something == 0 && dd != 0 && dd < 10000)
            {
                posY_test = posY_test - 1;
            }
            if(something == 0 && dd == 0)
            {
                something = 1;
                posY_test = posY_test + 7;
            }
            if(something == 1 && dd == 0)
            {
                posY_test = posY_test - 1;
            }
            if(something == 1 && dd != 0)
            {
                something = 2;
            }
            if(something == 2 && dd < 10000 && dd != 0)
            {
                posY_test = posY_test - 1;
            }
            if(something == 2 && dd == 0)
            {
                something = 3;
            }
            if(something == 3 && dd == 0)
            {
                something = 11;
                posY_test = posY_test + 4;
                dd = test.at<float>(Point(posX, posY_test));
                printf("x: %i, y: %i, d: %f\n", posX, posY_test, dd);
            }
            if(dd > 10000)
            {
                something = 11;
            }
        }

        //! [adding distance data to image]
        dd = test.at<float>(Point(posX, posY_test));
        float dd_comp = dd + 160;
        float diff = dd_comp - hand_x;
        cv::Point org;
        cv::Point rgb_org;
        org.x = posX - 100;
        org.y = posY_comp - 50;
        rgb_org.x = 3.5*posX;
        rgb_org.y = posY_test;
        char str[200];

        sprintf(str, "%.1fmm", diff);
        cv::putText(rgbd_flip, str, org, 0, 1, Scalar(255,255,0), 2, 8);
        cv::putText(rgbmat_flip, str, rgb_org, 0, 1, Scalar(255,255,0), 2, 8);
        //! [adding distance data to image]

        sensor_msgs::ImagePtr msg = cv_bridge::CvImage(std_msgs::Header(), "bgra8", rgbd_flip).toImageMsg();
        sensor_msgs::ImagePtr rgb_msg = cv_bridge::CvImage(std_msgs::Header(), "bgra8", rgbmat_flip).toImageMsg();
        pub.publish(msg);
        rgb_pub.publish(rgb_msg);

        if (posX >= 0 && posY >= 0)
        {
            cenX = 2*posX;
            cenY = 2*posY_test;
            //cenY = 2*posY_comp;
            //Draw a red circle encompassing the object
            circle(rgbd_flip, Point(cenX, cenY), 25, Scalar(0, 0, 255), 1, 8, 1);
            circle(rgbd_flip, Point(cenX, cenY), 26, Scalar(0, 0, 255), 1, 8, 1);
            circle(rgbd_flip, Point(cenX, cenY), 27, Scalar(0, 0, 255), 1, 8, 1);
            circle(rgbd_flip, Point(cenX, cenY), 24, Scalar(0, 0, 255), 1, 8, 1);
            circle(rgbd_flip, Point(cenX, cenY), 23, Scalar(0, 0, 255), 1, 8, 1);
        }

        cv::imshow("rgb", rgbmat_flip);
        cv::imshow("registered", rgbd_flip);
        cv::imshow("threshold", imgThresholded);

        something = 0;

        int key = cv::waitKey(1);
        protonect_shutdown = protonect_shutdown || (key > 0 && ((key & 0xFF) == 27)); // shutdown on escape

        //! [loop end]
        listener.release(frames);
    }
    //! [loop end]

    //! [stop]
    dev->stop();
    dev->close();
    //! [stop]

    delete registration;

    std::cout << "Streaming Ends!" << std::endl;
    return 0;
}

void HSVOnly(int iLowH, int iHighH, int iLowS, int iHighS, int iLowV, int iHighV)
{
    cv::Mat imgHSV;

    //Convert the captured frame from BGR to HSV
    cvtColor(rgbd_flip, imgHSV, COLOR_BGR2HSV);

    //Threshold the image
    inRange(imgHSV, Scalar(iLowH, iLowS, iLowV), Scalar(iHighH, iHighS, iHighV), imgThresholded);

    //morphological opening (remove small objects from the foreground)
    erode(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
    dilate(imgThresholded, imgThresholded, getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));
}

void Location()
{
    imgCircles = Mat::zeros(rgbd_flip.size(), CV_8UC4);
    Moments oMoments = moments(imgThresholded);

    double dM01 = oMoments.m01;
    double dM10 = oMoments.m10;
    double dArea = oMoments.m00;

    if (dArea > 10000)
    {
        //calculate the position of the ball
        posX = dM10 / dArea;
        posY = dM01 / dArea;
    }
}
</code>
This code takes the data from the Kinect and finds the distance between the sensor and a yellow object. Then, using the FK data from Hubo's right hand, it displays the distance between the yellow object and Hubo's right hand above the detected object. How to run the code will be explained later on.
\\
Then make a CMakeLists.txt file in the package and add the following code into it:
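The object's pixel position comes from image moments: ''Location()'' takes the centroid of the thresholded mask, posX = m10/m00 and posY = m01/m00. A small OpenCV-free sketch of that same computation (the ''centroid'' helper is hypothetical, for illustration only):
<code cpp>
#include <vector>

// Centroid of a binary mask via raw image moments, as in Location():
// m00 = number of set pixels, m10 = sum of their x, m01 = sum of their y.
// Caller must ensure the mask is non-empty (Location() guards with dArea > 10000).
struct Centroid { double x; double y; };

Centroid centroid(const std::vector<std::vector<int>>& mask)
{
    double m00 = 0, m10 = 0, m01 = 0;
    for (size_t y = 0; y < mask.size(); ++y)
        for (size_t x = 0; x < mask[y].size(); ++x)
            if (mask[y][x]) { m00 += 1; m10 += x; m01 += y; }
    return { m10 / m00, m01 / m00 };
}
</code>
OpenCV's ''moments()'' computes these sums (plus higher-order moments) in one pass; the tutorial code only uses m00, m10, and m01.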
<code txt>
cmake_minimum_required(VERSION 2.8.3)
project(kinect2hubo)

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DBT_USE_DOUBLE_PRECISION -Wall")
# Unused warnings
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wuninitialized -Winit-self -Wunused-function -Wunused-label -Wunused-variable -Wunused-but-set-variable -Wunused-but-set-parameter")
# Additional warnings
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Warray-bounds -Wtype-limits -Wreturn-type -Wsequence-point -Wparentheses -Wmissing-braces -Wchar-subscripts -Wswitch -Wwrite-strings -Wenum-compare -Wempty-body -Wlogical-op")

# Check for c++11 support
INCLUDE(CheckCXXCompilerFlag)
CHECK_CXX_COMPILER_FLAG("-std=c++11" COMPILER_SUPPORTS_CXX11)
CHECK_CXX_COMPILER_FLAG("-std=c++0x" COMPILER_SUPPORTS_CXX0X)
IF(COMPILER_SUPPORTS_CXX11)
  SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
ELSEIF(COMPILER_SUPPORTS_CXX0X)
  SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x")
ELSE()
  MESSAGE(FATAL_ERROR "The compiler ${CMAKE_CXX_COMPILER} has no C++11 support. Please use a different C++ compiler.")
ENDIF()

find_package(catkin REQUIRED COMPONENTS
  cv_bridge
  geometry_msgs
  message_runtime
  roscpp
  sensor_msgs
  std_msgs
  image_transport
)
find_package(freenect2 REQUIRED HINTS "$ENV{HOME}/freenect2") # for interfacing with kinect v2
find_package(OpenCV REQUIRED)

include_directories(
  ${catkin_INCLUDE_DIRS}
  ${OpenCV_INCLUDE_DIRS}
  ${freenect2_INCLUDE_DIRS}
)

add_executable(${PROJECT_NAME}_node src/main.cpp)

target_link_libraries(${PROJECT_NAME}_node
  ${catkin_LIBRARIES}
  ${OpenCV_LIBRARIES}
  ${freenect2_LIBRARY}
)
</code>
Then make a package.xml file and add the following code:
<code xml>
<?xml version="1.0"?>
<package format="2">
  <name>kinect2hubo</name>
  <version>0.0.0</version>
  <description>The kinect2hubo package</description>

  <maintainer email="katnimura@gmail.com">Keitaro Nishimura</maintainer>

  <license>TODO</license>

  <buildtool_depend>catkin</buildtool_depend>
  <build_depend>cv_bridge</build_depend>
  <build_depend>geometry_msgs</build_depend>
  <build_depend>roscpp</build_depend>
  <build_depend>sensor_msgs</build_depend>
  <build_depend>std_msgs</build_depend>
  <build_depend>image_transport</build_depend>
  <build_export_depend>cv_bridge</build_export_depend>
  <build_export_depend>geometry_msgs</build_export_depend>
  <build_export_depend>roscpp</build_export_depend>
  <build_export_depend>sensor_msgs</build_export_depend>
  <build_export_depend>std_msgs</build_export_depend>
  <build_export_depend>image_transport</build_export_depend>
  <exec_depend>cv_bridge</exec_depend>
  <exec_depend>geometry_msgs</exec_depend>
  <exec_depend>message_runtime</exec_depend>
  <exec_depend>roscpp</exec_depend>
  <exec_depend>sensor_msgs</exec_depend>
  <exec_depend>std_msgs</exec_depend>
  <exec_depend>image_transport</exec_depend>
</package>
</code>
Then you should be ready to build the package using catkin_make.
==== Importing Video Feed into HMD ====
\\
To insert the video feed into the HMD we will use the UniversalVROverlay package on the Windows 10 computer running the HTC Vive. Please download the [[https://github.com/scudzey/UVROverlay|source code]] from github and build it to create the exe file.
\\
{{ vive_tut:opencv:2-7.png?400 }}
\\
The following steps show how to stream the video feed from the OpenCV code into the Vive HMD.
  
1. Connect all computers to the same router

2. Run the iai_kinect2 code

3. Run the OpenCV code

4. Run the web video server

5. View the stream on the Vive PC through a web browser

6. Use the UniversalVROverlay to input the stream into the HMD