
Networked Video Homography using the Pegasus Cam

Author: Dylan Wallace Email: wallad3@unlv.nevada.edu
Date: Last modified on 03/04/19
Keywords: Pegasus Cam, Image Homography, Real-time

This tutorial shows how to implement real-time networked video homography using Python, OpenCV, and the Pegasus Cam. The big-picture problem is to allow small teams of robots to collaborate using shared visual information. Solving this partially or completely is important because it will allow teams of robots to collaborate in scenarios such as disaster rescue and cleanup, emergency response, and human-robot collaboration. This tutorial takes approximately 1 hour to complete.

The source code for the real-time networked homography with the Pegasus Cams can be found on the Pegasus Cam GitHub.

Motivation and Audience

This tutorial's motivation is to show how to allow multiple robots to collaborate through shared visual information. This tutorial assumes the reader has the following background and interests:

* Familiarity with the basics of the Linux command line interface (CLI)
* Working knowledge of Python
* Optionally, prior experience with OpenCV

The rest of this tutorial is presented as follows: Homography Overview, Code Overview, Demonstration, and Final Words.

Homography Overview

Homography is the process of aligning two image planes using keypoints in the images as reference points. The algorithm for the real-time homography program is quite simple. First, we open the two Pegasus Cam streams. Then, for every frame in the two streams:

1. Extract keypoints using Difference of Gaussians
2. Extract invariant descriptors using SIFT/SURF
3. Match keypoints between the two images using the descriptors
4. Use RANSAC to create a homography matrix from the matched feature vectors
5. Apply the homography matrix transformation

However, in order to run this program in real time, it is more efficient to complete steps 1-4 only for the first few frames, cache the resulting homography matrix, and then apply only the transformation (step 5) to every subsequent frame of the two video streams. A minimal sketch of this pipeline is shown below.
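The snippet below is a minimal sketch of steps 1-5 for a single pair of frames, written with OpenCV's SIFT implementation (which requires the opencv-contrib-python package). The compute_homography helper name, the 0.75 ratio test, and the 4.0 RANSAC reprojection threshold are illustrative choices for this sketch, not values taken from the tutorial code.

# a minimal sketch of steps 1-5 for one pair of frames; SIFT requires
# opencv-contrib-python, and all names/thresholds here are illustrative
import cv2
import numpy as np

def compute_homography(imageA, imageB):
	# steps 1-2: extract keypoints (Difference of Gaussians) and
	# invariant SIFT descriptors from each grayscale image
	sift = cv2.xfeatures2d.SIFT_create()
	grayA = cv2.cvtColor(imageA, cv2.COLOR_BGR2GRAY)
	grayB = cv2.cvtColor(imageB, cv2.COLOR_BGR2GRAY)
	(kpsA, featuresA) = sift.detectAndCompute(grayA, None)
	(kpsB, featuresB) = sift.detectAndCompute(grayB, None)

	# step 3: match descriptors between the two images, keeping only
	# matches that pass Lowe's ratio test
	matcher = cv2.BFMatcher()
	rawMatches = matcher.knnMatch(featuresA, featuresB, k=2)
	matches = [pair[0] for pair in rawMatches
		if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

	# step 4: estimate the homography matrix with RANSAC; at least
	# four matched points are required
	if len(matches) < 4:
		return None
	ptsA = np.float32([kpsA[m.queryIdx].pt for m in matches])
	ptsB = np.float32([kpsB[m.trainIdx].pt for m in matches])
	(H, status) = cv2.findHomography(ptsA, ptsB, cv2.RANSAC, 4.0)
	return H

# left and right are BGR frames grabbed from the two camera streams;
# compute H once (mapping right-image points into left-image
# coordinates), then reuse it for every subsequent frame
H = compute_homography(right, left)

# step 5: warp the right frame into the left frame's plane and
# overlay the left frame to form the panorama
result = cv2.warpPerspective(right, H,
	(left.shape[1] + right.shape[1], left.shape[0]))
result[0:left.shape[0], 0:left.shape[1]] = left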

A good example of homography in everyday life is creating a panoramic image with your smartphone.

Code Overview

The code for the real-time homography was implemented in Python. The full code with detailed comments can be seen below.

# USAGE
# python realtime_stitching.py
 
# import the necessary packages
from __future__ import print_function
from pyimagesearch.basicmotiondetector import BasicMotionDetector
from pyimagesearch.panorama import Stitcher
from imutils.video import VideoStream
import numpy as np
import datetime
import imutils
import time
import cv2
 
# initialize the Pegasus Cam video streams and allow them to warmup
print("[INFO] starting cameras...")
leftStream = cv2.VideoCapture("http://192.168.50.22:3000/html/cam_pic_new.php?time=9999999999999&pDelay=40000")
rightStream = cv2.VideoCapture("http://192.168.50.23:3000/html/cam_pic_new.php?time=9999999999999&pDelay=40000")
time.sleep(2.0)
 
# initialize the image stitcher, motion detector, and total
# number of frames read
stitcher = Stitcher()
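# minArea is the smallest contour area (in pixels) treated as motion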
motion = BasicMotionDetector(minArea=500)
total = 0
 
# loop over frames from the video streams
while True:
	# grab the frames from their respective video streams
	retl, left = leftStream.read()
	retr, right = rightStream.read()

	# if either frame could not be read, stop the loop
	if not retl or not retr:
		print("[INFO] failed to grab a frame from one of the streams")
		break

	# resize the frames
	left = imutils.resize(left, width=512)
	right = imutils.resize(right, width=512)
 
	# stitch the frames together to form the panorama
	# IMPORTANT: you might have to change this line of code
	# depending on how your cameras are oriented; frames
	# should be supplied in left-to-right order
	result = stitcher.stitch([left, right])
 
	# no homography could be computed
	if result is None:
		print("[INFO] homography could not be computed")
		break
 
	# convert the panorama to grayscale, blur it slightly, update
	# the motion detector
	gray = cv2.cvtColor(result, cv2.COLOR_BGR2GRAY)
	gray = cv2.GaussianBlur(gray, (21, 21), 0)
	locs = motion.update(gray)
 
	# only process the panorama for motion if a nice average has
	# been built up
	if total > 32 and len(locs) > 0:
		# initialize the minimum and maximum (x, y)-coordinates,
		# respectively
		(minX, minY) = (np.inf, np.inf)
		(maxX, maxY) = (-np.inf, -np.inf)
 
		# loop over the locations of motion and accumulate the
		# minimum and maximum locations of the bounding boxes
		for l in locs:
			(x, y, w, h) = cv2.boundingRect(l)
			(minX, maxX) = (min(minX, x), max(maxX, x + w))
			(minY, maxY) = (min(minY, y), max(maxY, y + h))
 
		# draw the bounding box
		cv2.rectangle(result, (minX, minY), (maxX, maxY),
			(0, 0, 255), 3)
 
	# increment the total number of frames read and draw the 
	# timestamp on the image
	total += 1
	timestamp = datetime.datetime.now()
	ts = timestamp.strftime("%A %d %B %Y %I:%M:%S%p")
	cv2.putText(result, ts, (10, result.shape[0] - 10),
		cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)
 
	# show the output images
	cv2.imshow("Result", result)
	cv2.imshow("Left Frame", left)
	cv2.imshow("Right Frame", right)
	key = cv2.waitKey(1) & 0xFF
 
	# if the `q` key was pressed, break from the loop
	if key == ord("q"):
		break
 
# do a bit of cleanup
print("[INFO] cleaning up...")
print(total)
cv2.destroyAllWindows()
leftStream.release()
rightStream.release()
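The Stitcher class imported above comes from the pyimagesearch module distributed with the source code; its internals are not reproduced in this tutorial. As a rough illustration of the caching optimization described in the Homography Overview, a stitcher along these lines would compute the homography once and reuse it. The cachedH attribute is an assumed name, and compute_homography is the sketch helper from earlier, not part of the actual module:

# an illustrative sketch of a caching stitcher, not the actual
# pyimagesearch implementation
class CachedStitcher:
	def __init__(self):
		# homography matrix cached after the first successful match
		self.cachedH = None

	def stitch(self, images):
		# frames are supplied in left-to-right order
		(left, right) = images

		# run the full keypoint/RANSAC pipeline (steps 1-4) only once
		if self.cachedH is None:
			self.cachedH = compute_homography(right, left)
			if self.cachedH is None:
				return None

		# step 5: apply the cached transformation to every frame
		result = cv2.warpPerspective(right, self.cachedH,
			(left.shape[1] + right.shape[1], left.shape[0]))
		result[0:left.shape[0], 0:left.shape[1]] = left
		return result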


Demonstration

After implementing the real-time homography code from the previous section, you should get results similar to the video below.


Final Words

This tutorial's objective was to show how to implement real-time video homography using the Pegasus Cam. Complete source code, as well as a description of the code, was provided. Once these concepts are understood, the reader should be able to implement real-time video homography using their Pegasus Cam or any other OpenCV setup.

Future work derived from this tutorial includes homography with n source cameras, and multiple-perspective homography for varying visual detail. In the big picture, this tutorial contributes toward solving the problem of small-team visual collaboration.

For questions, clarifications, etc., email: wallad3@unlv.nevada.edu
