Networked Video Homography using the Pegasus Cam

Author: Dylan Wallace Email: wallad3@unlv.nevada.edu
Date: Last modified on 03/04/19
Keywords: Pegasus Cam, Image Homography, Real-time

This tutorial shows how to implement real-time networked video homography using the Pegasus Cam. It covers ArUco marker detection, facial tracking, and ball tracking. This tutorial will help the reader become more comfortable using the Pegasus Cam for computer vision, and will demonstrate how powerful the Pegasus Cam framework is for networked computer vision. This tutorial takes around 2 hours to complete.
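
As a taste of what the detection side of this pipeline looks like, the sketch below pulls frames from a networked camera stream and runs ArUco marker detection on each one. This is only a minimal illustration, not the tutorial's exact code: the stream URL is a placeholder for whatever address your Pegasus Cam serves on your network, and it assumes the classic aruco module shipped with the opencv-contrib-python package.

import cv2

STREAM_URL = "http://192.168.1.50:8080/stream"  # placeholder Pegasus Cam stream address

cap = cv2.VideoCapture(STREAM_URL)
aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_6X6_250)
params = cv2.aruco.DetectorParameters_create()

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect ArUco markers in the current frame and draw them for inspection
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("Pegasus Cam", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()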

Motivation and Audience

This tutorial's motivation is to show how to allow multiple robots to collaborate through shared visual information. This tutorial assumes the reader has the following background and interests:

* Knowledge of the basics of the Linux command line interface (CLI)
* Working knowledge of Python
* Additional useful background may include OpenCV experience

The rest of this tutorial is presented as follows:

Homography Overview
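
At a high level, a planar homography relates two views of the same plane by a 3x3 matrix H: corresponding pixel coordinates (x, y) in one image and (x', y') in the other satisfy, up to a scale factor s,

s \begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = H \begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
\qquad
H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix}

Since H is only defined up to scale, it has 8 degrees of freedom and can be estimated from as few as four point correspondences, which is why a handful of shared detections (for example, the four corners of an ArUco marker seen by both cameras) is enough to register one camera's view onto the other's.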


Code Overview


Demonstration


Final Words

This tutorial's objective was to show how to implement real-time video homography using the Pegasus Cam. Complete source code, as well as descriptions of the code, was provided. Once the concepts are understood, the reader should be able to implement real-time video homography using their Pegasus Cam or any other OpenCV setup.
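
For readers who want a concrete starting point in a generic OpenCV setup, the sketch below estimates a homography between two camera views and warps one into the other's frame. It is only an illustration under assumed conditions: the image filenames are placeholders, and ORB feature matches stand in for whatever correspondences your own pipeline provides (shared marker corners, for example); any four or more matched points are enough for cv2.findHomography.

import cv2
import numpy as np

img_a = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder filenames
img_b = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

# ORB keypoints and descriptors in each view
orb = cv2.ORB_create(1000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Brute-force matching with cross-check, keeping the 50 strongest matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:50]

pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects bad matches while fitting the 3x3 homography matrix H
H, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)

# Warp view A into view B's frame so the two images can be overlaid
h, w = img_b.shape
warped_a = cv2.warpPerspective(img_a, H, (w, h))
cv2.imwrite("a_in_b_frame.jpg", warped_a)

Using RANSAC in cv2.findHomography keeps the estimate stable when some of the matches are wrong, which is the usual case with noisy real-time feeds.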

Future work derived from this tutorial includes homography with n source cameras and multi-perspective homography for varying levels of visual detail. In the big picture, this tutorial provides a starting point for solving the problem of small-team visual collaboration.

For questions, clarifications, etc., email wallad3@unlv.nevada.edu
