Annotated Bibliography Template

Author: Blake Hament Email: [email protected]
Date: Last modified on 04/20/2017
Keywords: UGV, UAV, Localization, Navigation, Visual Servoing, Heterogeneous Robot Cooperation

Papers

Zip file containing all papers coming soon… Password Protected Papers Zip File

Annotated References

1. Air-Ground Localization and Map Augmentation Using Monocular Dense Reconstruction

Air-Ground Localization and Map Augmentation Using Monocular Dense Reconstruction

Publisher: IROS 2013
Keywords (platform, field of research, algorithm or approach/methodologies, more details ): image reconstruction;mobile robots;position measurement;robot vision;3D map registration;3D reconstruction;MAV monocular camera;Monte Carlo localization;air-ground localization;depth sensor;ground robot;iterative pose refinement;live dense reconstruction;map augmentation;micro aerial vehicle;monocular dense reconstruction;position estimation;sensors;vantage points;visual feature matching;Cameras;Robot kinematics;Simultaneous localization and mapping;Three-dimensional displays

Bibtex:

@INPROCEEDINGS{6696924, author={C. Forster and M. Pizzoli and D. Scaramuzza}, booktitle={2013 IEEE/RSJ International Conference on Intelligent Robots and Systems}, title={Air-ground localization and map augmentation using monocular dense reconstruction}, year={2013}, pages={3971-3978}, keywords={image reconstruction;mobile robots;position measurement;robot vision;3D map registration;3D reconstruction;MAV monocular camera;Monte Carlo localization;air-ground localization;depth sensor;ground robot;iterative pose refinement;live dense reconstruction;map augmentation;micro aerial vehicle;monocular dense reconstruction;position estimation;sensors;vantage points;visual feature matching;Cameras;Robot kinematics;Simultaneous localization and mapping;Three-dimensional displays}, doi={10.1109/IROS.2013.6696924}, ISSN={2153-0858}, month={Nov},}

This paper describes: An algorithm for combining scans from a UAV and UGV

The authors present (simulations, experiments, theory): An algorithm for combining maps from a UGV and a UAV: egomotion estimation (SLAM on the UAV and UGV respectively); dense reconstruction (compute multiple depth maps from UAV data and fuse them with a cost function); global localization (align the UGV and UAV maps using a Zero Mean Sum of Squared Differences cost function within a Monte Carlo framework); pose refinement (Iterative Closest Point)
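
As a concrete illustration of the pose-refinement step, here is a minimal sketch of Iterative Closest Point (ICP) alignment between two roughly pre-aligned point clouds (e.g. the UGV and UAV maps). This is an illustrative Python/NumPy sketch, not the authors' code.

  # Minimal ICP pose refinement sketch (illustrative, not the authors' code).
  import numpy as np
  from scipy.spatial import cKDTree

  def icp_refine(source, target, iterations=20):
      """Refine a rigid transform aligning `source` (Nx3) onto `target` (Mx3)."""
      R_total, t_total = np.eye(3), np.zeros(3)
      src = source.copy()
      tree = cKDTree(target)
      for _ in range(iterations):
          # 1. Match every source point to its closest target point.
          _, idx = tree.query(src)
          matched = target[idx]
          # 2. Solve for the best rigid transform (Kabsch / SVD).
          mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
          H = (src - mu_s).T @ (matched - mu_t)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T
          t = mu_t - R @ mu_s
          # 3. Apply the increment and accumulate the total transform.
          src = src @ R.T + t
          R_total, t_total = R @ R_total, R @ t_total + t
      return R_total, t_total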

From this presentation, the paper concludes that: the presented algorithm is superior to existing methods because field tests indicate much faster runtimes (on the order of milliseconds)

From the state-of-the-art, the paper identifies challenges in: past issues such as algorithms working only on flat planes and processing times that were too slow

The paper addresses these challenges by: adding filters and regularization

The results of this approach are: Faster processing times during localization and mapping

The paper presents the following theoretical principles:

  1. Simultaneous UAV/UGV SLAM
  2. Dense Reconstruction
  3. Monte Carlo Global Localization
  4. Pose Refinement

The principles are explained (choose: well/fairly/poorly): fairly

For example (fill-in-the-blank e.g. the equations, graphs, figures),: Figure 6 shows the (choose: correct/questionable) application of the principles: correct

But the costmap/Monte-Carlo based global alignment in Figure 8 was difficult to interpret
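
To make the Monte Carlo global alignment easier to follow, here is a hedged sketch of how a Zero Mean Sum of Squared Differences (ZMSSD) score could be turned into particle weights. The patch representation and the Gaussian weighting are illustrative assumptions, not the paper's exact formulation.

  # Hedged sketch: ZMSSD scoring of candidate alignments for Monte Carlo
  # localization. Patches are assumed to be small 2D height/depth arrays.
  import numpy as np

  def zmssd(patch_a, patch_b):
      """Zero-mean sum of squared differences between two equal-size patches."""
      a = patch_a - patch_a.mean()
      b = patch_b - patch_b.mean()
      return float(np.sum((a - b) ** 2))

  def particle_weights(reference_patch, candidate_patches, sigma=1.0):
      """Turn ZMSSD scores for candidate poses into normalized particle weights."""
      scores = np.array([zmssd(reference_patch, p) for p in candidate_patches])
      w = np.exp(-scores / (2.0 * sigma ** 2))  # lower cost -> higher weight
      return w / w.sum()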

From the principles and results, the paper concludes:

  1. The proposed method allows fusion of maps captured from very different perspectives.
  2. This was demonstrated to be valuable because the UGV's map of a 3D structure was significantly enhanced by fusion with the UAV's map.

Blake liked this paper because:

  1. It provided a full overview of the authors' localization and mapping pipeline.
  2. The authors gave a very clear description of their Dense Reconstruction algo that helped me better understand the principles behind it.
  3. The authors employ excellent graphs to help the reader visualize trajectory errors and translation errors from point cloud operations.

I disliked this paper because: I had some trouble identifying what was being represented in some of the figures due to small image sizes and less-than-helpful captions.
I would have liked to see much more detail on the Monte Carlo global localization

Three things I learned from this paper were:

  1. Digital Surface Models
  2. Dense Reconstruction
  3. Iterative Closest Point algo

Time reading and annotating: ~ 1.5 hours

2. COOPERATIVE GROUND AND AIR SURVEILLANCE

COOPERATIVE GROUND AND AIR SURVEILLANCE
Publisher: IEEE Robotics & Automation Magazine 2006
Keywords (platform, field of research, algorithm or approach/methodologies, more details ): aerospace robotics;aircraft;cooperative systems;decentralised control;mobile robots;remotely operated vehicles;sensors;surveillance;telerobotics;air surveillance;cooperative system;decentralized control;ground surveillance;onboard sensors;unmanned aerial vehicles;unmanned ground vehicles;Cameras;Land vehicles;Object detection;Robot kinematics;Robot sensing systems;Robot vision systems;Robotics and automation;Surveillance;Uncertainty;Unmanned aerial vehicles
Bibtex:

@ARTICLE{1678135, author={B. Grocholsky and J. Keller and V. Kumar and G. Pappas}, journal={IEEE Robotics Automation Magazine}, title={Cooperative air and ground surveillance}, year={2006}, volume={13}, number={3}, pages={16-25}, keywords={aerospace robotics;aircraft;cooperative systems;decentralised control;mobile robots;remotely operated vehicles;sensors;surveillance;telerobotics;air surveillance;cooperative system;decentralized control;ground surveillance;onboard sensors;unmanned aerial vehicles;unmanned ground vehicles;Cameras;Land vehicles;Object detection;Robot kinematics;Robot sensing systems;Robot vision systems;Robotics and automation;Surveillance;Uncertainty;Unmanned aerial vehicles}, doi={10.1109/MRA.2006.1678135}, ISSN={1070-9932}, month={Sept},}

This paper describes: Active sensing with a team of heterogeneous robots for target localization and collaborative mapping

The authors present (simulations, experiments, theory): a target-location estimation equation based on the uncertainty of measurements

From this presentation, the paper concludes that: robot teams can search and scan an area more efficiently by using the measurement uncertainties to “information surf”, i.e., picking trajectories that follow the highest information-gain gradient
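
A minimal sketch of the “information surfing” idea, assuming the information-gain field is available as a precomputed 2D grid; the paper derives this field from its decentralized Gaussian estimator, so the grid here is a stand-in.

  # Hedged sketch of "information surfing": step each robot up the gradient
  # of an information-gain field. The grid of expected information gain is an
  # assumed input; the paper computes it from its decentralized estimator.
  import numpy as np

  def information_surf_step(pos, info_grid, cell_size=1.0, step=0.5):
      """Move one robot a small step up the information-gain gradient."""
      gy, gx = np.gradient(info_grid)              # d/drow, d/dcol of the field
      i = int(np.clip(pos[1] // cell_size, 0, info_grid.shape[0] - 1))
      j = int(np.clip(pos[0] // cell_size, 0, info_grid.shape[1] - 1))
      grad = np.array([gx[i, j], gy[i, j]])        # (d/dx, d/dy) at the robot cell
      norm = np.linalg.norm(grad)
      if norm < 1e-9:
          return np.asarray(pos, dtype=float)      # flat field: hold position
      return np.asarray(pos, dtype=float) + step * grad / norm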

From the state-of-the-art, the paper identifies challenges in: real-time localization, navigation, and mapping due to the high dimensionality of the data

The paper addresses these challenges by: building on decentralized estimation algorithms derived from linear dynamic models with assumptions of Gaussian noise; an active sensor network (ASN); certainty grids

The results of this approach are: the benefits include active sensing, decentralized processing, and measurement trajectories that are more efficient and that account for the inherent strengths and weaknesses of each robot platform

The paper presents the following theoretical principles:

  1. Information Gradients/Surfing
  2. Active Sensor Network
  3. Gaussian Noise in point cloud

The principles are explained (choose: well/fairly/poorly): fairly

For example (fill-in-the-blank e.g. the equations, graphs, figures),: Figure 8 shows the (choose: correct/questionable) application of the principles: correct (a great representation of the iso-mutual-information contours the robot follows during “information surfing”)

From the principles and results, the paper concludes:

  1. This method allows control of heterogeneous robots without tailoring to each robot's specific capabilities.
  2. Every robot has its own certainty grid containing computed probabilities of target detection at various positions (see the sketch after this list).
  3. This certainty grid changes constantly as new information is introduced.
  4. This approach is easily scalable.
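
A minimal sketch of the certainty grid mentioned in item 2 above: a grid of target-detection probabilities updated in log-odds form as measurements arrive. The sensor-model values are illustrative assumptions, not taken from the paper.

  # Hedged sketch of a per-robot certainty grid: target-detection probabilities
  # stored in log-odds form and fused as new measurements arrive. The sensor
  # model probabilities (p_hit, p_miss) are illustrative assumptions.
  import numpy as np

  class CertaintyGrid:
      def __init__(self, shape):
          self.log_odds = np.zeros(shape)          # probability 0.5 everywhere

      def update_cell(self, i, j, detected, p_hit=0.7, p_miss=0.4):
          """Fuse one detection / non-detection measurement for cell (i, j)."""
          p = p_hit if detected else p_miss
          self.log_odds[i, j] += np.log(p / (1.0 - p))

      def probabilities(self):
          """Return the grid as probabilities in [0, 1]."""
          return 1.0 / (1.0 + np.exp(-self.log_odds))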


Blake liked this paper because:

  1. Excellent figures
  2. Information surfing is a very interesting concept that I am excited to apply
  3. I enjoyed their treatment of probabilities

I disliked this paper because: No complaints, great paper
I would have liked to see: all of the work was done in 2D with height-map projections to capture the 3D environment; it would be interesting to see this work applied to a complex 3D environment, like a building, in which 2D points in the ground plane can map to multiple heights.

Three things I learned from this paper were:

  1. Information Surfing
  2. Control of heterogeneous robots, independent of the specific capabilities of each robot
  3. Reactive Controllers

Time reading and annotating: ~ 1.5 hours

3. ISSUES IN COOPERATIVE AIR/GROUND ROBOTIC SYSTEMS

ISSUES IN COOPERATIVE AIR/GROUND ROBOTIC SYSTEMS
Publisher: Springer Tracts in Advanced Robotics 2011
Keywords (platform, field of research, algorithm or approach/methodologies, more details ): UAV, UGV, Cooperation, Taxonomy
Bibtex:

@Inbook{Lacroix2011, author="Lacroix, Simon and Le Besnerais, Guy", editor="Kaneko, Makoto and Nakamura, Yoshihiko", title="Issues in Cooperative Air/Ground Robotic Systems", bookTitle="Robotics Research: The 13th International Symposium ISRR", year="2011", publisher="Springer Berlin Heidelberg", address="Berlin, Heidelberg", pages="421--432", isbn="978-3-642-14743-2", doi="10.1007/978-3-642-14743-2_35", url="http://dx.doi.org/10.1007/978-3-642-14743-2_35" }

This paper describes: Cooperation and perception schemas for ground-aerial robot teams

The authors present (simulations, experiments, theory): a taxonomy of UGV-UAV search/mapping schemes

From this presentation, the paper concludes that: The authors suggest that current G-A mapping techniques can be improved by associating data according to geometric primitives present in the environment

From the state-of-the-art, the paper identifies challenges in: stitching together images or whole maps from UGV and UAV

The paper addresses these challenges by: suggesting the addition of models to help estimation like models for the motion of the robots and the geometry of targets of interest

The results of this approach are: G-A robot teams can cooperate in three different scenarios (either the ground or the aerial robot supports the other, or they cooperate as near-equals); in perception, the problems of data registration and fusion must be solved

The paper presents the following theoretical principles:

  1. Localization
  2. Spin-images
  3. Traversability models
  4. Geometric models
  5. Navigation supports

The principles are explained (choose: well/fairly/poorly): well

For example (fill-in-the-blank e.g. the equations, graphs, figures),: Fig. 5 shows the (choose: correct/questionable) application of the principles: correct

From the principles and results, the paper concludes:

  1. The key problem with UGV-UAV cooperation is data registration and fusion
  2. Time constraints are salient in designing the algorithms
  3. Prior knowledge such as motion projections and geometric properties should be used to improve data fusion

Blake liked this paper because:

  1. Solid overview of UGV-UAV search schemes
  2. Good figures

I disliked this paper because: information was not very in-depth
I would have liked to see a taxonomy that extends beyond search/mapping applications

Three things I learned from this paper were:

  1. Traversability Models
  2. Navigation Support
  3. UGV-UAV cooperation schemes

Time reading and annotating: ~ 1 hour


4. PLANNING FOR A GROUND-AIR ROBOTIC SYSTEM WITH COLLABORATIVE LOCALIZATION

PLANNING FOR A GROUND-AIR ROBOTIC SYSTEM WITH COLLABORATIVE LOCALIZATION
Publisher: ICRA 2016
Keywords (platform, field of research, algorithm or approach/methodologies, more details ): autonomous aerial vehicles;PAD planner;SLC planner;UGV-UAV team operating indoors;collaborative localization;controller-based motion primitives;ground-air robotic system;high-quality localization information;payload capacity;planning adaptive dimensionality;robust navigation capabilities;state lattice planner;unmanned aerial vehicles;unmanned ground vehicles;visual features;Collaboration;Lattices;Planning;Robot sensing systems;Trajectory
Bibtex:

@INPROCEEDINGS{7487146, author={J. Butzke and K. Gochev and B. Holden and E. J. Jung and M. Likhachev}, booktitle={2016 IEEE International Conference on Robotics and Automation (ICRA)}, title={Planning for a ground-air robotic system with collaborative localization}, year={2016}, pages={284-291}, keywords={autonomous aerial vehicles;PAD planner;SLC planner;UGV-UAV team operating indoors;collaborative localization;controller-based motion primitives;ground-air robotic system;high-quality localization information;payload capacity;planning adaptive dimensionality;robust navigation capabilities;state lattice planner;unmanned aerial vehicles;unmanned ground vehicles;visual features;Collaboration;Lattices;Planning;Robot sensing systems;Trajectory}, doi={10.1109/ICRA.2016.7487146}

This paper describes: A state lattice planner using controller-based motion primitives (SLC) that uses planning with adaptive dimensionality (PAD)

The authors present (simulations, experiments, theory): experiments using a state lattice planner using controller-based motion primitives (SLC) that uses planning with adaptive dimensionality (PAD)

From this presentation, the paper concludes that: success rates and trial statistics show the SLC-PAD approach has a high rate of success and quick processing times

From the state-of-the-art, the paper identifies challenges in: real-time planning for heterogeneous robot teams in which the robots do not need to travel in formation

The paper addresses these challenges by: proposing PAD so that only the dimensions relevant at a given point are considered. For example: when moving a piano inside from the street, planning the orientation is irrelevant in the driveway but essential once you get to the doorway.
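
A hedged sketch of the adaptive-dimensionality idea, assuming a hand-specified list of "tight" regions (the doorway in the piano example) where orientation must be planned; the region list and discretization are illustrative, not the authors' planner.

  # Hedged sketch of planning with adaptive dimensionality (PAD): expand search
  # states in (x, y) by default and add the orientation dimension only inside
  # regions where it matters. Regions and step sizes are illustrative.
  from dataclasses import dataclass

  @dataclass
  class Rect:
      xmin: float
      xmax: float
      ymin: float
      ymax: float

      def contains(self, x, y):
          return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax

  TIGHT_REGIONS = [Rect(4.0, 5.0, 0.0, 1.0)]       # e.g. a doorway

  def successors(state):
      """Return neighbor states, adding theta only where orientation matters."""
      x, y = state[0], state[1]
      moves = [(dx, dy) for dx in (-0.5, 0.0, 0.5) for dy in (-0.5, 0.0, 0.5)]
      if any(r.contains(x, y) for r in TIGHT_REGIONS):
          theta = state[2] if len(state) > 2 else 0.0
          return [(x + dx, y + dy, (theta + dth) % 360)
                  for dx, dy in moves for dth in (-45, 0, 45)]
      return [(x + dx, y + dy) for dx, dy in moves]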

The results of this approach are: superior success rates and times vs. existing methods

The paper presents the following theoretical principles:

  1. State Lattice Controller (SLC)
  2. Motion Primitives
  3. Planning with Adaptive Dimensionality (PAD)


The principles are explained (choose: well/fairly/poorly): fairly

For example (fill-in-the-blank e.g. the equations, graphs, figures),: Table 1 shows the (choose: correct/questionable) application of the principles: correct, but many of the figures are not very educational or interesting

From the principles and results, the paper concludes:

  1. Existing methods of localization for UGV-UAV teams are lacking because of slow processing times and poor assumptions made by planners
  2. PAD allows for much faster processing times


Blake liked this paper because: (fill-in-the-blank with at least 3 reasons if possible).

I disliked this paper because: the authors didn't provide a toolkit with their algo's ;)
I would have liked to see the same work done without assuming any prior knowledge of the map

Three things I learned from this paper were:

  1. state-based planning
  2. planning with adaptive dimensionality
  3. controlling with motion primitives

Time reading and annotating: ~ 2 hours


5. A Tutorial on Visual Servo Control

A Tutorial on Visual Servo Control
Publisher: IEEE Transactions on Robotics and Automation 1996
Keywords (platform, field of research, algorithm or approach/methodologies, more details ): Jacobian matrices;correlation methods;feature extraction;feedback;image representation;motion control;optical tracking;robot dynamics;robot vision;servomechanisms;computer vision;coordinate transformations;correlation-based methods;feedback;image Jacobian;image feature tracking;image formation process;image-based systems;position-based system;robotic manipulators;tutorial;velocity representation;visual servo control;Control systems;Costs;Manipulators;Manufacturing;Robot control;Robot sensing systems;Robot vision systems;Servosystems;Tutorial;Visual servoing
Bibtex:
@ARTICLE{538972, author={S. Hutchinson and G. D. Hager and P. I. Corke}, journal={IEEE Transactions on Robotics and Automation}, title={A tutorial on visual servo control}, year={1996}, volume={12}, number={5}, pages={651-670}, keywords={Jacobian matrices;correlation methods;feature extraction;feedback;image representation;motion control;optical tracking;robot dynamics;robot vision;servomechanisms;computer vision;coordinate transformations;correlation-based methods;feedback;image Jacobian;image feature tracking;image formation process;image-based systems;position-based system;robotic manipulators;tutorial;velocity representation;visual servo control;Control systems;Costs;Manipulators;Manufacturing;Robot control;Robot sensing systems;Robot vision systems;Servosystems;Tutorial;Visual servoing}, doi={10.1109/70.538972}, ISSN={1042-296X}, month={Oct},}


This paper describes: A taxonomy of and instructions for visual servoing (VS)

The authors present (simulations, experiments, theory): A text tutorial containing the equations that govern the various approaches to VS

From this presentation, the paper concludes that: A disclaimer that the paper presents a fundamental introduction, and readers should follow up with cited papers relevant to the VS they seek to implement.

From the state-of-the-art, the paper identifies challenges in: processing times and environments with low visibility

The paper addresses these challenges by: image-based rather than position-based VS; better camera positioning
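
As a concrete example of the image-based approach, here is a minimal sketch of the classical IBVS control law, v = -lambda * pinv(L) * (s - s*), using the standard point-feature interaction matrix; the normalized feature coordinates and the depth estimates Z are assumed inputs, and this illustrates the tutorial's equations rather than reproducing its full controller.

  # Minimal IBVS sketch: camera velocity from image-feature error using the
  # standard point-feature interaction matrix. Depths Z are assumed known or
  # estimated; this is an illustration, not a full controller.
  import numpy as np

  def interaction_matrix(x, y, Z):
      """Interaction matrix rows for one normalized image point at depth Z."""
      return np.array([
          [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
          [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
      ])

  def ibvs_velocity(features, desired, depths, gain=0.5):
      """Camera twist (vx, vy, vz, wx, wy, wz) that reduces the feature error."""
      L = np.vstack([interaction_matrix(x, y, Z)
                     for (x, y), Z in zip(features, depths)])
      error = (np.asarray(features) - np.asarray(desired)).reshape(-1)
      return -gain * np.linalg.pinv(L) @ error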

The results of this approach are: demonstrated across the 80+ cited works in which these techniques were implemented

The paper presents the following theoretical principles:

  1. End point open vs. closed loop control
  2. Position vs. image based control
  3. Dynamic look-and-move vs. direct visual servo


The principles are explained (choose: well/fairly/poorly): well

For example (fill-in-the-blank e.g. the equations, graphs, figures),: Figs 3-6 show the (choose: correct/questionable) application of the principles: correct and very clear comparison of various VS architectures

From the principles and results, the paper concludes:

  1. If the system is tracking movement of a target with known movement in Cartesian coordinates, position-based tracking makes sense
  2. Otherwise, image-based tracking works better because it can be done independently of errors in robot kinematics or camera calibration


Blake liked this paper because:

  1. It described relevant image convolution techniques
  2. It gave a concise but broad overview of the VS field
  3. It helped me refine the VS architecture most suited to UGV-UAV docking


I disliked this paper because: I had to stop very frequently to look up computer vision or controls vocabulary;
I would have liked to see more intermediary steps articulated

Three things I learned from this paper were:

  1. Taxonomy of VS methods
  2. Several specific image matrix operations that are essential to VS
  3. Image-based VS tends to be more accurate, especially for the types of applications I will be implementing


Time reading and annotating: ~ 5 hours


6. A visual servoing docking approach for marsupial robotic system

A visual servoing docking approach for marsupial robotic system
Publisher: 33rd Chinese Control Conference (CCC) 2014
Keywords (platform, field of research, algorithm or approach/methodologies, more details ): cameras;feedback;mobile robots;multi-robot systems;robot vision;virtual machines;visual servoing;adistance state;aiming state;angular camera;around state;atangent state;blind state;child robot;decision-making unit;docking heading orientation;docking motion guide;image feature feedback;image-infor virtual machine;marsupial robotic system;parking state;pose refresher;rotational DOF;simulation platform design;task modelling;transform conditions;vertical V-shaped visual benchmark design;visual servoing docking approach;Benchmark testing;Cameras;Decision making;Robot kinematics;Robot vision systems;Virtual machining;Marsupial Robotic system;docking;image-infor virtual machine;visual servoing
Bibtex:
@INPROCEEDINGS{6896395, author={P. Zhao and Z. Cao and L. Xu and C. Zhou and D. Xu}, booktitle={Proceedings of the 33rd Chinese Control Conference}, title={A visual servoing docking approach for marsupial robotic system}, year={2014}, pages={8321-8325}, keywords={cameras;feedback;mobile robots;multi-robot systems;robot vision;virtual machines;visual servoing;adistance state;aiming state;angular camera;around state;atangent state;blind state;child robot;decision-making unit;docking heading orientation;docking motion guide;image feature feedback;image-infor virtual machine;marsupial robotic system;parking state;pose refresher;rotational DOF;simulation platform design;task modelling;transform conditions;vertical V-shaped visual benchmark design;visual servoing docking approach;Benchmark testing;Cameras;Decision making;Robot kinematics;Robot vision systems;Virtual machining;Marsupial Robotic system;docking;image-infor virtual machine;visual servoing}, doi={10.1109/ChiCC.2014.6896395}, month={July},}


This paper describes: A docking method for a child robot loading into a compartment in a mother robot

The authors present (simulations, experiments, theory): task model for docking, algo, and simulated results

From this presentation, the paper concludes that: the simulated results confirm the validity of the docking approach and its anti-interrupt ability

From the state-of-the-art, the paper identifies challenges in: Controls for and implementation of marsupial robots

The paper addresses these challenges by: applying standard color and geometry triggered VS controls to docking of marsupial robots
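
A hedged sketch of the state-switching logic for docking; the state names follow the paper's keywords (blind, adistance, aiming, atangent, parking), but the measurements and thresholds are illustrative assumptions, not the authors' decision-making unit.

  # Hedged sketch of state-based visual-servoing docking: switch behaviors
  # based on what the child robot sees of the V-shaped benchmark. State names
  # follow the paper's keywords; thresholds are illustrative assumptions.
  def next_state(benchmark_visible, distance, heading_error,
                 far=2.0, near=0.3, aligned=0.05):
      """Return the next docking state from simple visual measurements."""
      if not benchmark_visible:
          return "blind"        # rotate / search until the benchmark is seen
      if distance > far:
          return "adistance"    # approach from far away
      if abs(heading_error) > aligned:
          return "aiming"       # turn to face the docking bay
      if distance > near:
          return "atangent"     # close in while holding alignment
      return "parking"          # final slow insertion into the compartment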

The results of this approach are: quick docking times and robust performance, even with unexpected perturbations of the robots' motion/rotation introduced by the researchers

The paper presents the following theoretical principles:

  1. Marsupial robotics
  2. Visual Servoing
  3. Decision-making in Different States


The principles are explained (choose: well/fairly/poorly): well

For example (fill-in-the-blank e.g. the equations, graphs, figures),: Fig. 4 shows the (choose: correct/questionable) application of the principles: correct application of the decision-making algo

From the principles and results, the paper concludes:

  1. Future work will focus on retrieval of child robots


Blake liked this paper because:

  1. Almost identical VS docking strategy to my method for UGV docking in a box suspended from a UAV
  2. They showed their rotation and transformation matrices which will be a good check for my future work if I pursue the box method


I disliked this paper because: although I appreciated the walkthrough of their project (I might compare my results at various stages), I am not sure they pushed any boundaries of knowledge.
I would have liked to see more discussion of the simulation

Three things I learned from this paper were:

  1. Vocabulary for the stages in docking
  2. “V” approach to docking with VS
  3. Pose refreshment in simulation (could apply similar equations to VR)


Time reading and annotating: ~ 15 min


7. Multi-rotor drone tutorial: systems, mechanics, control and state estimation

Multi-rotor drone tutorial: systems, mechanics, control and state estimation
Publisher: Intelligent Service Robotics 2017
Keywords (platform, field of research, algorithm or approach/methodologies, more details ): Components · Control · Modeling · Multi-rotor drone · Sensor fusion
Bibtex:
@Article{Yang2017, author="Yang, Hyunsoo and Lee, Yongseok and Jeon, Sang-Yun and Lee, Dongjun", title="Multi-rotor drone tutorial: systems, mechanics, control and state estimation", journal="Intelligent Service Robotics", year="2017", volume="10", number="2", pages="79--93", issn="1861-2784", doi="10.1007/s11370-017-0224-y", url="http://dx.doi.org/10.1007/s11370-017-0224-y" }


This paper describes:

The authors present (simulations, experiments, theory):

From this presentation, the paper concludes that:

From the state-of-the-art, the paper identifies challenges in:

The paper addresses these challenges by:

The results of this approach are:

The paper presents the following theoretical principles:

  1. Ordered List Item


The principles are explained (choose: well/fairly/poorly):

For example (fill-in-the-blank e.g. the equations, graphs, figures),: show the (choose: correct/questionable) application of the principles:

From the principles and results, the paper concludes:


Blake liked this paper because:

  1. 1
  2. 2
  3. 3


I disliked this paper because: ;
I would have liked to see

Three things I learned from this paper were:

  1. 1
  2. 2
  3. 3


Time reading and annotating: ~ hours




