====== ugv-uav_ab ======

Revisions: 2017/04/21 08:33 and 2017/05/04 14:10 (current), blakehament

==== 1. Air-Ground Localization and Map Augmentation Using Monocular Dense Reconstruction ====

[[http://ieeexplore.ieee.org/document/6696924/|Air-Ground Localization and Map Augmentation Using Monocular Dense Reconstruction]]\\

Publisher: IROS 2013\\
  - The key problem with UGV-UAV cooperation is data registration and fusion
  - Time constraints are salient in designing the algorithms
  - Use prior knowledge like motion projections and geometric properties to improve data fusion \\

**Blake liked this paper because:** 
  
  
**[[http://ieeexplore.ieee.org/document/7487146/|PLANNING FOR A GROUND-AIR ROBOTIC SYSTEM WITH COLLABORATIVE LOCALIZATION]]**\\ 
Publisher: ICRA 2016\\ 
Keywords (platform, field of research, algorithm or approach/methodologies, more details ): autonomous aerial vehicles;PAD planner;SLC planner;UGV-UAV team operating indoors;collaborative localization;controller-based motion primitives;ground-air robotic system;high-quality localization information;payload capacity;planning adaptive dimensionality;robust navigation capabilities;state lattice planner;unmanned aerial vehicles;unmanned ground vehicles;visual features;Collaboration;Lattices;Planning;Robot sensing systems;Trajectory \\
  
  
**This paper describes:** A state lattice planner with controller-based motion primitives (SLC) that uses planning with adaptive dimensionality (PAD)
\\ 

**The authors present (simulations, experiments, theory):** experiments with the SLC planner using PAD
\\

**From this presentation, the paper concludes
that:** success rates and trial statistics showing that the SLC-PAD approach has a high success rate and quick processing times\\

 
**From the state-of-the-art, the paper identifies challenges in:** real-time planning for heterogeneous robot teams in which the robots do not need to travel in formation
\\

**The paper addresses these challenges by:** proposing PAD, so that only the relevant dimensions at a given point are considered. For example, when moving a piano inside from the street, orientation planning is irrelevant in the driveway but essential once you reach the doorway.
\\
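The piano example can be sketched in code. The toy planner below is my own illustration of the adaptive-dimensionality idea under stated assumptions (a 2-D grid, 4 discrete headings, unit move cost, a hand-marked set of "narrow" cells where orientation matters); it is not the authors' SLC planner, and all names here are mine.

```python
import heapq
import itertools

# Sketch of planning with adaptive dimensionality (PAD): plan in (x, y) in
# open space and expand to (x, y, heading) only inside "narrow" cells where
# orientation matters (the doorway in the piano example). Grid setup, costs,
# and function names are illustrative, not from the paper.

def pad_astar(grid, narrow, start, goal):
    """grid: set of free (x, y) cells; narrow: cells that need a heading."""

    def states(cell):
        # Full dimensionality only where it matters (4 discrete headings).
        return [(cell, h) for h in range(4)] if cell in narrow else [(cell, None)]

    def h_cost(cell):
        # Manhattan-distance heuristic (admissible for unit move costs).
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = itertools.count()  # tiebreaker so the heap never compares states
    frontier = [(h_cost(start), 0, next(tie), s) for s in states(start)]
    heapq.heapify(frontier)
    seen = set()
    while frontier:
        _, g, _, (cell, head) = heapq.heappop(frontier)
        if cell == goal:
            return g  # cost of the cheapest path found
        if (cell, head) in seen:
            continue
        seen.add((cell, head))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dx, cell[1] + dy)
            if nxt not in grid:
                continue
            for s in states(nxt):
                # Changing heading between narrow cells costs extra.
                turn = 0 if head is None or s[1] is None else abs(head - s[1])
                g2 = g + 1 + turn
                heapq.heappush(frontier, (g2 + h_cost(nxt), g2, next(tie), s))
    return None  # goal unreachable
```

The search space stays small in open cells (one state per cell) and only grows where the heading dimension is actually relevant, which is the speedup PAD targets.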
 
**The results of this approach are:** superior success rates and times vs. existing methods \\

 
**The paper presents the following theoretical principles:** 
  - State Lattice Controller (SLC)
  - Motion Primitives
  - Planning with Adaptive Dimensionality (PAD)
\\
**The principles are explained (choose: well/fairly/poorly):** fairly \\

**For example (fill-in-the-blank e.g. the equations, graphs, figures),:** Table 1 **show the (choose: correct/questionable) application of the principles:** correct, but many of the figures are not very educational or interesting \\
   
**From the principles and results, the paper concludes:** 
  - Existing methods of localization for UGV-UAV teams are lacking because of slow processing times and poor assumptions made by planners
  - PAD allows for much faster processing times
\\  
 
**Blake liked this paper because:** (fill-in-the-blank with at least 3 reasons if possible).\\

**I disliked this paper because:** the authors didn't provide a toolkit with their algorithms ;)  \\
**I would have liked to see** the same work done without assuming any prior knowledge of the map \\

**Three things I learned from this paper were:** \\
  - state-based planning
  - planning with adaptive dimensionality
  - controlling with motion primitives

**Time reading and annotating:** ~ 2 hours


---
\\
==== 5. A Tutorial on Visual Servo Control ====


**[[http://www.cs.jhu.edu/~hager/Public/Publications/TutorialTRA96.pdf|A Tutorial on Visual Servo Control]]**\\ 
**Publisher:** IEEE Transactions on Robotics and Automation, 1996 \\ 
**Keywords (platform, field of research, algorithm or approach/methodologies, more details ):** Jacobian matrices;correlation methods;feature extraction;feedback;image representation;motion control;optical tracking;robot dynamics;robot vision;servomechanisms;computer vision;coordinate transformations;correlation-based methods;feedback;image Jacobian;image feature tracking;image formation process;image-based systems;position-based system;robotic manipulators;tutorial;velocity representation;visual servo control;Control systems;Costs;Manipulators;Manufacturing;Robot control;Robot sensing systems;Robot vision systems;Servosystems;Tutorial;Visual servoing \\
**Bibtex:**\\
@ARTICLE{538972, 
author={S. Hutchinson and G. D. Hager and P. I. Corke}, 
journal={IEEE Transactions on Robotics and Automation}, 
title={A tutorial on visual servo control}, 
year={1996}, 
volume={12}, 
number={5}, 
pages={651-670}, 
keywords={Jacobian matrices;correlation methods;feature extraction;feedback;image representation;motion control;optical tracking;robot dynamics;robot vision;servomechanisms;computer vision;coordinate transformations;correlation-based methods;feedback;image Jacobian;image feature tracking;image formation process;image-based systems;position-based system;robotic manipulators;tutorial;velocity representation;visual servo control;Control systems;Costs;Manipulators;Manufacturing;Robot control;Robot sensing systems;Robot vision systems;Servosystems;Tutorial;Visual servoing}, 
doi={10.1109/70.538972}, 
ISSN={1042-296X}, 
month={Oct},}


\\
**This paper describes:** A taxonomy of and instructions for visual servoing (VS)\\  

**The authors present (simulations, experiments, theory):** A text tutorial containing the equations that govern the various approaches to VS\\

**From this presentation, the paper concludes
that:** A disclaimer that the paper presents a fundamental introduction, and readers should follow up with cited papers relevant to the VS they seek to implement.\\

 
**From the state-of-the-art, the paper identifies challenges in:** processing times and environments with low visibility\\

**The paper addresses these challenges by:** image-based rather than position-based VS; better camera positioning\\

**The results of this approach are:** referenced in the 80+ citations in which they were implemented\\

 
**The paper presents the following theoretical principles:** 
  - End-point open- vs. closed-loop control
  - Position- vs. image-based control
  - Dynamic look-and-move vs. direct visual servo

\\
**The principles are explained (choose: well/fairly/poorly):** well \\

**For example (fill-in-the-blank e.g. the equations, graphs, figures),:** Figs. 3-6 **show the (choose: correct/questionable) application of the principles:** correct and very clear comparison of various VS architectures \\
   
**From the principles and results, the paper concludes:** 
  - If the system is tracking a target with known movement in Cartesian coordinates, position-based tracking makes sense
  - Otherwise, image-based tracking works better because it can be done independently of errors in robot kinematics or camera calibration

\\ 
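The image-based control law from the tutorial can be written compactly. The sketch below uses the standard interaction matrix (image Jacobian) of a single point feature and the exponential-decay law v = -λ L⁺ (s - s*); the single-point setup, gain value, and function names are my own simplifications of the tutorial's general formulation.

```python
import numpy as np

# Image-based visual servoing (IBVS) sketch: choose a camera velocity v so
# the image-feature error (s - s*) decays exponentially. L is the 2x6
# interaction matrix of one point feature at normalized image coordinates
# (x, y) with depth Z; values below are illustrative.

def interaction_matrix(x, y, Z):
    """2x6 image Jacobian of a point feature (standard form)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, Z, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) driving s toward s_star."""
    L = interaction_matrix(s[0], s[1], Z)
    error = np.asarray(s, dtype=float) - np.asarray(s_star, dtype=float)
    # Pseudo-inverse because L is wide (2x6): the least-norm camera motion.
    return -gain * np.linalg.pinv(L) @ error
```

This is why image-based VS tolerates calibration error: the loop is closed directly on image measurements, so the error being regulated is the one the camera actually sees.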
 
**Blake liked this paper because:** 
  - It described relevant image convolution techniques
  - It gave a concise but broad overview of the VS field
  - It helped me refine the VS architecture most suited to UGV-UAV docking
\\
**I disliked this paper because:** I had to stop very frequently to look up computer vision or controls vocabulary \\
**I would have liked to see** more intermediary steps articulated \\

**Three things I learned from this paper were:** \\
  - Taxonomy of VS methods
  - Several specific image matrix operations that are essential to VS
  - Image-based VS tends to be more accurate, especially for the types of applications I will be implementing

\\
**Time reading and annotating:** ~ 5 hours



---
\\

==== 6. A visual servoing docking approach for marsupial robotic system ====


**[[http://ieeexplore.ieee.org/document/6896395/|A visual servoing docking approach for marsupial robotic system]]**\\ 
**Publisher:** 33rd Chinese Control Conference (IEEE), 2014 \\ 
**Keywords (platform, field of research, algorithm or approach/methodologies, more details ):** cameras;feedback;mobile robots;multi-robot systems;robot vision;virtual machines;visual servoing;adistance state;aiming state;angular camera;around state;atangent state;blind state;child robot;decision-making unit;docking heading orientation;docking motion guide;image feature feedback;image-infor virtual machine;marsupial robotic system;parking state;pose refresher;rotational DOF;simulation platform design;task modelling;transform conditions;vertical V-shaped visual benchmark design;visual servoing docking approach;Benchmark testing;Cameras;Decision making;Robot kinematics;Robot vision systems;Virtual machining;Marsupial Robotic system;docking;image-infor virtual machine;visual servoing \\
**Bibtex:**\\
@INPROCEEDINGS{6896395, 
author={P. Zhao and Z. Cao and L. Xu and C. Zhou and D. Xu}, 
booktitle={Proceedings of the 33rd Chinese Control Conference}, 
title={A visual servoing docking approach for marsupial robotic system}, 
year={2014}, 
pages={8321-8325}, 
keywords={cameras;feedback;mobile robots;multi-robot systems;robot vision;virtual machines;visual servoing;adistance state;aiming state;angular camera;around state;atangent state;blind state;child robot;decision-making unit;docking heading orientation;docking motion guide;image feature feedback;image-infor virtual machine;marsupial robotic system;parking state;pose refresher;rotational DOF;simulation platform design;task modelling;transform conditions;vertical V-shaped visual benchmark design;visual servoing docking approach;Benchmark testing;Cameras;Decision making;Robot kinematics;Robot vision systems;Virtual machining;Marsupial Robotic system;docking;image-infor virtual machine;visual servoing}, 
doi={10.1109/ChiCC.2014.6896395}, 
month={July},}


\\
**This paper describes:** A docking method for a child robot loading into a compartment in a mother robot\\  

**The authors present (simulations, experiments, theory):** task model for docking, algorithm, and simulated results \\

**From this presentation, the paper concludes
that:** the simulated results confirm the validity of the docking approach and its anti-interrupt ability\\

 
**From the state-of-the-art, the paper identifies challenges in:** Controls for and implementation of marsupial robots\\

**The paper addresses these challenges by:** applying standard color- and geometry-triggered VS controls to docking of marsupial robots\\

**The results of this approach are:** Quick docking times and robust performance even with unexpected perturbations of the robots' motion/rotation by the researchers\\

 
**The paper presents the following theoretical principles:** 
  - Marsupial robotics
  - Visual Servoing
  - Decision-making in Different States
\\
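The state-based decision-making can be sketched as a small state machine. The state names below follow the paper's keywords (aiming, adistance, parking, blind); the transition conditions, thresholds, and observation fields are my guesses, and I have dropped the around/atangent states to keep the sketch short.

```python
# Minimal sketch of docking decision-making in the spirit of the paper:
# the child robot switches behaviors based on what its camera sees of the
# V-shaped visual benchmark on the mother robot. Transition rules and the
# observation dictionary are illustrative assumptions, not the authors'.

def next_state(state, obs):
    """obs: dict with 'benchmark_visible', 'distance' (m), 'aligned'."""
    if not obs["benchmark_visible"]:
        return "blind"  # lost the benchmark: search / recover the target
    if state == "blind":
        return "aiming"  # benchmark reacquired: rotate toward it
    if state == "aiming":
        # Hold until the docking heading orientation is achieved.
        return "adistance" if obs["aligned"] else "aiming"
    if state == "adistance":
        # Close the distance, then hand off to fine parking.
        return "parking" if obs["distance"] < 0.2 else "adistance"
    if state == "parking":
        return "docked" if obs["distance"] < 0.02 else "parking"
    return state
```

Each state maps to one VS behavior, so the decision-making unit only has to pick the behavior; the servo loop inside each state does the actual motion.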
**The principles are explained (choose: well/fairly/poorly):** well \\

**For example (fill-in-the-blank e.g. the equations, graphs, figures),:** Fig. 4 **show the (choose: correct/questionable) application of the principles:** correct application of the decision-making algorithm \\
   
**From the principles and results, the paper concludes:** 
  - Future work will focus on retrieval of child robots

\\ 
 
**Blake liked this paper because:** 
  - Almost identical VS docking strategy to my method for UGV docking in a box suspended from a UAV
  - They showed their rotation and transformation matrices, which will be a good check for my future work if I pursue the box method

\\
**I disliked this paper because:** Although I appreciate the walkthrough of their project, since I might compare my results at various stages, I am not sure they pushed any borders of knowledge \\
**I would have liked to see** more discussion of the simulation \\

**Three things I learned from this paper were:** \\
  - Vocabulary for the stages in docking
  - "V" approach to docking with VS
  - Pose refreshment in simulation; could apply similar equations to VR

\\
**Time reading and annotating:** ~ 15 min


---
\\
==== 7. Multi-rotor drone tutorial: systems, mechanics, control and state estimation ====


**[[https://link.springer.com/article/10.1007/s11370-017-0224-y|Multi-rotor drone tutorial: systems, mechanics, control and state estimation]]**\\ 
**Publisher:** Intelligent Service Robotics 2017 \\ 
**Keywords (platform, field of research, algorithm or approach/methodologies, more details ):** Components · Control · Modeling · Multi-rotor drone · Sensor fusion \\
**Bibtex:**\\
@Article{Yang2017,
author="Yang, Hyunsoo
and Lee, Yongseok
and Jeon, Sang-Yun
and Lee, Dongjun",
title="Multi-rotor drone tutorial: systems, mechanics, control and state estimation",
journal="Intelligent Service Robotics",
year="2017",
volume="10",
number="2",
pages="79--93",
issn="1861-2784",
doi="10.1007/s11370-017-0224-y",
url="http://dx.doi.org/10.1007/s11370-017-0224-y"
}
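The tutorial's subject, multi-rotor mechanics and control, can be illustrated with the standard quadrotor motor-mixing relation found in most multi-rotor texts. This is textbook material, not taken from the paper; the "+" rotor layout, arm length, and drag coefficient below are illustrative assumptions.

```python
import numpy as np

# Standard quadrotor motor mixing for a "+" configuration: total thrust T
# and body torques (tau_x, tau_y, tau_z) map linearly to the four rotor
# thrusts f1..f4 (front, right, back, left). l is the arm length and c
# relates each rotor's drag torque to its thrust; both values are made up.

def mix(T, tau_x, tau_y, tau_z, l=0.2, c=0.01):
    """Solve the 4x4 mixing matrix for rotor thrusts [f1, f2, f3, f4]."""
    M = np.array([
        [1.0, 1.0, 1.0, 1.0],    # total thrust
        [0.0, -l, 0.0, l],       # roll torque: left minus right rotor
        [l, 0.0, -l, 0.0],       # pitch torque: front minus back rotor
        [-c, c, -c, c],          # yaw torque: alternating spin directions
    ])
    return np.linalg.solve(M, np.array([T, tau_x, tau_y, tau_z]))
```

At hover the four thrusts come out equal; a commanded roll torque raises the left rotor and lowers the right one, which is the basic mechanism the tutorial's control chapters build on.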

\\
**This paper describes:** \\  

**The authors present (simulations, experiments, theory):** \\

**From this presentation, the paper concludes
that:** \\

 
**From the state-of-the-art, the paper identifies challenges in:** \\

**The paper addresses these challenges by:** \\

**The results of this approach are:** \\

 
  
    
**The paper presents the following theoretical principles:** 
  - Ordered List Item
  -   
\\
**The principles are explained (choose: well/fairly/poorly):**  \\

**For example (fill-in-the-blank e.g. the equations, graphs, figures),:**  **show the (choose: correct/questionable) application of the principles:** \\
   
**From the principles and results, the paper concludes:** 
  - 

\\ 
 
**Blake liked this paper because:** 
  - 1
  - 2
  - 3
\\
**I disliked this paper because:** ; \\
**I would have liked to see**  \\

**Three things I learned from this paper were:** \\
  - 1
  - 2
  - 3

\\
**Time reading and annotating:** ~  hours


---
\\
---
\\
==== . ====
  

**[[|]]**\\ 
**Publisher:** \\ 
**Keywords (platform, field of research, algorithm or approach/methodologies, more details ):**  \\
**Bibtex:**\\



\\
**This paper describes:** \\  

**The authors present (simulations, experiments, theory):** \\

**From this presentation, the paper concludes
that:** \\

 
**From the state-of-the-art, the paper identifies challenges in:** \\

**The paper addresses these challenges by:** \\

**The results of this approach are:** \\

 
  
    
**The paper presents the following theoretical principles:** 
  - Ordered List Item
  -   
\\
**The principles are explained (choose: well/fairly/poorly):**  \\

**For example (fill-in-the-blank e.g. the equations, graphs, figures),:**  **show the (choose: correct/questionable) application of the principles:** \\
   
**From the principles and results, the paper concludes:** 
  - 

\\ 
 
**Blake liked this paper because:** 
  - 1
  - 2
  - 3
\\
**I disliked this paper because:** ; \\
**I would have liked to see**  \\

**Three things I learned from this paper were:** \\
  - 1
  - 2
  - 3

\\
**Time reading and annotating:** ~  hours


---
\\
---
\\
==== . ====


**[[|]]**\\ 
**Publisher:** \\ 
**Keywords (platform, field of research, algorithm or approach/methodologies, more details ):**  \\
**Bibtex:**\\



\\
**This paper describes:** \\  

**The authors present (simulations, experiments, theory):** \\

**From this presentation, the paper concludes
that:** \\

 
**From the state-of-the-art, the paper identifies challenges in:** \\

**The paper addresses these challenges by:** \\

**The results of this approach are:** \\

 
**The paper presents the following theoretical principles:** 
  - Ordered List Item
  -   
\\
**The principles are explained (choose: well/fairly/poorly):**  \\

**For example (fill-in-the-blank e.g. the equations, graphs, figures),:**  **show the (choose: correct/questionable) application of the principles:** \\
   
**From the principles and results, the paper concludes:** 
  - 

\\ 
 
**Blake liked this paper because:** 
  - 1
  - 2
  - 3
\\
**I disliked this paper because:** ; \\
**I would have liked to see**  \\

**Three things I learned from this paper were:** \\
  - 1
  - 2
  - 3

\\
**Time reading and annotating:** ~  hours


---
==== . ====


**[[|]]**\\ 
**Publisher:** \\ 
**Keywords (platform, field of research, algorithm or approach/methodologies, more details ):**  \\
**Bibtex:**\\



\\
**This paper describes:** \\  

**The authors present (simulations, experiments, theory):** \\

**From this presentation, the paper concludes
that:** \\

 
**From the state-of-the-art, the paper identifies challenges in:** \\

**The paper addresses these challenges by:** \\

**The results of this approach are:** \\

 
**The paper presents the following theoretical principles:** 
  - Ordered List Item
  -   
\\
**The principles are explained (choose: well/fairly/poorly):**  \\

**For example (fill-in-the-blank e.g. the equations, graphs, figures),:**  **show the (choose: correct/questionable) application of the principles:** \\
   
**From the principles and results, the paper concludes:** 
  - 

\\ 
 
**Blake liked this paper because:** 
  - 1
  - 2
  - 3
\\
**I disliked this paper because:** ; \\
**I would have liked to see**  \\

**Three things I learned from this paper were:** \\
  - 1
  - 2
  - 3

\\
**Time reading and annotating:** ~  hours


---
\\
---
\\
ugv-uav_ab.1492788828.txt.gz · Last modified: by blakehament