reu_journal7 [2019/07/23 12:51] (current) faustovega
==== Photo of the Week ====
  
{{ :vega:img_0919.jpg?upright }}
  
This is a picture of another robot they have in the lab: the Nao, which is similar to the Darwin OP that we have in our lab. In my opinion, the Darwin is better because of its MX-28 Dynamixel actuators. I haven't seen any demos with the Nao because none of them are currently working; they usually just turn it on during demos, and a default program makes it keep eye contact with anyone in range. Something I have noticed is that during the summer, the Socially Assistive Robotics Group (SARG) lab has tours every week, usually for orientation students or summer camp students. Yet they do not have any demos prepared for the students, which leaves half of them uninterested during the tour. I believe that DASL has a good set of demos that can impress students who want to get into robotics.
  
==== Weekly Progress/ Next Week Plans ====
  
This week I created a program that places a marker in RViz at the distance of detected picture frames. The YOLO object detection model was trained by my PhD mentor to track people and picture frames; this will let the robot know the distance of the frames in an art gallery scenario. My mentor also wanted to run a context classifier on the robot, as he trained a model to distinguish between hallways and an art gallery. However, I was having problems installing Keras (the machine learning library they use here) on my computer. Something was wrong with the pip install, and I did not want to upgrade pip because I heard it can break existing programs. I also looked into the robot's navigation stack and how to set it up. I watched several tutorial videos to get introduced to all the nodes (amcl, move_base, map_server, etc.) as well as all the maps associated with navigation. I configured the launch files and got a default map and a URDF model of the robot to show up in RViz. The step that is left is to write the transforms between the robot, the laser, and the map. Another goal is to create a new map using the gmapping package.
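The launch-file setup described above generally looks something like the sketch below. The package name, map path, and transform offsets are placeholders, not the Pioneer's real values; the last node is the kind of static transform (base to laser) that still needs to be written.

```xml
<launch>
  <!-- Minimal navigation-stack sketch: map, localization, planning. -->
  <node pkg="map_server" type="map_server" name="map_server"
        args="$(find my_nav_pkg)/maps/default_map.yaml"/>
  <node pkg="amcl" type="amcl" name="amcl"/>
  <node pkg="move_base" type="move_base" name="move_base"/>
  <!-- Static transform from robot base to laser:
       x y z yaw pitch roll parent_frame child_frame period_ms -->
  <node pkg="tf" type="static_transform_publisher" name="base_to_laser"
        args="0.2 0 0.1 0 0 0 base_link laser 100"/>
</launch>
```

For building a new map instead of loading one, gmapping would replace map_server and amcl during the mapping run (e.g. `rosrun gmapping slam_gmapping`), with the finished map saved via `map_saver`.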
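The marker placement above can be sketched roughly as follows. This is my own illustration, not the actual lab code: it assumes a simple linear mapping from the bounding-box center column to a bearing angle, and the field of view, image width, and function names are all made up for the example. In a real node, the resulting (x, y) would be filled into a `visualization_msgs/Marker` and published for RViz.

```python
import math

# Sketch (assumed geometry): turn a YOLO bounding-box center into a
# bearing, then combine it with the laser range at that bearing to get
# a marker position in the robot frame.

def pixel_to_bearing(cx, image_width, horizontal_fov_rad):
    """Bearing (rad) of a bounding-box center column; 0 = straight ahead.
    Pixels left of center give a positive bearing (ROS convention: +yaw is left)."""
    return ((image_width / 2.0) - cx) / image_width * horizontal_fov_rad

def marker_position(bearing, rng):
    """(x, y) of the detection in the robot frame: x forward, y left."""
    return (rng * math.cos(bearing), rng * math.sin(bearing))

# Example: a frame detected at the image center, 2 m away, sits straight ahead.
b = pixel_to_bearing(320, 640, math.radians(60))
print(marker_position(b, 2.0))  # (2.0, 0.0)
```

The linear pixel-to-angle mapping is only an approximation of the true camera model; a calibrated pinhole model would use the camera intrinsics instead.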
https://github.com/simingl/bridgeinspection
  
I also learned that GitHub is a huge resource for code documentation, and we should start using it in the lab. It will reduce the problem of losing code if a robot ends up failing, and it is a good source of documentation for anyone who wants to build their own project on top of existing code. It is also useful for letting other people help debug and collaborate on a project.
  
  
{{ :vega:img_0920.jpg?upright }}
==== What I Learned about Myself ====
  
==== New Person That I Met ====
  
This week I met Will, another REU student. He is studying mechatronics at the University of North Carolina Asheville. He told me about the robotic arm they are working on that plays Jenga. It sounded like an interesting project that incorporates ROS, machine learning, and computer vision in one.