Last week, I encountered an error where the desktop GUI would crash after receiving only one command from the HoloLens. I was able to fix it after some debugging; as I suspected, the problem originated in the asynchronous implementation. Currently, the HoloLens can send simple commands such as sit, stand, and directional movement, and the application is also responsive to voice commands.
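One common cause of this kind of "works once, then crashes" behavior is handling each incoming command on an ad-hoc task instead of a long-lived loop. As a minimal sketch (the command names and handler are hypothetical, not the actual application code), a queue-backed worker keeps accepting commands for the lifetime of the connection:

```python
import asyncio

async def command_worker(queue: asyncio.Queue) -> list:
    """Process commands one at a time until a 'stop' sentinel arrives."""
    handled = []
    while True:
        cmd = await queue.get()
        if cmd == "stop":
            queue.task_done()
            break
        # A real handler would forward cmd to Spot here.
        handled.append(cmd)
        queue.task_done()
    return handled

async def main() -> list:
    queue = asyncio.Queue()
    worker = asyncio.create_task(command_worker(queue))
    # Simulate several commands arriving from the HoloLens.
    for cmd in ["sit", "stand", "move_forward", "stop"]:
        await queue.put(cmd)
    await queue.join()          # wait until every command is handled
    return await worker

if __name__ == "__main__":
    print(asyncio.run(main()))
```

Because the worker loop outlives any single command, a second command no longer arrives into a dead handler.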
After this first step, I started working on a point control system for Spot on the HoloLens. The main challenge is translating a point in the HoloLens' view into a location that Spot can understand: both the HoloLens and Spot have their own world coordinate systems, and to complicate the transformation further, Unity uses a left-handed coordinate system while Spot uses a right-handed one. The original plan was to use a fiducial marker as a common frame for the transformation. However, that requires both the HoloLens and Spot to detect the marker accurately, and after exploring several marker detection libraries and testing their performance, I concluded that this approach would be too unreliable.
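The handedness mismatch itself is a fixed axis permutation. As a sketch only: Unity is left-handed with Y up, while Spot (like most robotics stacks) is right-handed with Z up, so one common mapping sends Unity's forward axis to the robot's X. The exact mapping depends on how the frames are set up, so this is an assumption to verify against the real hardware:

```python
def unity_to_spot(p):
    """Map a point from Unity's left-handed, Y-up frame to a
    right-handed, Z-up frame. Axis assignment is an assumed
    convention (Unity z -> Spot x, Unity x -> Spot -y, Unity y -> Spot z)."""
    x, y, z = p
    return (z, -x, y)

def spot_to_unity(p):
    """Inverse of unity_to_spot."""
    x, y, z = p
    return (-y, z, x)
```

A quick sanity check is that the two functions invert each other: `spot_to_unity(unity_to_spot(p))` should return `p` for any point.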
The plan I settled on is to have an initialization phase in which the user places a virtual marker directly on Spot. The marker's location in the HoloLens' frame and Spot's current position in its world frame are then used to derive the transformation. I worked on the HoloLens application to let a user place a virtual coordinate frame on the mesh created by the HoloLens' spatial awareness feature, and I implemented some functions to transform between the Spot and HoloLens frames. The next step is to implement a function to receive robot state from Spot.
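The idea behind the initialization step can be written as a chain of homogeneous transforms. A minimal sketch, assuming the virtual marker is placed exactly at Spot's body frame and both poses are already expressed with the same handedness (function names are illustrative, not the application's actual API):

```python
import numpy as np

def make_transform(rotation: np.ndarray, translation) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def holo_to_spot_transform(T_spot_body: np.ndarray,
                           T_holo_marker: np.ndarray) -> np.ndarray:
    """Assuming the marker coincides with Spot's body at initialization,
    compose Spot's body pose with the inverse marker pose to get the
    HoloLens-world -> Spot-world transform."""
    return T_spot_body @ np.linalg.inv(T_holo_marker)

def transform_point(T: np.ndarray, p) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to a 3D point."""
    return (T @ np.append(p, 1.0))[:3]
```

With this, a point the user selects in the HoloLens' world frame can be mapped through `holo_to_spot_transform(...)` into a goal Spot understands, provided the marker placement during initialization is accurate.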
In addition, Matt asked me to give a presentation to a group of software engineers at Tesla. The topic focused on the Spot API, and specifically on the gRPC framework the API is built on. I explained the gRPC framework and gave examples of how I have used it in my development with Spot. I received good feedback on the presentation, and the other software developers/engineers are interested in what I am doing with Spot.
Finally, we received news from BD that they are planning to release their software update next week. We have been waiting for this update since the training session in Boston almost three months ago. BD promised to release an autonomous navigation feature in this update, along with an option to configure payload weight and location to improve walking stability. Autonomous navigation is paramount to truly realizing automation and achieving better efficiency and cost savings.
Next week, I will continue working on the HoloLens control system for Spot. I first need to test whether the HoloLens can receive a data stream from Spot over WebSocket. Afterward, I will experiment to determine whether the frame transformation I proposed can correctly map a coordinate between Spot's and the HoloLens' global frames. I am also looking forward to exploring BD's new update; hopefully, the autonomous navigation feature can achieve the objective we envisioned.
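Before the hardware test, the transformation experiment can be sanity-checked numerically: any valid rigid transform should map a point into the other frame and back with negligible error. The transform below is a made-up example (a rotation about Z plus a translation), not a real calibration:

```python
import numpy as np

# Hypothetical HoloLens-world -> Spot-world transform, for illustration only.
theta = np.pi / 4
T_spot_holo = np.eye(4)
T_spot_holo[:3, :3] = [[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]]
T_spot_holo[:3, 3] = [1.0, -2.0, 0.5]

def round_trip_error(p_holo: np.ndarray) -> float:
    """Map a HoloLens-frame point into Spot's frame and back,
    returning the distance from the original point."""
    p_spot = (T_spot_holo @ np.append(p_holo, 1.0))[:3]
    p_back = (np.linalg.inv(T_spot_holo) @ np.append(p_spot, 1.0))[:3]
    return float(np.linalg.norm(p_back - p_holo))
```

On the real system, the analogous check is to transform a point placed in the HoloLens view into Spot's frame, command Spot to it, and compare where the robot actually ends up.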