A Case Study Interview with the Queensland University of Technology


The Queensland University of Technology (QUT) Centre for Robotics (QCR) is a team of approximately 100 researchers, including 15 academics, around 10 post-docs, research engineers and 50 PhD students. Building on over a decade of strategic investment by QUT and the federal government, QCR conducts world-leading research in intelligent robotics, translating fundamental research into real-world outcomes that benefit industry and society. QCR experts are leaders in education, training and the development of talent in robotics and autonomous systems, and provide leadership in technology policy development and societal debate. QUT has been named Australia’s top robotics group by “The Australian” newspaper for three years running.

*All images courtesy of the Queensland University of Technology.

(L-R) Professor Michael Milford, Dr Tobias Fischer and Stephen Hausler with MiRo.

We interviewed Dr Tobias Fischer, a Lecturer (Assistant Professor) at QCR, to learn more about his experience of using MiRo in his research on robust teach-and-repeat navigation.

The MiRo was a great platform to experiment with – it’s small, robust, easy to program and has precisely the sensors we wanted.
— Dr Tobias Fischer

Tobias, can you tell us about you and your research team?

Dr Tobias Fischer (www.tobiasfischer.info)

I am Tobias Fischer, a Lecturer (Assistant Professor) in the Queensland University of Technology (QUT) Centre for Robotics (full biography below). The lead author of the work that I am describing is a former student of ours, Dominic Dall’Osto, now a Master’s student in Neural Systems and Computation at the University of Zurich and ETH Zurich. The work was also co-authored by Professor Michael Milford, ARC Laureate Fellow and Joint Director of the Centre. The paper, entitled “Fast and Robust Bio-inspired Teach and Repeat Navigation”, was presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2021.


Can you tell us about your project and what you were trying to learn?

Our research focuses on the areas of perception and localisation, or positioning. Having a robust and accurate understanding of where an autonomous mobile system like a robot is located in the world is crucial for a multitude of potential applications. However, we don’t yet have access to a general-purpose solution for robust navigation that works in the many different environments where a robot can possibly be deployed.

In this project, we focused on the specific task of route-repeating navigation, which is useful for many applications like repeated infrastructure inspection, item delivery, a robotic museum tour guide, an interplanetary rover that repeatedly collects samples, or the transport of goods in a warehouse.

In particular, we wanted to enable low-cost robotics by using low-cost sensors, namely a standard RGB camera and the simple motion sensors that almost all robots have built in (for example, wheel encoders and inertial measurement units). Furthermore, we wanted to run the algorithm on platforms with low-cost on-board computers, like MiRo and other smaller robotic platforms. Most other work in this area has used expensive sensors like lidar and has often required sophisticated computers.


Why did you use MiRo for your project?

The MiRo was a great platform to experiment with – it’s small, robust (it can drive into an obstacle without us having to worry too much about damaging anything), easy to program (with an excellent ROS interface), and has precisely the sensors we wanted. Its animal-like appearance also nicely demonstrates the motivation for using a bio-inspired approach. And it’s super cute – all students and visitors immediately like it!


How did you go about delivering your project?

To deliver our project, we took inspiration from the animal kingdom. Insects like ants navigate long routes effortlessly despite having relatively small brains and poor eyesight. Instead of using high-resolution images, we use low-resolution images (as low as 23×8 pixels for an indoor route and 57×22 pixels for outdoor routes) that are normalised to reduce the effect of lighting conditions. Using low-resolution images reduces both the memory needed to store the route and the time it takes to compare incoming images with those stored in memory.
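To make this concrete, here is a minimal sketch of what such a preprocessing step could look like. This is our own illustration rather than the paper’s exact pipeline: the OpenCV calls and the per-image zero-mean/unit-variance normalisation are assumptions, with only the 57×22 outdoor resolution taken from the interview.

```python
import cv2
import numpy as np

def preprocess(frame, size=(57, 22)):
    """Downsample and normalise a camera frame for route matching.

    `size` is (width, height) and matches the outdoor resolution
    quoted above. The zero-mean/unit-variance normalisation is an
    illustrative choice, not necessarily the paper's exact scheme.
    """
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(grey, size, interpolation=cv2.INTER_AREA)
    small = small.astype(np.float32)
    # Normalising each frame reduces sensitivity to lighting changes.
    return (small - small.mean()) / (small.std() + 1e-6)
```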

We store these images alongside the motion information when teaching the robot a new route. When repeating the route, we “replay” the stored motion. However, the motion data is unreliable, so we send correction signals based on the incoming images. We calculate the horizontal shift between the expected image (which is stored in the memory) and the incoming image using a cross-correlation metric. In addition, we estimate an “along-path” correction in case the incoming image was captured between two images in the memory. Importantly, our geometric formulation allows us to decouple the visual information (which arrives relatively infrequently) from the odometry information (which arrives at high frequency).
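To illustrate the horizontal correction, the sketch below searches over candidate column shifts and keeps the one with the highest cross-correlation score between the stored and incoming low-resolution frames. Again, this is a simplified, assumption-laden illustration rather than the paper’s implementation: the exact correlation metric, the search range, and the mapping from pixel offset to a steering command are all placeholders.

```python
import numpy as np

def horizontal_offset(stored, incoming, max_shift=10):
    """Estimate the column shift that best aligns the incoming image
    with the stored teach image.

    Both inputs are low-resolution, normalised frames (e.g. from a
    `preprocess` step like the sketch above). Returns the shift in
    pixels and its correlation score; turning the shift into a
    steering correction is robot-specific and omitted here.
    """
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.roll(incoming, shift, axis=1)
        # Compare only the columns that genuinely overlap after the
        # shift, so wrapped-around pixels do not bias the score.
        if shift >= 0:
            a, b = stored[:, shift:], shifted[:, shift:]
        else:
            a, b = stored[:, :shift], shifted[:, :shift]
        score = float((a * b).mean())
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift, best_score
```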


Did you experience any challenges? If so, what were they?

It took us quite some time to get all the parameters right!


What were the results? 

We deployed our method on the MiRo and Clearpath Jackal platforms and ran our teach-and-repeat framework over routes totalling more than 6,000 metres, both indoors and outdoors. We showed that our approach outperforms the state of the art in challenging situations, including noisy motion information and low-resolution images, while requiring significantly less processing time.

We are particularly proud of two findings. Firstly, we were able to repeat a route four months after it was recorded. We think this is impressive, as the season had changed and some structural conditions had changed significantly over that time. Secondly, our project was the first to demonstrate that a teach run can be transferred to a completely different robot! Despite a different camera with a shifted perspective and different motion characteristics between the robots, the Jackal successfully repeated a route taught to MiRo. This demonstrates our approach’s high robustness to viewpoint change and differing odometry.


What did you like about using MiRo? 

MiRo also came in handy while working from home, as Dom could take it with him and continue his experiments there. MiRo driving around was also a source of great entertainment for Dom’s pet cat.


What are the next steps or further developments from this project? 

One extension is to track the system’s confidence as it follows a path and trigger a recovery manoeuvre if that confidence drops too low. This could also be used to avoid obstacles. Another idea is to perform teach and repeat with multiple robots at the same time, which will require collaborative schemes between the different robots.


Biography

Dr Tobias Fischer conducts interdisciplinary research at the intersection of intelligent robotics, computer vision and computational cognition. His main goal is to develop high-performing, bio-inspired computer vision algorithms that simultaneously examine the perceptual capabilities of animals, humans and robots.

Since January 2022, Tobias has been a Lecturer in the QUT Centre for Robotics. He joined QUT as an Associate Investigator and Research Fellow working with Professor Michael Milford in January 2020. Previously, Dr Fischer was a postdoctoral researcher in the Personal Robotics Lab at Imperial College London. He received a PhD from Imperial College in January 2019. Tobias’ thesis was awarded the UK Best Thesis in Robotics Award 2018 and the Eryl Cadwaladr Davies Award for the best thesis in Imperial’s EEE Department in 2017-2018. He previously received an M.Sc. degree (distinction) in Artificial Intelligence from The University of Edinburgh in 2014 and a B.Sc. degree in Computer Engineering from Ilmenau University of Technology, Germany, in 2013. His works have attracted two best poster awards, one best paper award, and he is the senior author of the winning submission to the Facebook Mapillary Place Recognition Challenge 2020.