Lunar Rover Navigation: Impact of Illumination Conditions on AI and Human Perception of Crater Sizes

Description


When rover mission planners lay out a path for their rover, they use a combination of stereo images and statistical and geological data to plot a course for the vehicle to follow during its mission. However, detailed images of the lunar surface that indicate the presence of specific hazards, such as craters, are scarce, and creating such crater maps is time-consuming. Little is also known about how the varying lighting conditions caused by a changing solar incidence angle affect perception. This paper addresses these issues by investigating how varying the incidence angle of the sun affects how well humans and AI can detect craters. It also examines how AI can accelerate the crater-mapping process and how well it performs relative to a human annotating crater maps by hand. To accomplish this, several sets of images of the same spots on the lunar surface were taken at varying incidence angles and were annotated both by hand and by an AI. The AI's performance was then rated by calculating its precision and recall, treating the human annotations as the ground truth. It was found that there appears to be a maximum incidence angle at which detection rates are highest, and that the AI's detection of craters is currently poor but can be improved. These results can inform future, more expansive investigations into how lighting affects the perception of hazards to rovers, as well as the role AI can play in creating crater maps.
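The precision/recall evaluation described above can be sketched in a few lines. Note that this is a minimal illustration, not the paper's actual scoring code: the matching rule (greedy nearest-center matching) and the `max_dist` threshold are assumptions, since the abstract does not state how a detected crater is paired with a human annotation.

```python
import math

def evaluate_detections(detected, ground_truth, max_dist=5.0):
    """Greedily match detected crater centers to human-annotated centers,
    then compute precision and recall with the human annotations taken
    as ground truth.

    detected, ground_truth: lists of (x, y) crater centers (pixels).
    max_dist: assumed maximum center distance to count as a match.
    """
    unmatched = list(ground_truth)
    true_pos = 0
    for dx, dy in detected:
        # Find the nearest still-unmatched ground-truth crater.
        best, best_d = None, max_dist
        for gx, gy in unmatched:
            d = math.hypot(dx - gx, dy - gy)
            if d <= best_d:
                best, best_d = (gx, gy), d
        if best is not None:
            unmatched.remove(best)  # each annotation matches at most once
            true_pos += 1
    precision = true_pos / len(detected) if detected else 0.0
    recall = true_pos / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Example: three detections, two annotated craters; two detections match.
p, r = evaluate_detections([(0, 0), (10, 10), (50, 50)], [(0, 1), (10, 11)])
# p = 2/3 (one false positive), r = 1.0 (every annotation was found)
```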

Date Created
2021-05

Coordinated Navigation and Localization of an Autonomous Underwater Vehicle Using an Autonomous Surface Vehicle in the OpenUAV Simulation Framework

Description

The need to incorporate game engines into robotics tools becomes increasingly pressing as their graphics continue to become more photorealistic. This thesis presents a simulation framework, referred to as OpenUAV, that addresses cloud-simulation and photorealism challenges for academic and research use. In this work, OpenUAV is used to simulate an autonomous underwater vehicle (AUV) closely following a moving autonomous surface vehicle (ASV) in an underwater coral reef environment. The framework combines the Unity3D game engine with the robotics software Gazebo to take advantage of Unity3D's perception capabilities and Gazebo's physics simulation. The software is developed as a containerized solution that is deployable on cloud and on-premise systems.

This approach of pairing Gazebo's physics with Unity3D's perception is evaluated for a team of marine vehicles (an AUV and an ASV) in a coral reef environment. A coordinated navigation and localization module is presented that allows the AUV to follow the path of the ASV. A fiducial marker underneath the ASV facilitates pose estimation of the AUV, and the pose estimates are filtered using the known dynamical system model of both vehicles for better localization. This thesis also investigates different fiducial markers and their detection rates in the Unity3D underwater environment. The limitations and capabilities of this Unity3D-perception and Gazebo-physics approach are examined.
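The model-based filtering step above can be illustrated with a deliberately simplified stand-in. The thesis filters pose estimates using the known dynamical system model of both vehicles; the sketch below substitutes a hypothetical constant-velocity model and a fixed blending weight `alpha` to show the general idea of smoothing raw fiducial-marker measurements with a motion prediction. All names and parameters here are illustrative assumptions, not the thesis's implementation.

```python
class PoseFilter:
    """Blend raw fiducial pose measurements with a constant-velocity
    prediction (a hypothetical simplification of model-based filtering).

    alpha: weight given to the new measurement (0 = trust the model,
    1 = trust the raw marker detection).
    """

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.pos = None                 # last filtered (x, y, z)
        self.vel = (0.0, 0.0, 0.0)      # estimated velocity per axis

    def update(self, measured, dt):
        if self.pos is None:
            # First measurement: nothing to predict from yet.
            self.pos = tuple(measured)
            return self.pos
        # Predict where the vehicle should be under constant velocity.
        pred = tuple(p + v * dt for p, v in zip(self.pos, self.vel))
        # Blend the prediction with the fiducial measurement.
        new = tuple((1 - self.alpha) * pr + self.alpha * m
                    for pr, m in zip(pred, measured))
        # Re-estimate velocity from the filtered positions.
        self.vel = tuple((n - p) / dt for n, p in zip(new, self.pos))
        self.pos = new
        return new

# Example: a noisy jump in the marker reading is pulled back toward
# the prediction, yielding a smoother trajectory.
f = PoseFilter(alpha=0.5)
f.update((0.0, 0.0, 0.0), 0.1)      # initializes the state
smoothed = f.update((1.0, 0.0, 0.0), 1.0)   # -> (0.5, 0.0, 0.0)
```

A full treatment would use the coupled AUV/ASV dynamics (and typically a Kalman filter) rather than this per-axis blend, but the structure — predict from the model, correct with the marker — is the same.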
Date Created
2020