Mediating Human-Robot Collaboration through Mixed Reality Cues

Description

This work presents a communication paradigm, using a context-aware mixed reality approach, for instructing human workers when collaborating with robots. The main objective of this approach is to utilize the physical work environment as a canvas to communicate task-related instructions and robot intentions in the form of visual cues. A vision-based object tracking algorithm is used to precisely determine the pose and state of physical objects in and around the workspace. A projection mapping technique is used to overlay visual cues on tracked objects and the workspace. Simultaneous tracking and projection onto objects enables the system to provide just-in-time instructions for carrying out a procedural task. The system can also inform and warn humans about the robot's intentions and the safety of the workspace. It was hypothesized that using this system for executing a human-robot collaborative task would improve the overall performance of the team and provide a positive experience for the human partner. To test this hypothesis, an experiment involving human subjects was conducted, and the performance (both objective and subjective) of the presented system was compared with a conventional method based on printed instructions. It was found that projecting visual cues enabled human subjects to collaborate more effectively with the robot and resulted in higher efficiency in completing the task.
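As a rough illustration of the tracking-and-projection loop described above, the sketch below detects fiducial markers with OpenCV, maps their positions into projector coordinates through a pre-calibrated homography, and renders a text cue next to each tracked object. The marker dictionary, the identity homography, and the cue messages are illustrative assumptions, not the tracking pipeline actually used in this work.

```python
import cv2
import numpy as np

# Hypothetical camera-to-projector homography; in practice this would come
# from a calibration step. Identity is used here only as a placeholder.
CAM_TO_PROJ = np.eye(3, dtype=np.float32)

# Illustrative mapping from marker ID to the instruction to project next to it.
CUES = {7: "Pick up this part", 9: "Keep clear: robot moving here"}

def render_cues(frame):
    """Detect fiducial markers and draw task cues at their projected locations."""
    aruco = cv2.aruco  # requires OpenCV >= 4.7 for the ArucoDetector API
    detector = aruco.ArucoDetector(aruco.getPredefinedDictionary(aruco.DICT_4X4_50),
                                   aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(frame)
    canvas = np.zeros_like(frame)            # image that would be sent to the projector
    if ids is None:
        return canvas
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        if int(marker_id) not in CUES:
            continue
        center = marker_corners.reshape(-1, 2).mean(axis=0)
        # Map the marker's camera-frame location into projector coordinates.
        pt = cv2.perspectiveTransform(center.astype(np.float32).reshape(1, 1, 2),
                                      CAM_TO_PROJ)[0, 0]
        cv2.putText(canvas, CUES[int(marker_id)], (int(pt[0]), int(pt[1])),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    return canvas

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("projector", render_cues(frame))  # full-screen on the projector in practice
        if cv2.waitKey(1) == 27:                     # Esc to quit
            break
    cap.release()
```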
Date Created
2017

Bi-manual learning for a basketball playing robot

Description

Sports activities have been a cornerstone in the evolution of humankind through the ages, from the ancient Roman Empire to the Olympics of the 21st century. These activities have been used as a benchmark to evaluate how humans have progressed through the sands of time. In the 21st century, machines, aided by powerful computing and relatively new computing paradigms, have made a good case for taking up the mantle. Even though machines have been able to perform complex tasks and maneuvers, they have struggled to match the dexterity, coordination, manipulability, and acuteness displayed by humans. Bi-manual tasks are more complex and introduce additional variables, such as coordination, making them harder to evaluate.

A task capable of demonstrating the above skill set would be a good measure of the progress in the field of robotic technology. Therefore, a dual-armed robot has been built and taught to handle the ball and make the basket successfully, demonstrating the capability of using both arms. A combination of machine learning techniques, namely reinforcement learning and imitation learning, has been used along with advanced optimization algorithms to accomplish the task.
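The abstract does not detail the learning pipeline, but the general pattern of seeding a policy from demonstrations and then refining it from task reward can be sketched on a toy throwing model. Everything below (the projectile stand-in, the demonstration parameters, the cross-entropy refinement) is an illustrative assumption rather than the method used in the thesis.

```python
import numpy as np

# A toy stand-in for the throwing task: parameters are [release speed, release angle].
# Reward is higher the closer the ball lands to the basket (simple projectile model).
BASKET_DISTANCE = 2.5   # metres (illustrative)
G = 9.81

def throw_reward(params):
    speed, angle = params
    landing = (speed ** 2) * np.sin(2 * angle) / G     # range of an ideal projectile
    return -abs(landing - BASKET_DISTANCE)

# "Imitation" step: initialise the search around parameters averaged from
# (hypothetical) demonstrations instead of starting from scratch.
demo_params = np.array([[4.8, 0.70], [5.2, 0.78], [5.0, 0.74]])
mean, std = demo_params.mean(axis=0), demo_params.std(axis=0) + 0.1

# "Reinforcement" step: a cross-entropy method refines the parameters from reward alone.
rng = np.random.default_rng(0)
for iteration in range(30):
    samples = rng.normal(mean, std, size=(64, 2))
    rewards = np.array([throw_reward(s) for s in samples])
    elite = samples[np.argsort(rewards)[-8:]]           # keep the best throws
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3

print("refined release parameters:", mean, "reward:", throw_reward(mean))
```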
Date Created
2016

An investigation of topics in model-lite planning and multi-agent planning

Description

Automated planning addresses the problem of generating a sequence of actions that enables a set of agents to achieve their goals. This work investigates two important topics from the field of automated planning, namely model-lite planning and multi-agent planning. For model-lite planning, I focus on a prominent model named Annotated PDDL and its related application of robust planning. For this model, I try to identify a method of leveraging additional domain information (available in the form of successful plan traces). I use this information to refine the set of possible domains and thereby generate more robust plans (compared to the original planner) for any given problem. This method also provides a way of overcoming one of the major drawbacks of the original approach, namely the need for a domain writer to explicitly identify the annotations.
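As a loose sketch of how successful plan traces can prune an incomplete model, the toy code below enumerates completions of a propositional action with "possible" preconditions and effects, and keeps only those completions consistent with an observed trace. The action, annotations, and trace are hypothetical and far simpler than Annotated PDDL, but the filtering idea is the same.

```python
from itertools import product

# Toy action model with uncertain preconditions and effects, standing in for the
# annotated ("possible") parts of an incomplete domain. Each candidate domain is
# a choice of which possible annotations actually hold.
POSSIBLE_PRECONDS = {"unstack": [frozenset({"clear"}), frozenset({"clear", "handempty"})]}
POSSIBLE_EFFECTS  = {"unstack": [frozenset({"holding"}), frozenset({"holding", "not-clear"})]}

def candidate_domains():
    """Enumerate every completion of the incomplete action model."""
    for pre, eff in product(POSSIBLE_PRECONDS["unstack"], POSSIBLE_EFFECTS["unstack"]):
        yield {"unstack": (pre, eff)}

def consistent(domain, trace):
    """Check that a successful trace of (state, action, next_state) steps
    could have been produced under this candidate domain."""
    for state, action, next_state in trace:
        pre, eff = domain[action]
        if not pre <= state:                 # the precondition must have held
            return False
        if not eff <= next_state:            # the effects must appear afterwards
            return False
    return True

# A hypothetical observed trace from a successful execution.
trace = [({"clear"}, "unstack", {"holding"})]

surviving = [d for d in candidate_domains() if consistent(d, trace)]
print(f"{len(surviving)} of {len(list(candidate_domains()))} candidate domains remain")
```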

For the second topic, the central question I ask is: under what conditions are multiple agents actually needed to solve a given planning problem? To answer this question, the multi-agent planning (MAP) problem is classified into several sub-classes, and I identify the conditions in each of these sub-classes that can lead to required cooperation (RC). I also identify certain sub-classes of multi-agent planning problems (named DVC-RC problems) that can be simplified using a single virtual agent. This insight is later used to propose a new planner designed to solve problems from these sub-classes. Evaluation of this new planner on all the current multi-agent planning benchmarks reveals that most of them belong to only a small subset of the possible classes of multi-agent planning problems.
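The single-virtual-agent simplification can be sketched as a transformation that re-owns every agent's actions under one virtual agent and solves the result centrally; the data structures below are placeholders, not the planner or benchmarks from the thesis.

```python
# Minimal sketch of the single-virtual-agent idea: when no genuine concurrency is
# required, every agent's actions can be relabelled as actions of one virtual agent,
# and the resulting single-agent problem solved with an ordinary planner.

def to_single_agent(map_problem):
    """Collapse a multi-agent planning problem into a single-agent one."""
    virtual_actions = {}
    for agent, actions in map_problem["agents"].items():
        for name, action in actions.items():
            # Prefix with the original agent so the final plan can be split back out.
            virtual_actions[f"{agent}__{name}"] = action
    return {"agents": {"virtual": virtual_actions},
            "init": map_problem["init"],
            "goal": map_problem["goal"]}

# Hypothetical two-agent problem with placeholder action definitions.
example = {
    "agents": {"rover1": {"drive": {"pre": set(), "eff": set()}},
               "rover2": {"sample": {"pre": set(), "eff": set()}}},
    "init": set(),
    "goal": set(),
}
print(sorted(to_single_agent(example)["agents"]["virtual"]))
```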
Date Created
2016

Extended LTLvis motion planning interface

Description

Robots are becoming an important part of our lives and industry. Although many robot control interfaces have been developed to simplify control and improve the user experience, users still cannot control robots comfortably. As robot capabilities improve, the requirements for universality and ease of use of robot control interfaces also increase. This research introduces a graphical interface for Linear Temporal Logic (LTL) specifications for mobile robots. It is a sketch-based interface built on the Android platform, which makes the LTL control interface friendlier to non-expert users. By predefining a set of areas of interest, this interface can quickly and efficiently create plans that satisfy extended plan goals in LTL. The interface also allows users to customize the paths for a plan by sketching a set of reference trajectories. Given the custom paths drawn by the user, the LTL specification, and the environment, the interface generates a plan that balances the customized paths and the LTL specification. We also present experimental results with the implemented interface.
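A minimal sketch of how a candidate plan might be scored against both an LTL-style goal over predefined regions and the user's sketched reference path is given below; the region layout, the specification syntax, and the cost function are illustrative assumptions rather than the interface's actual planner.

```python
import numpy as np

# Hypothetical regions of interest the user would predefine on the map (centres).
REGIONS = {"A": (1.0, 1.0), "B": (4.0, 2.0), "C": (2.5, 1.5)}
RADIUS = 0.5

# An LTL-style goal over those regions (illustrative syntax, not the tool's own):
# eventually reach A, eventually reach B, and always avoid C.
SPEC = "F(A) & F(B) & G(!C)"

def deviation_cost(plan, reference):
    """Penalty for straying from the user's sketched reference trajectory."""
    plan, reference = np.asarray(plan, float), np.asarray(reference, float)
    return sum(np.min(np.linalg.norm(reference - p, axis=1)) for p in plan)

def violates_avoidance(plan):
    """Check the 'always avoid C' part of the specification on a discrete path."""
    c = np.array(REGIONS["C"])
    return any(np.linalg.norm(np.array(p, float) - c) < RADIUS for p in plan)

# A candidate plan is acceptable only if it satisfies the specification; among
# acceptable plans, the interface would prefer the lowest deviation cost.
candidate = [(0, 0), (1, 1), (3, 2), (4, 2)]
sketch    = [(0, 0), (1, 0.8), (2.2, 2.0), (4, 2)]
print("avoids C:", not violates_avoidance(candidate),
      "deviation:", round(deviation_cost(candidate, sketch), 2))
```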
Date Created
2016

Human factors analysis of automated planning technologies for human-robot teaming

Description

Humans and robots need to work together as a team to accomplish certain shared goals due to the limitations of current robot capabilities. Human assistance is required because human capabilities are often better suited for certain tasks and complement robot capabilities in many situations. Given the necessity of human-robot teams, it has long been assumed that for the robotic agent to be an effective team member, it must be equipped with automated planning technologies that help it achieve the goals delegated to it by its human teammates, as well as deduce its own goals to proactively support its human counterpart by inferring their goals. However, there has not been any systematic evaluation of the accuracy of this claim.

In my thesis, I perform a human factors analysis of the effectiveness of such automated planning technologies for remote human-robot teaming. In the first part of my study, I investigate the effectiveness of automated planning in remote human-robot teaming scenarios. In the second part, I investigate the effectiveness of a proactive robot assistant in remote human-robot teaming scenarios.

Both investigations are conducted in a simulated urban search and rescue (USAR) scenario in which the human-robot teams are deployed during the early phases of an emergency response to explore all areas of the disaster scene. Through both studies, I evaluate how effective automated planning technology is in helping human-robot teams move closer to human-human teams. I utilize both objective measures (such as accuracy and time spent on primary and secondary tasks, Robot Attention Demand, etc.) and a set of subjective Likert-scale questions (on situation awareness, immediacy, etc.) to investigate the trade-offs between different types of remote human-robot teams. The results from both studies suggest that intelligent robots with automated planning capability and proactive support ability are welcomed in general.
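One of the objective measures mentioned above, Robot Attention Demand, is commonly defined in the HRI literature as the fraction of task time the operator must spend attending to the robot. The snippet below follows that common definition, which may differ in detail from how the measure was computed in this study.

```python
def robot_attention_demand(interaction_time, neglect_time):
    """Fraction of task time the operator must devote to the robot.

    Follows the common HRI formulation RAD = IT / (IT + NT), where IT is the
    time spent interacting with the robot and NT the time it can be neglected;
    treat this as an illustrative formula, not necessarily the study's exact one.
    """
    return interaction_time / (interaction_time + neglect_time)

# Example: 6 minutes attending to the robot, 14 minutes free for other tasks.
print(robot_attention_demand(6.0, 14.0))   # -> 0.3
```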
Date Created
2015