Mobile Robots in Human Environments

This page contains a description of my Ph.D. work.


Affiliation

I am part of the mobile robotics group at the Section for Automation and Control, Department of Electronic Systems, Aalborg University, Denmark. In the group we work on various topics, such as biped robots, UAVs, swarm robotics and human-robot interaction. But we all share a common vision: Robots Among People. This is a very broad statement, but it basically means that we do not do robotics work that will not, at some point in the future, appear in human daily life. Industrial robots, for example, are therefore not a focus area.

Ph.D. project summary

This is an overview of the Ph.D. project. For a more technical presentation, please see the publications. The title of the project is:

Mobile Robots in Human Environments - towards safe, natural and comfortable navigation

My work can be categorised within the field of human-robot interaction. More specifically, I am working with both the human and the robot motion relating to the interaction. For the robot, this means how it should move such that its motion is sociable, natural and comfortable for humans in an everyday environment. For the humans, the aim is to find out which state they are in, i.e. are they talking to other persons, are they just walking around, or are they interested in getting into interaction with the robot? The human state thus has a huge impact on how the robot should move. The hypothesis of the project is that it is possible to equip robots in an everyday environment with capabilities such that they, in a natural way, are capable of moving around, estimating human intentions and interacting with humans or other robots.
The picture on the right is Robotino, a small robot which we use for experiments in the project. The robot is also used for all sorts of exhibitions and publicity events for the university.
In the first part of the project, I have been working on making the robot learn whether humans are interested in interacting with it, based only on the measured human motion relative to the robot. Once the interest in interaction has been estimated, the next step is to get the robot to move appropriately according to the situation.
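To give a feel for how interest can be estimated from observed motion alone, here is a minimal sketch of the case-based idea described in more detail further down the page. The feature vector (distance, speed and heading relative to the robot) and the stored cases are illustrative assumptions, not the exact features used in the project:

```python
import math

# A "case" pairs an observed motion pattern with the known outcome.
# The features (distance [m], speed [m/s], heading [rad] relative to
# the robot) are illustrative assumptions, not the project's exact ones.
case_base = [
    ((1.0, 0.3, 0.1), True),    # slow approach, facing the robot
    ((1.2, 0.5, 0.0), True),
    ((3.0, 1.4, 1.5), False),   # walking past at speed
    ((4.0, 1.2, 3.0), False),   # moving away
]

def feature_distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def estimate_interest(observation):
    """Retrieve the most similar past case and reuse its outcome."""
    best_case = min(case_base,
                    key=lambda case: feature_distance(case[0], observation))
    return best_case[1]

print(estimate_interest((1.1, 0.4, 0.05)))  # resembles the "approach" cases
```

A real system would of course learn the case base online and weight the retrieved cases, but the retrieve-and-reuse loop is the core of the approach.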
Initially, the motion algorithms only consider motion around one person, but to be able to move around in the real world, the robot must be able to navigate in environments with many people. The next part of the project therefore concerns an algorithm that can plan a safe and natural path through a more densely crowded human environment.

More details about the project

In the beginning of my Ph.D. project (December 2007), we made a pilot study in Kennedy Arkaden in Aalborg. The idea was to put an autonomous robot in the middle of a public area and see what happened. The purpose was twofold. Firstly, we wanted to gather vital information and experience about the technical challenges of putting a robot in a real public environment instead of just the lab. Secondly, we wanted to see people's reactions to a fully autonomous robot, and how they felt about having robots in their everyday environment in the future. Below are some images and a video of the experiment.


It was quite interesting to see how people reacted to the robot, and we also discovered some challenges associated with putting a robot out into the real world. So in general the experiment was a success, and it provided a basis for further research on how a robot should behave in a human environment.
As described above, the robot should be able to find out if a person is willing to engage in close interaction with it. This capability was obtained using a Case-Based Reasoning (CBR) approach. CBR is basically very simple: like the human brain, it consists of a database that stores information about what the robot has experienced in the past. When the robot experiences something new, it is compared to the information and outcomes of previous experiences stored in the database, and this is used to decide what to do now. So the robot can reason like this: last time I saw a person moving around like this, he wanted to interact with me - so this person I am observing now probably wants to interact as well.

The motion around a person is then governed by a potential field that changes according to the person's interest in interaction. The potential fields are shown in the following figures. The person is at the center looking to the right, and the robot will try to move away from the red areas and towards the blue areas. The left figure shows a case where the person is interested in interaction; here the robot will move towards the blue area in front of the person. The center figure shows a case where the robot is unsure whether the person is interested. The right figure shows a case where the person is not interested in interaction; here the robot will move away from the person, but stay within the area that the person can see - not, for example, behind the person, which would be uncomfortable.
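The person-centered field can be sketched with a few Gaussian bumps: a repulsive peak on the person, an always-uncomfortable zone behind, and a low-cost approach zone in front whose depth scales with the estimated interest. The shapes and weights below are my own illustrative choices, not the exact field used in the project:

```python
import math

def person_potential(x, y, interest):
    """Cost of the robot standing at (x, y), with the person at the
    origin looking along the positive x-axis.  `interest` in [0, 1]:
    1 = interested in interaction, 0 = not interested.
    Widths and weights are illustrative assumptions."""
    r = math.hypot(x, y)
    # High cost close to the person (personal space).
    repulsion = 4.0 * math.exp(-r**2 / 0.5)
    # Behind the person (negative x) is always uncomfortable.
    behind = 2.0 * math.exp(-((x + 1.0)**2 + y**2) / 1.0)
    # A low-cost "approach" zone in front, deepening with interest.
    frontal = -2.0 * interest * math.exp(-((x - 1.5)**2 + y**2) / 1.0)
    return repulsion + behind + frontal

# With an interested person, the spot ~1.5 m in front is cheaper than
# the same distance off to the side.
front = person_potential(1.5, 0.0, interest=1.0)
side = person_potential(0.0, 1.5, interest=1.0)
print(front < side)
```

When `interest` drops to 0, the frontal valley disappears and the cheapest positions lie off to the side, within the person's field of view, matching the right-hand figure described above.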
So, how do we make a robot able to plan a trajectory through a more populated human environment? Imagine you are walking down a pedestrian street, or through a train station with a lot of people. You then have to continuously plan which way around the oncoming people you want to take. If there are a lot of people, this actually takes up a lot of your attention, since you do not want to crash into anyone. One way to make a robot do this is to build a 3D potential field landscape of the people in front of it, and plan a route through this landscape. The landscape contains peaks where there are humans and lower parts where the probability of bumping into other people is low. The objective of the robot is thus to plan a trajectory through the valleys of the landscape ahead, such that the probability of hitting persons is minimised. An example of such a landscape with five persons and different possible trajectories is shown in the following figures. The green dot to the right is where the robot starts.

A bit more mathematically, the problem is to minimise the cost of traversing the potential field landscape, where the constraints of the robot motion must be taken into account. The constraints on the robot motion are, for example, a maximum speed, that the robot cannot move sideways, and that it cannot change speed or direction instantly. The solution for designing such an algorithm, which is able to plan a path through the environment, has been obtained using a Rapidly-exploring Random Tree (RRT) algorithm. This is basically an algorithm that tries a number of random trajectories and picks the best one. See this paper for a much more technical explanation. An example of an RRT can be seen in the following figure. All the red trajectories are different possible trajectories for the robot; the green one has the lowest cost and has therefore been selected.
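The "try random trajectories, pick the cheapest" idea can be sketched as random shooting under the motion constraints mentioned above: the robot only moves forward and can only change heading by a bounded amount per step. This is a simplification of the RRT used in the project (a real RRT grows the samples as a tree rather than rolling out independent trajectories), and the obstacle positions, goal and cost weights are assumptions for illustration:

```python
import math
import random

random.seed(0)  # make the sketch reproducible

# Assumed person positions and goal; illustrative only.
people = [(2.0, 1.0), (3.0, -1.0)]
goal = (5.0, 0.0)

def cost_at(x, y):
    """Potential-field cost: one Gaussian peak per person."""
    return sum(math.exp(-((x - px)**2 + (y - py)**2) / 0.64)
               for px, py in people)

def rollout(n_steps=25, dt=0.2):
    """One random trajectory under unicycle-like constraints: the robot
    cannot move sideways and can only turn a bounded amount per step."""
    x, y, heading, cost = 0.0, 0.0, 0.0, 0.0
    speed = 1.0  # constant forward speed [m/s]
    path = [(x, y)]
    for _ in range(n_steps):
        heading += random.uniform(-0.3, 0.3)  # bounded turn rate
        x += speed * math.cos(heading) * dt   # forward motion only
        y += speed * math.sin(heading) * dt
        cost += cost_at(x, y) * dt
        path.append((x, y))
    # Penalise ending far from the goal.
    cost += 2.0 * math.hypot(x - goal[0], y - goal[1])
    return cost, path

# Try many random trajectories and keep the cheapest (the "green" one).
best_cost, best_path = min(rollout() for _ in range(200))
```

The selected trajectory curves around the cost peaks while still making progress toward the goal, which is the behaviour the figures above illustrate.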



For simple real-world experiments with the robot moving around in an environment with one person, the Robotino robot shown above has been used. But the Robotino is relatively slow for environments with many people moving around, so a new robot, a Segway RMP, with much faster dynamics has been acquired for that purpose. Below is a video demonstrating the robot moving around in our laboratory performing localization, path planning and local obstacle avoidance. The speed is very slow in the video, but can be much higher. This demonstrated work has been done by an 8th-semester group that I have supervised. Their homepage can be found here.



A more thorough and technical description of the plans for the Ph.D. work can be found in my updated study plan, which was made after one year of the Ph.D. study, or in the paper from the "Writing and Reviewing Scientific Papers" course below. The latter contains a more popular version of the project, which should be understandable for a non-technical person. This page is no longer fully updated, but a full version of the final Ph.D. thesis can be found here.

Attended Ph.D. Courses

Most of the courses are from the course catalog of the Doctoral School of Technology and Science at Aalborg University.

Workshop: Motion Planning - from Theory to Practice (June 27, 2010) - 2 ECTS
Workshop on motion planning and how it is used in practical applications, mostly sampling-based motion planning. (At the conference RSS2009)

Mobile Robotics Group (whole Ph.D. period) - 1 ECTS
Activities relating to the Mobile Robotics Group, such as meetings and taking part in tours to DTU and SDU.

Workshop: Probabilistic Approaches for Control and Robotics (December 11, 2009) - 1 ECTS
How to use probabilistic methods for controlling robots, particularly for trajectory planning. (Workshop at the conference NIPS 2009)

Workshop: People Detection and Tracking (May 12, 2009) - 1 ECTS
Workshop on how to detect and track persons using image processing and/or laser scanners. (At the conference ICRA 2009)

Problems of Advanced Optimization (April 20, 21, 27, 28, 29, 2009) - 3 ECTS
Introduction to optimization problems, optimal control theory, calculus of variations, the Pontryagin maximum principle and other applications.

Advanced Optimization (April 15-17 and May 4, 2009) - 4 ECTS
Variational methods, unconstrained and constrained problems, Lagrange multipliers, Kuhn-Tucker optimality conditions, numerical methods for minimization of functions, constrained minimization (linear programming, the Simplex method, convex programming, ...), CONLIN and MMA, sensitivity analysis, and mixed integer programming.

Writing and Reviewing Scientific Papers (September 16 and November 25, 2008) - 3.75 ECTS
Learning how to write a scientific paper, writing a popular paper about the Ph.D. project (which the other course participants must be able to understand), and reviewing papers from the other participants. (Course Description) and (final paper)

Bayesian Statistics, Simulations and Software - with a View to Application Examples (May 2, 4, 6, 9, 11, 13, 2008) - 4 ECTS
Basic Bayesian statistical theory, simulation-based Bayesian inference, Gibbs sampling, and Markov chain theory. Software: R, JAGS, WinBUGS, DoodleBUGS.

Robot Motion (April 28-29 and May 19-20, 2008) - 3 ECTS
Configuration space, potential functions, Voronoi diagrams, road maps, visibility graphs, Kalman filtering, particle filtering, mapping, cell decompositions, trajectory planning, and motion planning of non-holonomic systems.

Structurally Constrained Control (Thursdays, spring 2008) - 2 ECTS
Decentralized control and estimation, hierarchical control, distributed control and estimation, parametrisation of structured controllers, cooperative control, additive controller design, and control structure design.

Tutorial on Experimental Design (March 12, 2008) - 0.5 ECTS
How to design human-robot interaction experiments. (At the conference HRI 2008)

Workshop on Metrics in Human-Robot Interaction (March 12, 2008) - 0.5 ECTS
Metrics in HRI. (At the conference HRI 2008)

PBL (January 30-31 and April 24, 2008) - 1 ECTS
Problem Based Learning.

Machine Learning (December 5, 7, 9, 13, 15, 2007) - 3 ECTS
Decision theory, parametric and non-parametric methods, dimension reduction, clustering, linear discrimination and SVM, neural networks, and reinforcement learning.

Total: 29.75 ECTS