Wednesday, June 11, 2008

Robots to Help in Homecare

The Center for Healthcare Robotics within the Health Systems Institute at the Georgia Institute of Technology and Emory University is looking at ways robots can help provide homecare for patients. The research team, led by Charlie Kemp, director of the center, has found a way to instruct a robot named El-E to find and deliver items it may never have seen before using a laser pointer. The researchers are now gathering input from ALS patients and their doctors to prepare the robot to assist people with severe mobility challenges.

The verbal instructions a person gives to help find an object are very difficult for a robot to act on. These commands require the robot to understand everyday human language and a description of the object at a level well beyond the state of the art in language understanding and object perception. According to Kemp, whose team works in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory, “Robots have some ability to retrieve specific predefined objects, but retrieving generic everyday objects is a challenge for robots.”

The laser pointer interface and methods developed by Kemp’s team overcome this challenge by giving people a direct way to communicate a location of interest to El-E and by enabling the robot to pick up an object found at that location. Through these innovations, the robot can retrieve objects without understanding what the object is or what it is called.

The researchers see fetching as a core capability for future robots in healthcare settings such as the home. El-E is able to find objects in the home because it takes advantage of structures common to indoor environments: most objects rest on smooth, flat surfaces with a fairly uniform appearance, such as floors, tables, and shelves. Regardless of height, the robot can localize and pick up objects by raising its arm and sensors to match the height of the object’s location.
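As a rough illustration of that height-matching step, the short Python sketch below clamps a commanded lift height to the actuator's travel before raising the arm and sensor carriage; the travel limits and function name are illustrative assumptions, not El-E's actual software or specifications.

```python
def command_lift_height(target_z_m, lift_min_m=0.0, lift_max_m=1.2):
    """Raise the arm/sensor carriage to the height of the selected location.

    target_z_m is the estimated height (meters) of the laser-designated point.
    The travel limits are placeholder values, not El-E's real limits.
    """
    clamped = max(lift_min_m, min(lift_max_m, target_z_m))
    # A real controller would now send `clamped` to the lift actuator.
    return clamped
```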

The robot uses a custom-built omni-directional camera to see most of the room. Once it detects that a selection has been made with the laser pointer, the robot points two cameras at the laser spot and triangulates its position in three-dimensional space.
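To make the triangulation step concrete, here is a minimal sketch (Python with NumPy, not El-E's actual code) that estimates the laser spot's 3D position as the midpoint of the shortest segment between the two camera rays; it assumes the camera centers and ray directions are already expressed in a common robot frame.

```python
import numpy as np

def triangulate_laser_spot(cam1_center, ray1, cam2_center, ray2):
    """Midpoint-of-closest-approach triangulation for two camera rays.

    cam*_center: 3D camera positions in the robot frame.
    ray*: direction vectors from each camera toward the laser spot.
    Returns the estimated 3D spot position, or None if the rays are
    nearly parallel.
    """
    d1 = ray1 / np.linalg.norm(ray1)
    d2 = ray2 / np.linalg.norm(ray2)
    w0 = cam1_center - cam2_center

    dot11, dot12, dot22 = d1 @ d1, d1 @ d2, d2 @ d2
    dot1w, dot2w = d1 @ w0, d2 @ w0
    denom = dot11 * dot22 - dot12 * dot12
    if abs(denom) < 1e-9:
        return None  # rays almost parallel: triangulation is ill-conditioned

    s = (dot12 * dot2w - dot22 * dot1w) / denom  # distance along ray 1
    t = (dot11 * dot2w - dot12 * dot1w) / denom  # distance along ray 2
    p1 = cam1_center + s * d1
    p2 = cam2_center + t * d2
    return (p1 + p2) / 2.0
```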

Next, the robot estimates where the item is located. If the location is above the floor, the robot first finds the edge of the surface on which the object is sitting, such as the edge of a table. It then scans across the surface with its laser range finder to locate the object, moves its hand above the object, and uses a camera in its hand to visually distinguish the object from the texture of the floor or table. After refining the hand’s position, the robot lowers its hand onto the object, using its sensors to decide when to stop descending, and closes on the object with a secure grip.
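The overall retrieval sequence might be organized along the following lines. This is a hedged, pseudocode-style sketch in Python: the `robot` object and every helper name (find_surface_edge, scan_surface_with_laser, and so on) are hypothetical placeholders used only to show the order of operations, not functions from El-E's software.

```python
def fetch_item(robot, spot_3d):
    """Sketch of the retrieval sequence described above.

    `robot` and all of its methods are hypothetical placeholders that
    mirror the narrative steps, not El-E's real interfaces.
    """
    # Raise the arm and sensors to the height of the selected location.
    robot.raise_arm_and_sensors(height=spot_3d.z)

    if spot_3d.z > robot.floor_height:
        # Object sits on an elevated surface: find and approach its edge.
        edge = robot.find_surface_edge(near=spot_3d)
        robot.approach(edge)

    # Sweep the laser range finder across the surface to locate the object.
    object_location = robot.scan_surface_with_laser(around=spot_3d)

    # Hover the hand above the object and use the hand camera to separate
    # the object from the floor or table texture, then refine the position.
    robot.move_hand_above(object_location)
    refined = robot.segment_object_with_hand_camera(object_location)
    robot.refine_hand_position(refined)

    # Descend until the sensors say to stop, then close the gripper.
    robot.lower_hand_until_sensors_trigger()
    robot.close_gripper_securely()
```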

Once the robot has picked up the item, the laser pointer can be used to guide it to another location to deposit the item or to direct it to deliver the item to a person. El-E distinguishes between these two situations by looking for a face near the selected location; if it finds one, it presents the item to the person.
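A rough stand-in for that face check might look like the snippet below, which uses OpenCV's bundled Haar-cascade frontal-face detector on an image cropped near the selected location; this is an illustrative substitute, not the detector El-E actually uses.

```python
import cv2

def choose_delivery_mode(image_bgr):
    """Return 'present_to_person' if a face is visible near the selected
    location, otherwise 'place_on_surface'.

    image_bgr is assumed to be a camera image (or crop) centered on the
    laser-designated spot. The Haar cascade stands in for El-E's own
    face detector.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return "present_to_person" if len(faces) > 0 else "place_on_surface"
```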

The researchers are now working to expand El-E’s capabilities to include switching lights on and off when the user selects a light switch and opening and closing doors when the user selects a door knob.