
Video about Hybrid Reasoning on the PR2

We have posted a video introducing the DFG Research Unit on Hybrid Reasoning in general and the C1 Robotics sub-project in particular. The video presents a demo on the PR2 robot that will serve as a baseline system and testbed for further research. The C1 project is a joint effort of the Research Lab Autonomous Intelligent Systems (University of Freiburg), the Knowledge-Based Systems Group (RWTH Aachen University), and the Research Group Foundations of Artificial Intelligence (University of Freiburg).

The Hybrid Reasoning C1 Robotics project investigates effective and efficient ways to combine quantitative and qualitative aspects of knowledge representation and reasoning. For the video, we implemented a baseline system for active perception: we want the robot to reason about its current beliefs and, if necessary, decide what to do to improve them.
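
To make that idea concrete, here is a minimal sketch of such a decision step, assuming a scalar belief-quality measure and a fixed set of candidate observation poses. The types, the threshold, and the function name are all hypothetical illustrations, not part of the actual system, which uses a PDDL planner (and, in the future, Readylog) instead of a hard-coded rule:

```cpp
// Hypothetical sketch of an active perception decision step.
#include <vector>

struct Pose { double x, y, theta; };

struct Belief {
  double quality;  // assumed scalar confidence in the current object hypotheses
};

// Decide whether another observation is needed and, if so, from where.
// Returns true and sets next_pose if the belief should be improved.
bool decide_next_observation(const Belief &belief,
                             const std::vector<Pose> &candidates,
                             Pose &next_pose)
{
  const double GOOD_ENOUGH = 0.9;  // hypothetical quality threshold
  if (belief.quality >= GOOD_ENOUGH || candidates.empty()) {
    return false;  // current belief suffices, no further action needed
  }
  next_pose = candidates.front();  // a real planner would choose more cleverly
  return true;
}
```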

The base system was built on ROS for the PR2 and on the TidyUpRobot demo from Freiburg. The PDDL-defined planning domain was adapted for action planning in our active perception scenario. The perception system was based on Fawkes' tabletop-objects plugin and on the generic robot database recording with MongoDB.

The planner decided on a sequence of positions to move to; at each position, the robot waited for a short time and noted the timestamp. Database recording was running the whole time, storing in particular transforms, images, and point clouds from the Kinect. After each position, the planner triggered the pcl-db-merge plugin as another action to merge and align the point clouds: the data for the recorded timestamps was retrieved from the database, an initial estimate of the offset between the clouds was derived from the robot's AMCL (Fawkes port) pose at the respective recording times, and the clouds were then refined using pairwise ICP alignment implemented with PCL.

The perception pipeline itself was improved to detect cylinder-shaped objects such as certain cups. Running the pipeline on the merged point clouds eventually led to better results, because occlusion shadows and incomplete object shapes were resolved by taking data from multiple perspectives into account.
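
To illustrate the database retrieval step, the following sketch fetches the newest point cloud document recorded at or before a given timestamp. It uses the modern mongocxx driver for readability; the database and collection names, the timestamp field, and the schema are assumptions for the example, not the layout actually used by Fawkes' recording:

```cpp
// Hypothetical sketch: retrieve the latest recorded document at or before
// a given timestamp from a MongoDB robot-data log.
#include <bsoncxx/builder/basic/document.hpp>
#include <bsoncxx/builder/basic/kvp.hpp>
#include <bsoncxx/json.hpp>
#include <mongocxx/client.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/options/find.hpp>
#include <mongocxx/uri.hpp>
#include <cstdint>
#include <iostream>

int main()
{
  using bsoncxx::builder::basic::kvp;
  using bsoncxx::builder::basic::make_document;

  mongocxx::instance inst{};  // driver init, once per process
  mongocxx::client conn{mongocxx::uri{"mongodb://localhost:27017"}};
  auto pointclouds = conn["robot_log"]["pointclouds"];  // hypothetical names

  const std::int64_t ts = 1363615140000;  // recording timestamp (ms), example

  mongocxx::options::find opts;
  opts.sort(make_document(kvp("timestamp", -1)));  // newest first

  // Latest document recorded at or before the requested timestamp.
  auto doc = pointclouds.find_one(
      make_document(kvp("timestamp", make_document(kvp("$lte", ts)))), opts);
  if (doc) {
    std::cout << bsoncxx::to_json(doc->view()) << std::endl;
  }
  return 0;
}
```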
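The pairwise ICP refinement with a pose-based initial guess might look roughly like this in PCL. The function name and the parameter values are illustrative, not taken from the pcl-db-merge code:

```cpp
// Sketch of pairwise ICP refinement in PCL: the initial guess is the offset
// estimated from the robot's localization poses at the two recording times.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>
#include <Eigen/Core>

typedef pcl::PointXYZ PointT;

pcl::PointCloud<PointT>::Ptr
align_pair(const pcl::PointCloud<PointT>::Ptr &source,
           const pcl::PointCloud<PointT>::Ptr &target,
           const Eigen::Matrix4f &initial_guess /* from AMCL poses */)
{
  pcl::IterativeClosestPoint<PointT, PointT> icp;
  icp.setInputSource(source);
  icp.setInputTarget(target);
  icp.setMaximumIterations(50);           // illustrative parameters
  icp.setMaxCorrespondenceDistance(0.1);  // meters

  pcl::PointCloud<PointT>::Ptr aligned(new pcl::PointCloud<PointT>());
  // Seed ICP with the pose-based offset estimate, then refine.
  icp.align(*aligned, initial_guess);

  if (icp.hasConverged()) {
    // icp.getFinalTransformation() maps the source into the target frame.
    return aligned;
  }
  return source;  // fall back to the unrefined cloud
}
```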
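Cylinder detection of the kind needed for cups is typically done in PCL with RANSAC model fitting on points plus surface normals; here is a sketch under that assumption. The parameter values are illustrative, not the demo's actual settings:

```cpp
// Sketch of cylinder detection with RANSAC model fitting in PCL.
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/features/normal_3d.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/sac_segmentation.h>

typedef pcl::PointXYZ PointT;

void detect_cylinder(const pcl::PointCloud<PointT>::Ptr &cloud,
                     pcl::PointIndices::Ptr inliers,
                     pcl::ModelCoefficients::Ptr coefficients)
{
  // Cylinder fitting needs surface normals in addition to the points.
  pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>());
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>());
  pcl::NormalEstimation<PointT, pcl::Normal> ne;
  ne.setSearchMethod(tree);
  ne.setInputCloud(cloud);
  ne.setKSearch(50);
  ne.compute(*normals);

  // Fit a cylinder model; inliers are the points belonging to the object.
  pcl::SACSegmentationFromNormals<PointT, pcl::Normal> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_CYLINDER);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setNormalDistanceWeight(0.1);
  seg.setMaxIterations(10000);
  seg.setDistanceThreshold(0.05);   // 5 cm tolerance
  seg.setRadiusLimits(0.02, 0.06);  // cup-sized radii in meters
  seg.setInputCloud(cloud);
  seg.setInputNormals(normals);
  seg.segment(*inliers, *coefficients);
}
```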

In the future, we want to integrate the system with Readylog to model the belief states and reason about their current quality.

The Fawkes-related code is available in the timn/pcl-db-merge and abdon/tabletop-recognition branches. The planning code is in the hybris_c1-experimental branch of the alufr-ros-pkg repository.

Posted by Tim Niemueller on March 18, 2013 14:59