
DESKBOT Update 7 - Final Update!

Final Overview & Results 


The DeskBot system was set up on an experimental desk testbed. The Intel RealSense camera was mounted directly above the center of the table at a height of 30 inches, and the right corner of the table serves as the robots' initial resting location. Three major modules were used, as explained in the previous updates (a minimal frame-capture sketch follows the list):
  • Scene Segmentation and Object Detection
  • Robot Path Planning and Object Manipulation
  • Coverage Path Planning
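All three modules operate on color frames from the overhead camera. The sketch below shows a minimal way to grab such frames, assuming the `pyrealsense2` package; the stream resolution and frame rate here are illustrative, not necessarily the project's actual settings.

```python
# Minimal sketch of grabbing color frames from the overhead RealSense.
# Assumes the pyrealsense2 package; resolution/FPS are illustrative.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    color_frame = frames.get_color_frame()
    # Convert to a NumPy array for OpenCV / YOLO processing downstream.
    image = np.asanyarray(color_frame.get_data())
    print(image.shape)  # (480, 640, 3)
finally:
    pipeline.stop()
```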

The results for each of these modules are detailed below.

Scene Segmentation and Object Detection

To help the DeskBot system perceive and observe its environment, a YOLO object detector was implemented to classify 5 classes (pencils, erasers/rubbers, pens, staplers, and remotes). YOLO was successfully implemented with an accuracy of 90%. A dataset was also created and annotated for the same classes of objects for training. As future work, more desk/workspace objects could be added to the training set so that DeskBot can seamlessly declutter and handle them. The robots in the scene were also successfully detected, with an accuracy of 100%, using AR markers and the ArUco Python bindings. The figure below shows example output detections from the YOLO object detector and the robot detector.
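As a rough illustration of the robot-detection step, the sketch below uses the classic `cv2.aruco` API from `opencv-contrib-python` (renamed in OpenCV 4.7+); the dictionary choice, input frame, and the mapping from marker IDs to robots are assumptions, not the project's actual configuration.

```python
# Sketch of robot detection via OpenCV's ArUco bindings.
import cv2

aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)  # assumed dictionary
params = cv2.aruco.DetectorParameters_create()

frame = cv2.imread("desk_frame.png")            # a color frame from the camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)
if ids is not None:
    # Each marker's centroid gives a robot's position on the desk;
    # the ordering of its corners gives the robot's heading.
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        cx, cy = marker_corners[0].mean(axis=0)
        print(f"robot {marker_id} at ({cx:.0f}, {cy:.0f})")
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)
```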

Robot Path Planning:

Two Hamster robots were successfully integrated into the DeskBot system and served as the heroes of the entire system. The robots were controlled using the Roboid Python package and always moved at constant speeds, with fixed speeds set for pushing and turning.
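A minimal sketch of this control scheme, assuming the Roboid package's Hamster API (`wheels()`, the IR floor-sensor reads, and `wait()`); the speed values and edge threshold here are illustrative, not the tuned values used in the trials.

```python
# Constant-speed push/turn control sketch, with the floor sensors polled
# so the robot never drives off the desk edge (as in Update 4).
from roboid import *

PUSH_SPEED = 30      # both wheels forward at a fixed speed to push (assumed)
TURN_SPEED = 20      # wheels in opposite directions to turn in place (assumed)
EDGE_THRESHOLD = 20  # floor-sensor reading that signals the desk edge (assumed)

hamster = Hamster()

def push_forward(ms):
    """Push straight ahead for up to `ms` milliseconds, stopping at the edge."""
    hamster.wheels(PUSH_SPEED, PUSH_SPEED)
    elapsed = 0
    while elapsed < ms:
        if hamster.left_floor() < EDGE_THRESHOLD or hamster.right_floor() < EDGE_THRESHOLD:
            break  # desk edge detected
        wait(20)
        elapsed += 20
    hamster.stop()

def turn_in_place(ms, clockwise=True):
    s = TURN_SPEED if clockwise else -TURN_SPEED
    hamster.wheels(s, -s)
    wait(ms)
    hamster.stop()
```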
The Jump Point Search (JPS) algorithm was implemented to obtain a path between each object and its goal location. Coverage path planning using breadth-first search (BFS) was also implemented successfully, as shown in the figure below.


Figure. An example run of the DeskBot system. Images are arranged in sequence as indicated by the arrows. a) The robots in their initial resting position, with objects (a pencil and a stapler) scattered on the table. In b), one robot moves towards the pencil to orient it along the path, as shown in c); the pencil holder is the green box on the right corner. Once the pencil is reoriented as in c), the second robot positions itself at the other end of the pencil to push it, as shown in d) and e). Once the pencil is placed, the robots return to their initial resting position, as shown in f). The same operation follows for the second object, the stapler. g) shows the initial state of coverage path planning, with the robot on the left positioning itself to begin; h) shows the robot during the coverage task; and i) shows the robot returning to its initial position after dumping the dust in the corner next to the stapler holder on the left. The sequence of images from a) to i) shows the entire DeskBot operation.
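As a sketch of the coverage module, the function below computes a BFS visit order over a grid of desk cells. The grid layout and start cell are assumptions for illustration, and the JPS planner used for the object-to-goal paths is omitted for brevity.

```python
# BFS-based coverage ordering over the desk grid.
from collections import deque

def bfs_coverage_order(grid, start):
    """Return an order visiting every free cell, expanding outward
    from `start` breadth-first. grid[r][c] == 0 means a free cell."""
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        order.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return order

# Example: a 4x6 desk grid with one blocked cell (e.g. the pencil holder).
desk = [[0] * 6 for _ in range(4)]
desk[0][5] = 1
print(bfs_coverage_order(desk, (3, 0))[:5])
```

Note that consecutive cells in a BFS order are not always adjacent, so in practice the robot moves between them with short point-to-point moves.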


The entire DeskBot system was tested in 10 trials with either 2 or 3 objects cluttered on the table, and it decluttered and cleaned the table successfully every time. The figure above shows an example of the DeskBot's modes of operation. One limitation was the amount of time taken to accomplish the task: on average, DeskBot took 3 minutes to declutter 2 objects and 7 minutes to declutter 3 objects, and the coverage path planning used to clean the desk took about 2 minutes. Most of the time was consumed by the robots manipulating an object to orient it; another factor was a robot adjusting its own orientation after skidding on the surface. This makes DeskBot a very time-consuming companion, but nevertheless a companion that ensures your desk's hygiene.

Limitations:
  • One of the major limitations was that object orientation could not be fully recovered, since YOLO only outputs axis-aligned bounding boxes.
  • No obstacle avoidance was considered in path planning.
  • Finer control was needed to orient an object towards its goal.
  • Sensitivity to lighting conditions.
Future Work:
  • Using Mask R-CNN for object detection and segmentation
  • Using a modified version of A* path planning with JPS to deal with obstacles on the map (a minimal grid A* sketch follows this list)
  • Training objects under varying lighting conditions
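
As a sketch of the planned obstacle-aware planner, below is a minimal grid A* with a Manhattan heuristic and 4-connectivity; a JPS-accelerated variant would prune successors to jump points rather than expanding every neighbor. The grid representation and names are illustrative.

```python
# Minimal grid A* sketch for the planned obstacle-aware planner.
import heapq

def astar(grid, start, goal):
    """Shortest path on a grid where grid[r][c] == 0 is free, 1 is an obstacle."""
    rows, cols = len(grid), len(grid[0])
    open_heap = [(0, start)]
    g = {start: 0}
    parent = {}
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in parent:
                cur = parent[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    parent[nxt] = cur
                    # Manhattan distance is admissible on a 4-connected grid.
                    h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                    heapq.heappush(open_heap, (ng + h, nxt))
    return None  # no path around the obstacles
```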

Demo Video:
