Final Overview & Results
The DeskBot system was set up on an experimental desk testbed. The Intel RealSense camera was mounted directly above the center of the table at a distance of 30 inches, and the right corner of the table serves as the robots' initial resting location. Three major modules were used, as explained in the previous updates:
- Scene Segmentation and Object Detection
- Robot Path Planning and Object Manipulation
- Coverage Path Planning
The results pertaining to each of these modules are detailed below.
Scene Segmentation and Object Detection
To help the DeskBot system perceive its environment, a YOLO object detector was implemented to classify five classes (pencils, erasers/rubbers, pens, staplers, remotes). YOLO was implemented successfully, achieving an accuracy of 90%. A dataset was also created and annotated for the same classes of objects for training. As future work, more desk/workspace object classes could be trained so that DeskBot can declutter and handle them seamlessly. The robots in the scene were also detected successfully, with an accuracy of 100%, using AR markers and the ArUco Python bindings. The figure below shows example output from the YOLO object detector and the robot detector.
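To make the detection step concrete, here is a minimal inference sketch. The post does not say which YOLO implementation or weights file was used, so the ultralytics package and the file name deskbot_best.pt below are illustrative assumptions only.

```python
from ultralytics import YOLO  # assumed package; the project may have used another YOLO variant

# Hypothetical: weights trained on the five DeskBot classes.
model = YOLO("deskbot_best.pt")  # assumed file name

def detect_objects(frame):
    """Return (class_name, confidence, [x1, y1, x2, y2]) per detection."""
    result = model(frame)[0]
    return [
        (model.names[int(box.cls)], float(box.conf), box.xyxy[0].tolist())
        for box in result.boxes
    ]
```

The robot detector can likewise be sketched with OpenCV's ArUco bindings. This assumes opencv-contrib-python with the classic aruco module (OpenCV >= 4.7 wraps the same calls in cv2.aruco.ArucoDetector); the marker dictionary is also an assumption.

```python
import cv2
import numpy as np

ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)  # assumed dictionary

def detect_robots(frame):
    """Return {marker_id: (center_xy, heading_rad)} for each visible robot."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
    robots = {}
    if ids is None:
        return robots
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        pts = marker_corners.reshape(4, 2)     # corner order: TL, TR, BR, BL
        center = pts.mean(axis=0)
        edge = pts[1] - pts[0]                 # top edge used as a heading proxy
        robots[int(marker_id)] = (center, float(np.arctan2(edge[1], edge[0])))
    return robots
```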
Robot Path Planning and Object Manipulation
Two Hamster robots were successfully integrated into the DeskBot system and served as the heroes of the entire system. The robots were controlled using the Roboid Python package and always moved at constant speeds, set separately for pushing and turning.
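The motion primitives can be sketched as below. This is a hedged sketch assuming the roboid package's Hamster API (Hamster(), wheels(left, right), stop(), and a module-level wait(ms)); names may differ across versions, and the speed values are placeholders rather than the ones used in the project.

```python
from roboid import *  # assumed: pip install roboid

PUSH_SPEED = 30  # placeholder constant wheel speed for pushing
TURN_SPEED = 20  # placeholder constant wheel speed for turning in place

hamster = Hamster()  # connects to a paired Hamster robot

def push_forward(duration_ms):
    """Drive both wheels at the fixed pushing speed for duration_ms."""
    hamster.wheels(PUSH_SPEED, PUSH_SPEED)
    wait(duration_ms)
    hamster.stop()

def turn_in_place(duration_ms, clockwise=True):
    """Spin at the fixed turning speed by driving the wheels in opposite directions."""
    if clockwise:
        hamster.wheels(TURN_SPEED, -TURN_SPEED)
    else:
        hamster.wheels(-TURN_SPEED, TURN_SPEED)
    wait(duration_ms)
    hamster.stop()
```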
The Jump Point Search (JPS) algorithm was implemented to obtain a path between each object and its goal location. Coverage path planning using breadth-first search (BFS) was also implemented successfully, as shown in the figure below.
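The coverage step can be sketched as a plain BFS over a grid discretization of the desk. This is a minimal sketch, assuming a boolean occupancy grid where True marks a cell the robot cannot enter; the BFS order guarantees every reachable free cell is visited once. Consecutive cells in the order are not always adjacent, so a real controller would insert short connecting moves between them.

```python
from collections import deque

def bfs_coverage_order(grid, start):
    """Return a visit order covering every reachable free cell of `grid`
    (True = blocked), starting from `start` = (row, col)."""
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        order.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return order
```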
Figure. An example run of the DeskBot system. The images are arranged in sequence, as indicated by the arrows. a) The robots are in their initial resting position with objects (a pencil and a stapler) scattered on the table. In b), one robot moves toward the pencil to orient it along the path, as shown in c). The pencil holder is the green box in the right corner. Once the pencil is reoriented as shown in c), the second robot positions itself on the other edge of the pencil to push it, as shown in d) and e). Once the pencil is placed, the robots return to their initial resting position, as shown in f). The same operation follows for the second object, the stapler. g) shows the initial state of coverage path planning, with the robot on the left positioning itself to begin. h) shows the robot during the coverage task, and i) shows the robot returning to its initial position after dumping the dust in the corner next to the stapler holder on the left. The sequence of images from a) to i) shows the entire DeskBot operation.
The entire DeskBot system was tested in 10 trials with either two or three objects cluttered on the table. The system decluttered and cleaned the table successfully every time; the figure above shows an example of the DeskBot's modes of operation. One limitation was the amount of time taken to accomplish the task: on average, DeskBot took 3 minutes to declutter two objects and 7 minutes to declutter three objects, and the coverage path planning intended to clean the desk took 2 minutes on average. Most of the time was consumed by the robot manipulating an object to orient it; another factor was the robot adjusting its own orientation after skidding on the surface. This makes DeskBot a rather time-consuming companion, but nevertheless a companion that ensures your desk's hygiene.
Limitations:
- A major limitation was that object orientation could not be estimated precisely, since YOLO provides only axis-aligned bounding boxes.
- No obstacle avoidance is considered in path planning.
- More control was needed to orient the object toward its goal.
- Sensitivity to lighting conditions.
Future Work:
- Use Mask R-CNN for object detection and segmentation.
- Use a modified version of A* path planning with JPS to deal with obstacles on the map (see the sketch below).
- Train on objects under varying lighting conditions.
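To illustrate the planning direction above, here is a minimal sketch of plain A* on a 4-connected occupancy grid with a Manhattan heuristic; it is not the project's implementation. JPS accelerates exactly this search by jumping over runs of open cells instead of expanding every neighbor individually.

```python
import heapq

def astar(grid, start, goal):
    """Shortest path on a 4-connected grid (True = obstacle), or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g.get(cur, float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]]
                    and cost + 1 < g.get(nxt, float("inf"))):
                g[nxt] = cost + 1
                came_from[nxt] = cur
                heapq.heappush(open_heap, (cost + 1 + h(nxt), cost + 1, nxt))
    return None
```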