
DESKBOT UPDATE 2

Project Overview

Project Goals:
  • Perform image segmentation and object detection in a cluttered environment.
  • Enable multiple robot behaviours for aligning with and pushing objects (within the robot's payload limit).
  • Enable the robot to clean the table surface using coverage path planning.

DeskBot System Workspace:

[Figure: DeskBot system workspace]

DeskBot Overview:

DeskBot offers two task options on its graphical user interface (GUI): 1) De-Clutter, which de-clutters the table by moving every object lying on it to its designated place, and 2) CleanOMatic, which works only when there are no objects lying on the table. The robot is fitted with a 3D-printed brush, and a predefined map guides it around the table so it can sweep up the dust particles and dump them into a bin attached to the table.
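As a rough illustration, the sketch below shows a minimal two-button task selector of the kind described above, assuming a Tkinter front end; the widget layout and the callback names (run_declutter, run_cleanomatic) are hypothetical placeholders, not the actual DeskBot GUI code.

# Minimal sketch of the two-option task GUI (hypothetical callbacks;
# the real DeskBot GUI may be implemented differently).
import tkinter as tk

def run_declutter():
    print("De-Clutter selected: moving objects to their places...")

def run_cleanomatic():
    print("CleanOMatic selected: sweeping the table...")

root = tk.Tk()
root.title("DeskBot")
tk.Label(root, text="Select a task:").pack(pady=5)
tk.Button(root, text="De-Clutter", command=run_declutter).pack(fill="x", padx=10, pady=2)
tk.Button(root, text="CleanOMatic", command=run_cleanomatic).pack(fill="x", padx=10, pady=2)
root.mainloop()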

The figure below shows an overview of the process. The GUI first presents two options: 1) De-Clutter and 2) CleanOMatic. Once the system is powered on, the camera is initialised and perceives the environment, in our case an office table with objects and the Hamster robots. When De-Clutter is selected, the robot plans a path from its current location to push each object to its end location. When CleanOMatic is selected, the robot cleans the dust on the table with the 3D-printed brush mounted on it, following a predefined path over the map so that all the dust is swept up and dumped into the dustbin attached to one end of the table.

[Figure: DeskBot process overview]

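A minimal sketch of this top-level control flow is given below; every function name is a hypothetical stand-in for the real perception, object-manipulation, and coverage-path-planning modules, not DeskBot's actual code.

# Sketch of the top-level DeskBot flow. All names below are hypothetical
# stand-ins for the real perception, manipulation, and CPP modules.

def perceive_scene(frame):
    """Stub: would run the detector (e.g. Mask R-CNN or YOLO) on the frame."""
    return []  # list of detected objects with their poses

def declutter(objects, robot):
    """Stub: plan a push path for each object and drive it to its place."""
    for obj in objects:
        print(f"Pushing {obj} to its designated place")

def clean_table(robot):
    """Stub: follow a coverage path, brushing dust toward the attached bin."""
    print("Sweeping the table along the coverage path")

def run_task(task, frame, robot=None):
    objects = perceive_scene(frame)
    if task == "De-Clutter":
        declutter(objects, robot)
    elif task == "CleanOMatic":
        if objects:  # CleanOMatic is only valid on a clear table
            print("Table not clear; run De-Clutter first")
        else:
            clean_table(robot)
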
Components Required:

Hardware:
  • 2D camera
  • Linux-based system
  • Hamster robots (2)
  • Office table
  • Miscellaneous objects (pens, markers, books, etc.)

Software:
  • Robot Operating System (ROS)
  • Python
  • TensorFlow, Keras, PyTorch
  • OpenCV
  • GUI interface


Timeline:
  • Literature Survey & Proposal - March 12
  • Object Detection and Mask R-CNN - March 26
  • Coverage Path Planning - April 9
  • Path Planning for Object Position Manipulation - April 23
  • Final Integration and Final Report - April 30
