
DESKBOT Update 7 - Final Update!

Final Overview & Results

The DeskBot system was set up on an experimental desk testbed. The Intel RealSense camera was mounted directly above the center of the table at a height of 30 inches. The right corner of the table is the robot's initial resting location. Three major modules were used, as explained in the previous updates:

- Scene Segmentation and Object Detection
- Robot Path Planning and Object Manipulation
- Coverage Path Planning

The results for each of these modules are detailed below.

Scene Segmentation and Object Detection: To help the DeskBot system perceive and observe its environment, a YOLO object detector was implemented to classify 5 classes (pencils, erasers/rubbers, pens, staplers, remotes). YOLO was successfully implemented with an accuracy of 90%. A dataset was also created and annotated for the same classes of objects for training. As future work, more desk/workspace objects could be trained to be seamlessly decluttered an
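To make the detection step concrete, here is a minimal sketch of how raw YOLO detections for the five DeskBot classes might be filtered by confidence and de-duplicated with IoU-based non-maximum suppression. The box format, thresholds, and class list are illustrative assumptions, not the project's actual code.

```python
CLASSES = ["pencil", "eraser", "pen", "stapler", "remote"]

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def filter_detections(dets, conf_thresh=0.5, nms_thresh=0.4):
    """dets: list of (box, confidence, class_id). Returns kept detections."""
    dets = [d for d in dets if d[1] >= conf_thresh]
    dets.sort(key=lambda d: d[1], reverse=True)   # highest confidence first
    kept = []
    for d in dets:
        # suppress boxes that heavily overlap a higher-confidence box
        # of the same class
        if all(iou(d[0], k[0]) < nms_thresh or d[2] != k[2] for k in kept):
            kept.append(d)
    return kept
```

The same filtering shape applies whatever framework produced the boxes, which keeps the perception module's output stable for the path-planning stage downstream.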

DESKBOT Update 6

Robot Path Planning for Object Manipulation

Over the past several weeks we've shown how the DeskBot system performs scene segmentation and object detection. We have also covered the Hamster robot and its role. In this update, let's dive into the robot operations.

Once the system is capable of perceiving the scene and detecting all the objects present in it, we can enable our robot with multiple behaviors aimed at manipulating an object and pushing it towards a goal. From the Scene Segmentation and Object Detection module we get the following:

- Position and centroid of the robot
- Positions and centroids of the detected unique objects
- Table dimensions and locations of the object-holder goal points
- Euclidean distance from each object to its respective goal holder

The figure below shows a technical overview of how robot-object manipulation is achieved. Using all the above information from the scene segmentation module we can obtain closed loop vision fe
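The quantities listed above can be sketched as a small feedback computation: given the robot's pose and a target centroid from the camera, compute the distance and signed heading error, then turn them into differential wheel speeds. The function names and proportional gain are illustrative assumptions, not the project's actual controller.

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y) centroids in pixels."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def heading_error(robot_pose, target):
    """robot_pose = (x, y, theta). Returns the signed angle (radians) the
    robot must turn to face the target, wrapped to [-pi, pi]."""
    x, y, theta = robot_pose
    desired = math.atan2(target[1] - y, target[0] - x)
    err = desired - theta
    return math.atan2(math.sin(err), math.cos(err))  # wrap to [-pi, pi]

def wheel_speeds(robot_pose, target, base=40, k_turn=30.0):
    """Simple proportional steering: turn toward the target while
    moving forward; returns (left, right) wheel speeds."""
    err = heading_error(robot_pose, target)
    return base - k_turn * err, base + k_turn * err
```

Recomputing these values on every camera frame is what closes the vision feedback loop: segmentation errors in one frame are corrected in the next.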

DESKBOT Update 5

YOLO - You Only Look Once

Other methods use classifiers or localizers to perform detection. They apply the model to an image at multiple locations and scales, and the high-scoring regions of the image are considered detections.

In YOLO the approach is quite different: a single neural network is applied to the full image. The network divides the image into regions and predicts bounding boxes for each region, and these bounding boxes are weighted by the predicted probabilities.

We originally planned to use Mask R-CNN to get better accuracy, but due to the COVID-19 situation and working from home we do not have a good computing system or GPU resources, so we have chosen to work with YOLO instead. YOLO is faster and gives bounding boxes for the detected objects.

The standard YOLO model is trained on the COCO dataset, which covers 80 different classes. YOLO is not trained for detecting pens, staplers, erasers and many other objects which we are conc
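The grid-based idea described above can be illustrated with two tiny helper functions: mapping a box center to the grid cell responsible for predicting it, and combining objectness with a class probability into a final score. The grid size and input resolution follow the original YOLOv1 paper (7x7 grid, 448x448 input) and are only for illustration.

```python
S = 7  # YOLOv1 divides the full image into a 7x7 grid

def responsible_cell(cx, cy, img_w, img_h, s=S):
    """Return the (row, col) of the grid cell containing the box
    center (cx, cy); that cell is responsible for predicting the box."""
    col = min(int(cx / img_w * s), s - 1)
    row = min(int(cy / img_h * s), s - 1)
    return row, col

def class_score(objectness, class_prob):
    """Final per-class box score: Pr(object) * Pr(class | object)."""
    return objectness * class_prob
```

Because every cell's predictions come from one forward pass over the whole image, YOLO avoids the repeated per-region evaluations that make classifier-based detectors slow.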

DESKBOT Update 4

The Hamster Robot and Coverage Path Planning

Hamster Robot: The Hamster robot is a small and lovely robot for software education. It includes various devices, as shown in the following figures. The Hamster robot can be programmed and controlled in various languages such as Python, C, and the Processing IDE. For our implementation, we plan to program the robot using Python. The Hamster robot will be used in three ways:

- For locomotion, i.e. to push objects to their slots during cleaning and to move around during Coverage Path Planning (CPP). This requires control of the DC motors.
- For detecting the edge of the workspace (the desk in our case). It is crucial that the robot does not fall off the desk during locomotion; as a safety measure, the infrared floor sensors will be constantly polled to look for the edge of the workspace and stop the robot.
- For detecting whether there is contact between the robot and an object. The proximity sensors will be used and wil
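For the CPP behavior mentioned above, a common starting point is a boustrophedon (back-and-forth) sweep: the desk is covered in parallel lanes, each traversed in the opposite direction to the last. The sketch below generates such waypoints for a rectangular desk; the dimensions and lane width are illustrative, and the real system would derive them from the camera's table measurements.

```python
def coverage_waypoints(width, height, lane):
    """Back-and-forth lanes across a width x height desk.
    Each lane is a pair of (x, y) waypoints, alternating direction."""
    points, y, left_to_right = [], lane / 2, True
    while y < height:
        xs = (0, width) if left_to_right else (width, 0)
        points.append((xs[0], y))   # lane start
        points.append((xs[1], y))   # lane end
        left_to_right = not left_to_right
        y += lane
    return points
```

During execution, the waypoint follower would still poll the infrared floor sensors between moves, so the edge-detection safety check overrides the planned path if the robot ever reaches the desk boundary early.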

DESKBOT UPDATE 3

Scene Segmentation and Object Detection

The robot has to know its environment before taking an action, so a sensor is required to perceive the environment and identify what exists in it. In our case, we use a 2D camera to determine where and how the objects and the robot are positioned. We use Mask R-CNN to perform instance segmentation and object detection, or YOLO for object detection alone.

Mask R-CNN is divided into two modules: first, it estimates the regions where objects may exist in the input image; second, based on this initial estimation, it identifies the class of each object and generates a pixel-level mask. In the initial step, the RPN (Region Proposal Network) scans the FPN (Feature Pyramid Network) levels in a top-down approach and estimates where objects exist in the input image. Once the estimation is done, a bounding box is assigned to the anchor (anchors are a set of boxes with predefined locations). The RPN helps the anchor decide where in the feature map
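The anchors mentioned above are simply boxes laid out at predefined locations over the feature map before the network scores them. The sketch below generates such a grid of anchors; the stride, scales, and aspect ratios are example values, not the exact configuration any particular Mask R-CNN implementation uses.

```python
def make_anchors(feat_w, feat_h, stride=16, scales=(32, 64), ratios=(1.0,)):
    """Center one anchor per scale/ratio on every feature-map location.
    Returns boxes as (x1, y1, x2, y2) in input-image coordinates."""
    anchors = []
    for j in range(feat_h):
        for i in range(feat_w):
            # map the feature-map cell back to image coordinates
            cx, cy = (i + 0.5) * stride, (j + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w, h = s * r ** 0.5, s / r ** 0.5
                    anchors.append((cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2))
    return anchors
```

The RPN then predicts, for each anchor, an objectness score and a small offset that refines the anchor into a proposal box.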

DESKBOT UPDATE 2

Project Overview

Project Goals:
- Perform image segmentation and object detection in a cluttered environment.
- Enable multiple robot behaviours capable of aligning with and pushing objects (within a payload limit).
- Enable the robot to clean the surface using coverage path planning.

DeskBot System Workspace:

DeskBot Overview: DeskBot offers two task options on the Graphical User Interface (GUI): 1) De-Clutter, which de-clutters the table by putting all the objects lying on it in their respective places, and 2) CleanOMatic, which works only when there are no objects lying on the table. The robot is fitted with a 3D-printed brush, and a defined map helps the robot move around the table to sweep up dust particles and dump them into a bin attached to the table. The figure below shows an overview of the process. First, the GUI gives two options: 1) De-Clutter, 2) CleanOMatic. Once the power is on, the camera is initiated and it perceives the environment. In our case the of
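The two-option dispatch described above, including the rule that CleanOMatic only runs on a clear table, can be sketched as a tiny routing function. The option strings come from the post; the return values and function name are placeholders, not the actual DeskBot modules.

```python
def run_task(option, objects_on_table):
    """Route the GUI choice: De-Clutter moves objects to their holders;
    CleanOMatic is only allowed when the table is already clear."""
    if option == "De-Clutter":
        return "declutter"
    if option == "CleanOMatic":
        if objects_on_table:
            return "refused: clear the table first"
        return "coverage-clean"
    raise ValueError(f"unknown option: {option}")
```

Guarding CleanOMatic behind the perception result keeps the brushing pass from colliding with objects the camera still sees on the desk.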