Final Overview & Results The DeskBot system was setup up on an experimental desk testbed. The Intel Real sense camera was mounted directly above the center of the table at a distance of 30 inches. The right corner of the table is the robots initial resting location. Three major modules were used as explained in the updates previously, Scene Segmentation and Object Detection Robot Path Planning and Object Manipulation Coverage Path Planning The results pertaining to each of these modules are detailed below, Scene Segmentation and Object Detection To help the DeskBot system perceive and observe its environment YOLO object detector was implemented to classify 5 classes (pencil, erasers/rubbers, pens, staplers, remotes). YOLO was successfully implemented with an accuracy of 90%. A dataset was also created an annotated for the same calss of objects for training. As a future work more desk/workspace objects could be trained to be seamlessly decluttered an
Robot Path Planning for Object Manipulation

Over the past several weeks we've shown how the DeskBot system performs scene segmentation and object detection. We have also covered the Hamster robot and its role. In this update, let's dive into the robot operations. Once the system is capable of perceiving the scene and detecting all the objects present in it, we can enable our robot with multiple behaviors aimed at manipulating an object and pushing it towards a goal. From the Scene Segmentation and Object Detection module we get the following:

- Position and centroid of the robot
- Positions and centroids of the detected unique objects
- Table dimensions and locations of the object-holder goal points
- Euclidean distance from each object to its respective goal holder

The figure below shows a technical overview of how robot-object manipulation is achieved. Using all the above information from the scene segmentation module, we can obtain closed-loop vision feedback.
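As a sketch of how that feedback could drive the pushing behavior, the controller below recomputes a differential-drive wheel command every frame: it aims the robot at a point just behind the object on the object-to-goal line, so that driving forward pushes the object toward its holder. The gains, speeds, offsets, and the `hamster.wheels(...)` call are assumptions for illustration, not the project's tuned implementation.

```python
import math

def push_step(robot_xy, robot_heading, obj_xy, goal_xy,
              k_turn=1.5, base_speed=30):
    """One closed-loop step: steer toward a point behind the object so
    driving forward pushes it along the object-to-goal line. All gains
    and thresholds are illustrative assumptions."""
    dx, dy = goal_xy[0] - obj_xy[0], goal_xy[1] - obj_xy[1]
    dist_to_goal = math.hypot(dx, dy)
    if dist_to_goal < 5:                 # pixels: object reached the holder
        return 0, 0                      # stop both wheels
    ux, uy = dx / dist_to_goal, dy / dist_to_goal
    target = (obj_xy[0] - 40 * ux,       # 40 px behind the object,
              obj_xy[1] - 40 * uy)       # opposite the goal direction

    # Heading error between robot orientation and bearing to the target,
    # wrapped to [-pi, pi].
    bearing = math.atan2(target[1] - robot_xy[1], target[0] - robot_xy[0])
    err = math.atan2(math.sin(bearing - robot_heading),
                     math.cos(bearing - robot_heading))

    # Proportional differential-drive command: turn harder for larger error.
    left = base_speed - k_turn * err * base_speed
    right = base_speed + k_turn * err * base_speed
    return left, right

# Per camera frame (centroids refreshed by the detector), e.g.:
# left, right = push_step(robot_c, heading, pencil_c, holder_c)
# hamster.wheels(left, right)   # assumed Hamster robot API call
```

Because the centroids are re-detected on every frame, errors from wheel slip or from the object sliding off the push face are corrected continuously rather than accumulating, which is what makes the loop "closed" through vision.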