
CNIT 581 - Software Design and Development for Robotics


This is a blog for the course CNIT 581 Software Design and Development in Robotics, taught by Prof. Byung-Cheol Min at Purdue University in Spring 2020.

Project Name: DESKBOT
The aim of this project is to develop a robotic system that arranges and declutters a typical office desk environment.

[Image: Anki Vector robot on a table]

Key Components of the project:

  • Robot motion planning and execution
  • Machine vision to observe all objects in the environment
Team Members:
  • Hitesh V Gokaraju
  • Vishnunandan LN Venkatesh


Popular posts from this blog

DESKBOT Update 7 - Final Update !

Final Overview & Results

The DeskBot system was set up on an experimental desk testbed. The Intel RealSense camera was mounted directly above the center of the table at a height of 30 inches. The right corner of the table is the robot's initial resting location. Three major modules were used, as explained in the previous updates:
  • Scene Segmentation and Object Detection
  • Robot Path Planning and Object Manipulation
  • Coverage Path Planning
The results pertaining to each of these modules are detailed below. Scene Segmentation and Object Detection: to help the DeskBot system perceive and observe its environment, the YOLO object detector was implemented to classify 5 classes (pencils, erasers/rubbers, pens, staplers, remotes). YOLO was successfully implemented with an accuracy of 90%. A dataset was also created and annotated for the same classes of objects for training. As future work, more desk/workspace objects could be trained to be seamlessly declu...
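The 90% figure above is an overall detection accuracy. As a minimal sketch of how such a number can be computed from predicted vs. ground-truth labels (the sample labels below are illustrative, not the project's evaluation data):

```python
from collections import defaultdict

def per_class_accuracy(true_labels, pred_labels):
    """Overall and per-class accuracy for paired label lists.

    Illustrative sketch only; assumes one predicted label per ground-truth
    detection, which is a simplification of a full detection benchmark.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(true_labels, pred_labels):
        total[t] += 1
        correct[t] += (t == p)
    overall = sum(correct.values()) / len(true_labels)
    return overall, {c: correct[c] / total[c] for c in total}

# Hypothetical labels for three of the five classes:
truth = ["pen", "pen", "pencil", "stapler"]
preds = ["pen", "pencil", "pencil", "stapler"]
print(per_class_accuracy(truth, preds))  # -> (0.75, {'pen': 0.5, 'pencil': 1.0, 'stapler': 1.0})
```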

DESKBOT Update 6

Robot Path Planning for Object Manipulation

Over the past several weeks we've shown how the DeskBot system performs scene segmentation and object detection. We have also covered the Hamster robot and its role. In this update, let's dive into the robot operations. Once the system is capable of perceiving the scene and detecting all the objects present in it, we can enable our robot with multiple behaviors aimed at manipulating an object and pushing it towards a goal. From the Scene Segmentation and Object Detection module we get the following:
  • Position and centroid of the robot
  • Positions and centroids of the detected unique objects
  • Table dimensions and locations of the object-holder goal points
  • Euclidean distance from each object to its respective goal holder
The figure below shows a technical overview of how robot-object manipulation is achieved. Using all the above information from the scene segmentation module, we can obtain closed-loop vision fe...
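The Euclidean distances listed above can be computed directly from the centroids the vision module reports. A small sketch, assuming hypothetical goal-holder names and pixel coordinates (not the project's actual code):

```python
import math

def nearest_goal(obj_centroid, goal_points):
    """Return the (name, distance) of the goal holder closest to an object.

    obj_centroid: (x, y) pixel coordinates from the scene-segmentation module.
    goal_points: dict mapping a goal-holder name to its (x, y) location.
    """
    name, (gx, gy) = min(
        goal_points.items(),
        key=lambda item: math.hypot(obj_centroid[0] - item[1][0],
                                    obj_centroid[1] - item[1][1]),
    )
    return name, math.hypot(obj_centroid[0] - gx, obj_centroid[1] - gy)

# Example: a pen detected at (120, 80) and two hypothetical holders
goals = {"pen_holder": (100, 60), "stapler_bay": (300, 200)}
print(nearest_goal((120, 80), goals))  # -> ('pen_holder', 28.284271247461902)
```

With these distances, the robot's pushing behavior can simply target the nearest holder for each detected object.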

DESKBOT Update 5

YOLO - You Only Look Once

Other methods use classifiers or localizers to perform detection: they apply the model to an image at multiple locations and scales, and the high-scoring regions of the image are treated as detections. In YOLO the approach is quite different: a single neural network is applied to the full image; the network divides the image into regions and predicts bounding boxes, each weighted by a predicted probability. We had planned to use Mask R-CNN for better accuracy, but due to the COVID-19 situation and working from home we do not have a good computing system or GPU resources, so we have chosen to work with YOLO instead. YOLO is faster and still gives bounding boxes for the detected objects. YOLO works on the COCO dataset, which is trained on 80 different classes. YOLO is not trained for detecting p...
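YOLO-style detectors typically emit many overlapping candidate boxes, which are then filtered by a confidence threshold and non-maximum suppression. A rough, self-contained sketch of that post-processing step (the thresholds and boxes here are illustrative, not the project's settings):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, conf_thresh=0.5, iou_thresh=0.4):
    """Greedy non-maximum suppression over (box, confidence) pairs:
    drop low-confidence boxes, then keep each remaining box only if it
    does not overlap an already-kept box too much."""
    dets = sorted((d for d in detections if d[1] >= conf_thresh),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for box, conf in dets:
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, conf))
    return kept

# Two heavily overlapping boxes plus one low-confidence box:
print(nms([((0, 0, 10, 10), 0.9), ((1, 1, 11, 11), 0.8), ((20, 20, 30, 30), 0.3)]))
# -> [((0, 0, 10, 10), 0.9)]
```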