
DESKBOT Update 6

Robot Path Planning for Object Manipulation

Over the past several weeks we've shown how the DeskBot system performs scene segmentation and object detection, and we have also covered the Hamster robot and its role. In this update, let's dive into the robot operations.

Once the system can perceive the scene and detect all the objects present in it, we can give our robot a set of behaviors aimed at manipulating an object and pushing it towards a goal. From the Scene Segmentation and Object Detection module we get the following:

  • Position and centroid of each robot
  • Positions and centroids of the detected unique objects
  • Table dimensions and locations of the object-holder goal points
  • Euclidean distance from each object to its respective goal holder (see the sketch below)
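
To make these inputs concrete, here is a minimal Python sketch of how they might be represented and how the last item is computed. The names and pixel values are illustrative stand-ins, not the module's actual output format:

    import math

    # Toy stand-ins for the segmentation module's outputs (assumed values).
    robot_centroid = (120, 340)                          # (x, y) in image pixels
    object_centroids = {"pen": (410, 220), "eraser": (250, 505)}
    goal_points = {"pen": (600, 60), "eraser": (600, 140)}

    def euclidean(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # Euclidean distance from each object to its respective goal holder.
    distances = {name: euclidean(c, goal_points[name])
                 for name, c in object_centroids.items()}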

The figure below shows a technical overview of how robot-object manipulation is achieved. Using the information above from the scene segmentation module, we obtain closed-loop vision feedback from the scene and then apply path planning so that a robot can push an object from its position to the goal along the generated path. Path planning is achieved through Jump Point Search, which is explained in detail in the sections below.


Jump Point Search

Jump Point Search (JPS) is an optimization of the A* pathfinding algorithm for uniform-cost grids. It reduces symmetries in the search through graph pruning, eliminating certain nodes in the grid based on assumptions that can be made about the current node's neighbors, as long as certain conditions relating to the grid are satisfied. As a result, the algorithm can take long jumps along straight (horizontal, vertical, and diagonal) lines in the grid, rather than the small steps from one grid position to the next as in A*. This makes the algorithm very fast and reduces the number of nodes it adds to its queue. Unlike greedy best-first search and A*, Jump Point Search [1] explores all directions until it finds the goal in one of them, which may at times make it slower than the other two algorithms, especially on very large maps.

Starting from a point S, JPS performs a local search over the four main cardinal directions. Along each direction, the algorithm searches for:
  • The goal
  • A point at which an optimal path changes direction, i.e. a node with a forced neighbor
If a direction ends in an obstacle, that direction is discarded and not explored any further. If a direction contains the goal or a forced neighbor, that node is added to the open list for further exploration.

Searching for a forced neighbor is a local process: at each step, only the eight nodes surrounding the current node are considered. When the goal is found, the jump points (the nodes in the open list) are connected through straight horizontal and vertical segments to produce the optimal path.
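
To make the procedure above concrete, here is a minimal Python sketch of a cardinal-only (4-direction) JPS of the kind described in this update. The grid representation and names are assumptions, not the DeskBot code, and because a purely cardinal search must also stop when a scan lines up with the goal's row or column, the sketch adds that stopping rule alongside the forced-neighbor check:

    import heapq, math

    def jps(grid, start, goal):
        """Minimal cardinal-only JPS sketch. grid: 2D list with 0 = free and
        1 = obstacle; start and goal are (row, col) tuples. Returns the list
        of jump points from start to goal, or None if no path exists."""
        rows, cols = len(grid), len(grid[0])
        free = lambda r, c: 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0

        def jump(r, c, dr, dc):
            # Scan along (dr, dc) until the goal or a forced neighbor is
            # found; a direction ending in an obstacle/edge is discarded.
            while True:
                pr, pc = r, c
                r, c = r + dr, c + dc
                if not free(r, c):
                    return None                   # direction discarded
                if (r, c) == goal:
                    return (r, c)                 # goal found on this line
                # Forced neighbor: a perpendicular cell that was blocked at
                # the previous step but is open here means an optimal path
                # may have to turn at (r, c).
                if dr != 0 and ((not free(pr, pc - 1) and free(r, c - 1)) or
                                (not free(pr, pc + 1) and free(r, c + 1))):
                    return (r, c)
                if dc != 0 and ((not free(pr - 1, pc) and free(r - 1, c)) or
                                (not free(pr + 1, pc) and free(r + 1, c))):
                    return (r, c)
                # Extra rule for the 4-direction variant: also stop once the
                # scan lines up with the goal's row/column so the search can
                # turn towards it.
                if r == goal[0] or c == goal[1]:
                    return (r, c)

        h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # Manhattan
        open_list = [(h(start), 0, start)]        # (f, g, node) min-heap
        parent, best_g = {start: None}, {start: 0}
        while open_list:
            _, g, node = heapq.heappop(open_list)
            if node == goal:                      # walk back through parents
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                jp = jump(node[0], node[1], dr, dc)
                if jp is None:
                    continue
                ng = g + abs(jp[0] - node[0]) + abs(jp[1] - node[1])
                if ng < best_g.get(jp, math.inf):
                    best_g[jp], parent[jp] = ng, node
                    heapq.heappush(open_list, (ng + h(jp), ng, jp))
        return None

The returned jump points are then connected by straight horizontal and vertical segments, exactly as described above.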

In our implementation of JPS, we set each node on the map to be approximately 1.5 times the size of the robot. The goal positions are fixed positions on the table, determined by the object holders' locations. The JPS algorithm finds a path between the object and the goal, and the robot then pushes the object along the way-points of that path until it reaches the goal.
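
As a rough sketch of that discretization, mapping between image coordinates and grid cells might look like the following (the robot's pixel footprint here is an assumed number, not a measured value):

    ROBOT_SIZE_PX = 40                  # assumed robot footprint in image pixels
    CELL_PX = int(1.5 * ROBOT_SIZE_PX)  # each grid node ~1.5x the robot size

    def pixel_to_cell(x, y):
        """Map an image coordinate from the segmentation module to a grid cell."""
        return (y // CELL_PX, x // CELL_PX)

    def cell_to_pixel(cell):
        """Map a grid cell back to its center pixel, used as a push way-point."""
        r, c = cell
        return (c * CELL_PX + CELL_PX // 2, r * CELL_PX + CELL_PX // 2)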


The figure shows an example search by the Jump Point Search algorithm.

Jump Point Search with Robot-Object Manipulation


Orient Object - One Hamster robot is used to orient the object so that its slope matches that of the drop position. The orientation is approximated by computing the slope of the diagonal of the object's bounding box obtained from YOLO (see the sketch below).

Push Object   - Two Hamster robots work in unison to push the object, once oriented, towards the goal along the path given by JPS.
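
A minimal sketch of the orientation approximation, assuming YOLO returns the box as corner coordinates (the exact box format in the DeskBot pipeline may differ):

    import math

    def bbox_orientation_deg(x_min, y_min, x_max, y_max):
        """Approximate the object's orientation as the slope (angle) of the
        bounding-box diagonal, per the Orient Object step above."""
        return math.degrees(math.atan2(y_max - y_min, x_max - x_min))

    # e.g. a pen whose box is wider than it is tall lies close to horizontal:
    # bbox_orientation_deg(100, 200, 300, 250) -> ~14 degrees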

Important Note:

Since our goal is to transport the object from location A to B, we have to orient the object and then push it along the path to the goal. If the goal were only to move the robot from A to B, the Orient Object step would not be required.
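
Putting the two behaviors together, the transport sequence could be sketched as below; orient_object and push_along are hypothetical stand-ins for the Hamster motor routines, which this update does not show:

    # Uses bbox_orientation_deg and cell_to_pixel from the sketches above.
    def transport_object(obj_bbox, drop_angle_deg, jps_path):
        """Hypothetical orient-then-push sequence for moving an object A -> B."""
        # Orient Object: one robot rotates the object until the bounding-box
        # diagonal slope roughly matches the drop position's slope.
        while abs(bbox_orientation_deg(*obj_bbox) - drop_angle_deg) > 5.0:
            obj_bbox = orient_object(obj_bbox)     # hypothetical motor routine
        # Push Object: two robots push in unison along the JPS way-points.
        for waypoint in jps_path:
            push_along(cell_to_pixel(waypoint))    # hypothetical motor routine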

Update 7


The next update will show the integration of the Scene Segmentation and Object Detection module with the Robot Path Planning for Object Manipulation module. This will bring the whole DeskBot system together, and we should see DeskBot decluttering and cleaning desks !!! :)


References

[1] A Visual Explanation of Jump Point Search: https://zerowidth.com/2013/a-visual-explanation-of-jump-point-search.html