Introduction

This is the third and final blog post for this project. So far we have seen how the quad-rotor enters an unknown world with a laser sensor and tries to make sense of it. As the post progresses, we will dive into the concepts behind the progress made during each milestone.
Milestones Covered:

- We obtained an occupancy grid in rviz for the given environment.
- We successfully controlled the quad-rotor all around the Gazebo world and mapped it in rviz through SLAM.
- The quad-rotor was able to locate humans and fire (heat zones), shown as blue and red cubes in rviz.
- We extracted the occupancy grid from rviz in one-dimensional form.
- We converted the occupancy grid into 2-D.
- We filtered the noise from the obtained grid to make it usable.
- We implemented the A* algorithm to find the shortest and safest path from the safe zone to the exit.

OCCUPANCY GRID

An occupancy grid is the output we obtain in rviz based on what the quad-rotor has observed in the Gazebo environment. It is simply a 1-D array describing the environment the drone has observed once it has been taken through the entire course.
This is useful for making the quad-rotor perform specific functions, since this grid is the environment as the drone has actually perceived it, not the Gazebo one. But before we can take these values and manipulate them further, we need to filter out certain values from the grid.

FILTERING THE OCCUPANCY GRID:

If we print the values saved in the occupancy grid, we see a 1-D array that comprises essentially three values:

- 0: free space in the map.
- -1: unknown space. This shows up because the laser sensor used by the SLAM module has a range larger than the environment it is confined to; these cells lie beyond the environment, and the quad-rotor cannot tell whether they are free or occupied.
- 100: a 100% probability that the cell contains an obstacle.

Thus we need to filter the -1 values out of the array, as they would only complicate further processing of the map.
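As a minimal sketch of this step, assuming the grid arrives as a flat list together with the map's width and height (as in ROS's nav_msgs/OccupancyGrid message), the filtering and the conversion to 2-D might look like this. The function name is ours, and the choice to treat unknown cells as obstacles is one possible filtering policy, not necessarily the one used in the project:

```python
import numpy as np

def grid_to_2d(data, width, height, unknown_as=100):
    """Convert a flat occupancy-grid array into a 2-D map.

    Cells hold 0 (free), 100 (occupied) or -1 (unknown).
    Unknown cells are replaced (here conservatively treated as
    obstacles) so a planner never routes through unseen space.
    """
    grid = np.array(data, dtype=int).reshape(height, width)
    grid[grid == -1] = unknown_as  # filter out the -1 values
    return grid

# Example: a 2x3 map with one unknown and one occupied cell.
flat = [0, -1, 0,
        0, 100, 0]
print(grid_to_2d(flat, width=3, height=2))
```

The reshape works because the flat array is stored row by row, so `height * width` values map directly onto the rectangular boundary of the environment.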
We replace the -1 values and convert the 1-D array into a 2-D array that represents the exact boundary of the environment.

A* implementation:

A* is one of the many available path-finding algorithms. We chose A* over other algorithms because its heuristic guides the search toward the goal, giving quicker results, and because it can consider diagonal moves rather than only four-directional movement.
Thus we can obtain the shortest path in the quickest time. To implement A*, however, we need two specific points between which the path is to be generated. One point is the exit; the other should be within the environment.
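The post does not show the A* code itself, but a minimal self-contained sketch over the filtered 2-D grid might look like the following. The eight-connected movement model, the Euclidean heuristic, and the (row, column) start/goal tuples are our assumptions:

```python
import heapq
import itertools
import math

def astar(grid, start, goal):
    """A* over a 2-D occupancy grid (0 = free, 100 = occupied).

    Diagonal moves are allowed, so Euclidean distance is used as
    the heuristic; straight steps cost 1, diagonal steps sqrt(2).
    """
    rows, cols = len(grid), len(grid[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]

    def h(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    tie = itertools.count()  # breaks ties between equal-cost entries
    open_set = [(h(start, goal), next(tie), start, None)]
    g_score = {start: 0.0}
    came_from = {}
    while open_set:
        _, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue  # already expanded via a cheaper route
        came_from[cur] = parent
        if cur == goal:  # walk the parent chain back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in moves:
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g_score[cur] + math.hypot(dr, dc)
                if ng < g_score.get(nxt, float("inf")):
                    g_score[nxt] = ng
                    heapq.heappush(
                        open_set, (ng + h(nxt, goal), next(tie), nxt, cur))
    return None  # no path between the two points

# Toy map: a wall forces the path around the right-hand gap.
grid = [[0,   0,   0],
        [100, 100, 0],
        [0,   0,   0]]
path = astar(grid, (0, 0), (2, 0))
print(path)
```

On the toy map the returned path cuts through the gap at (1, 2) using diagonal steps, which is exactly why the diagonal movement model yields shorter paths than four-directional movement.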
We declare a point within the environment as the gathering point in the event of a catastrophic event such as a fire. When we run the A* algorithm, we obtain the safest and shortest path from the gathering point to the exit.

FUTURE ENHANCEMENTS:

There is a lot of opportunity for further enhancement of the project, such as:

- Identifying a sensor that can differentiate between obstacles, fire, and trapped humans.
- Once we can differentiate them, bringing up markers in rviz that show the environment as it is to people viewing it from outside.
- Making the drone autonomous, so that once introduced to a new environment it can move and perform SLAM on its own.

Final Comparison of Results

Initially we envisioned the quad-rotor as a global path planner that could sense the environment and output a safe path between the safe zone and the exit, and we were successful in achieving this goal. We were also successful in rendering the unknown Gazebo environment in rviz, along with the humans and fire zones the robot knows about.
We tried mounting a camera on the drone and implementing image recognition to differentiate between humans and other obstacles, but we could not achieve it, although we had not listed image recognition among our fundamental milestones. Instead we used simple lists of humans and heat zones, and programmed around them to get the desired output. We also wanted a distinction system that would make the robot more cognizant of its surroundings and predict the probability of danger to every human in its view, but due to lack of time we could not achieve this goal.
Overall, looking back at the progress of this project, almost all of its fundamental goals were covered over its course.