Computer vision based rescue robot
Athulya Menon
Whenever a crisis occurs, one of the most important tasks facing disaster-response forces is rescuing trapped humans. This holds true especially for natural disasters such as earthquakes and volcanic eruptions, and also for large outbreaks of fire, where prolonged exposure to toxic fumes can be fatal. So how do we develop a foolproof method to find victims in such cases? There are clearly severe restrictions on what humans can do in such adverse situations. Thanks to advances in science and technology, we can now use autonomous vehicles and robots to go where humans can't and to rescue victims in situations that were previously impossible for us. In this article, we will take a look at one such project, in which we attempt to build a rescue robot using the principles of computer vision.
Project Description
Assessing whether there are any remaining survivors is an integral part of crisis management. However, in many cases we cannot enter the danger zone ourselves to make such an assessment, so we take the help of mobile robots to search for victims within it. Most of these robots rely on object detection systems to detect the presence of survivors. In this project, we will attempt to build a system that identifies and classifies survivors in such disasters, thereby helping rescue operators save more lives.
Concepts Used
- Basic Mechanics
- Fundamentals of Programming
- OpenCV basics
- Image Processing
- Data segmentation
- Object Detection
- Edge Detection
- Sensor Technology
Project Implementation
- Such systems rely on multiple physical parameters to detect the presence of a victim. These include voice, temperature, motion and face detection.
- Audio-based detection requires multiple microphones to be mounted on the robot body.
- The phase shift between the signals received by these microphones helps us estimate where the victim is located. This can be done through Cross-Power Spectrum Phase (CSP) analysis, which uses the Fourier transform to gauge phase differences and hence the delay between the microphone signals (a sketch of this time-delay estimation is given after this list). However, a major drawback of this method is that it is not very accurate when there is a lot of ambient noise, and there will also be circumstances in which the victim is not in a position to raise an alarm or make any sound.
- Infrared cameras allow such robots to track victims by making use of temperature-related data and are a good option for night-time rescue activities. However, they are strongly affected by external factors such as ash, humidity, and dust, making them an unreliable source of data in emergencies.
- Sonar, lasers and cameras can be used to accumulate visual and range data, which can then be analysed to detect victims through motion detection.
- In this project, we will be making use of the Viola-Jones detector, a computer vision method that relies on the principles of learning-based object detection (an OpenCV-based sketch of the full pipeline is given after this list).
- In this method, we take a feed from the camera, and first convert it to grayscale.
- Then we normalise the pixel values and filter the image to reduce noise and other such disturbances.
- Then we create an integral image, in which each element stores the sum of all pixel values above and to the left of it; this allows the sum over any rectangular area of the original image to be computed with just four lookups (see the integral-image sketch after this list).
- During training, the classifiers compare the values obtained from these rectangular areas, learning which differences between light and dark regions best distinguish the target object from the background.
- At detection time, a cascade of these classifiers scans the image window by window; a window whose feature responses pass every stage results in an object detection.
- The system further classifies the images using feature extraction principles.
- Haar-feature detection may also be used to detect human presence. It works by repeatedly subtracting the average pixel value of a dark region from that of an adjacent light region. A threshold value is set, and every time the subtraction yields a value higher than the threshold, a flag is raised.
- Over time, this module can be made more accurate by training it on sample datasets of human faces.
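As a rough illustration of the cross-power spectrum phase idea mentioned above, the minimal NumPy sketch below estimates the time delay of arrival between two microphone signals by keeping only the phase of the cross-power spectrum and locating the resulting correlation peak. The function name, sampling rate and synthetic signals are illustrative assumptions, not part of the original project.

```python
import numpy as np

def estimate_delay_csp(sig_a, sig_b, sample_rate):
    """Estimate how much later sig_b arrives than sig_a (in seconds) using
    the phase of the cross-power spectrum (CSP / GCC-PHAT)."""
    n = len(sig_a) + len(sig_b)               # zero-pad to avoid circular wrap-around
    A = np.fft.rfft(sig_a, n=n)
    B = np.fft.rfft(sig_b, n=n)
    cross = np.conj(A) * B                    # cross-power spectrum
    cross /= np.abs(cross) + 1e-12            # keep only the phase information
    corr = np.fft.irfft(cross, n=n)           # generalized cross-correlation
    corr = np.concatenate((corr[-(n // 2):], corr[:n // 2 + 1]))
    shift = int(np.argmax(np.abs(corr))) - n // 2
    return shift / sample_rate

# Synthetic check: the second "microphone" hears the same burst 5 ms later.
fs = 16000
burst = np.random.randn(2048)
mic_a = np.concatenate((burst, np.zeros(4096)))
mic_b = np.concatenate((np.zeros(80), burst, np.zeros(4016)))  # 80 samples = 5 ms
print(estimate_delay_csp(mic_a, mic_b, fs))                    # approximately 0.005
```

With two or more microphone pairs, such delay estimates can be combined to approximate the direction of the sound source, though, as noted above, heavy ambient noise quickly degrades the result.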
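The following sketch illustrates the integral image and a single two-rectangle Haar-like feature. The rectangle sizes, the toy patch and the threshold value are all made up for demonstration; a real Viola-Jones detector learns thousands of such features and their thresholds during training.

```python
import numpy as np

def integral_image(img):
    """Integral image: entry (y, x) holds the sum of all pixels above and
    to the left of (y, x), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, height, width):
    """Sum of the pixels inside a rectangle, from four integral-image lookups."""
    ii = np.pad(ii, ((1, 0), (1, 0)))        # pad with zeros so borders work
    bottom, right = top + height, left + width
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]

def two_rect_haar_feature(ii, top, left, height, width):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = width // 2
    light = rect_sum(ii, top, left, height, half)
    dark = rect_sum(ii, top, left + half, height, half)
    return light - dark

# Toy example: a 24x24 patch that is bright on the left and dark on the right
patch = np.zeros((24, 24), dtype=np.int64)
patch[:, :12] = 200                           # "light" half
ii = integral_image(patch)
response = two_rect_haar_feature(ii, 0, 0, 24, 24)
THRESHOLD = 10000                             # illustrative threshold
print(response, response > THRESHOLD)         # strong positive response -> flag raised
```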
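Finally, since OpenCV ships pre-trained Haar cascades for frontal faces, a detection pipeline along the lines described above can be sketched as follows. The camera index, the blur kernel and the detectMultiScale parameters are assumptions chosen for illustration; haarcascade_frontalface_default.xml is the stock model bundled with OpenCV.

```python
import cv2

# Pre-trained frontal-face Haar cascade shipped with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                     # robot camera, index 0 assumed

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # 1. convert to grayscale
    gray = cv2.equalizeHist(gray)                     # 2. normalise pixel values
    gray = cv2.GaussianBlur(gray, (3, 3), 0)          # 3. reduce noise

    # 4. scan the frame with the cascade of Haar-feature classifiers
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                     minNeighbors=5, minSize=(30, 30))
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("Victim detection (face)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Raising minNeighbors reduces false positives but may miss partially occluded faces, a trade-off worth keeping in mind when victims may be covered by debris.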
Skyfi Labs • Published: 2019-11-12 • Last Updated: 2021-05-14