About the Project
This project develops a pedestrian navigation system for the visually impaired based on object detection. It is a Python-based program that uses the OpenCV library and Haar Cascade detection algorithms to detect the positions of lanes, pedestrians, and other objects. Data gathered from these detections is converted into audio instructions that guide and inform users during pedestrian navigation.
Features include: Tactile Paving Detection, Facial and Full-Body Detection, Adaptive Brightness, and Audio Output Instructions. Through this project I was able to learn the strengths and limitations of using OpenCV and Haar Cascade for object detection. For example, OpenCV re-analyzes the video feed every second, which allows the program to average the results and rule out the occasional outlier. However, there are instances where pedestrians cannot be detected, either because they are too close to the camera or because they are moving too quickly.
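The outlier-rejection idea above can be sketched in plain Python. This is a hypothetical helper (the class and parameter names are illustrative, not taken from the project): it keeps a short window of per-frame detection counts and reports the median, so a single spurious spike does not change the reported result.

```python
from collections import deque
from statistics import median

class DetectionSmoother:
    """Smooth per-frame detection counts over a sliding window.

    Reporting the median of the last few frames discards occasional
    spurious detections without lagging far behind real changes.
    """
    def __init__(self, window=5):
        # deque(maxlen=...) automatically drops the oldest count.
        self.counts = deque(maxlen=window)

    def update(self, count):
        """Record this frame's count and return the smoothed value."""
        self.counts.append(count)
        return median(self.counts)
```

For example, feeding the counts 1, 1, 7, 1, 1 (one noisy frame) yields a smoothed value of 1, so the audio output is not triggered by a one-frame glitch.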
One possible way to improve the program is to use machine learning to detect pedestrians, tactile paving, and perhaps other objects such as bicycles, stop lights, and cars. This would improve the program's accuracy significantly and result in a safer, more trustworthy product. A thermal camera or TensorFlow could also be used to detect pedestrians and other objects with higher accuracy. Another possible addition is an audio input feature linked to Google Maps, so that users can speak their desired destinations and be guided toward them with audio instructions. The product also need not be limited to pedestrian navigation: indoor navigation may be possible with Bluetooth beacons, GPS, or indoor Wi-Fi signals.
In this work, Farrell created a navigation system meant to help the visually impaired navigate as pedestrians through object detection.