A Deep Learning Based Model to Assist Blind People in Their Navigation

Nitin Kumar, Anuj Jain
Journal of Information Technology Education: Innovations in Practice  •  Volume 21  •  2022  •  pp. 095-114

This paper proposes a new approach to developing a deep learning-based prototype wearable model that can assist blind and visually impaired people in recognizing their environment and navigating through it. As a result, visually impaired people will be able to manage day-to-day activities and move through the world around them more easily.

In recent decades, the development of navigational devices has challenged researchers to design smart guidance systems that help visually impaired and blind individuals navigate known or unknown environments. Efforts are needed to analyze the existing research from a historical perspective, and early studies of electronic travel aids should be integrated with assistive technology-based artificial vision models for visually impaired persons.

This paper advances our previous research work, in which we developed a sensor-based navigation system. In this research, navigation for the visually impaired person is carried out with a vision-based, 3D-designed wearable model and a vision-based smart stick. The wearable model uses a neural network-based You Only Look Once (YOLO) algorithm to detect the course of the navigational path, augmented by a GPS-based smart stick. Over 100 images of each of three classes, namely straight path, left path, and right path, were used to train the model with supervised learning. The model predicts a straight path with 79% mean average precision (mAP), the right path with 83% mAP, and the left path with 85% mAP. The average accuracy of the wearable model is 82.33% and that of the smart stick is 96.14%, which combined give an overall accuracy of 89.24%.
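The reported averages follow directly from the per-class and per-device figures quoted above; a minimal arithmetic check (assuming the combined figure is the simple mean of the two subsystems, which matches the quoted value):

```python
# Per-class mean average precision (mAP) reported for the wearable model
class_map = {"straight": 79.0, "right": 83.0, "left": 85.0}

# Mean over the three path classes -> the wearable model's average accuracy
wearable_accuracy = sum(class_map.values()) / len(class_map)
print(f"Wearable model: {wearable_accuracy:.2f}%")   # 82.33%

# Reported accuracy of the GPS-based smart stick
stick_accuracy = 96.14

# Simple mean of the two subsystems -> the overall system accuracy
overall_accuracy = (wearable_accuracy + stick_accuracy) / 2
print(f"Combined system: {overall_accuracy:.2f}%")   # 89.24%
```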

This research contributes to the design of a low-cost, standalone navigational system that is handy to use and helps people navigate safely in real-time scenarios. A challenging self-built dataset of various paths is generated, and transfer learning is performed on the YOLOv5 model after augmentation and manual annotation. To analyze and evaluate the model, various metrics, such as model losses, recall, precision, and mAP, are used.
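As a rough illustration of the transfer-learning step, the sketch below prepares a YOLOv5-style dataset configuration for the three path classes and launches fine-tuning from pretrained weights. The directory paths, class names, image size, batch size, and epoch count are illustrative assumptions rather than values reported in the paper, and a local clone of the ultralytics/yolov5 repository (which provides train.py) is assumed.

```python
import subprocess
import yaml  # PyYAML

# Hypothetical dataset layout: images and labels already augmented and
# manually annotated in YOLO format (one .txt file per image).
data_cfg = {
    "train": "datasets/paths/images/train",   # assumed path
    "val": "datasets/paths/images/val",       # assumed path
    "nc": 3,                                  # three path classes
    "names": ["straight_path", "left_path", "right_path"],
}
with open("paths.yaml", "w") as f:
    yaml.safe_dump(data_cfg, f)

# Transfer learning: start from the pretrained yolov5s checkpoint and
# fine-tune on the custom path dataset using the repository's train.py.
subprocess.run(
    [
        "python", "train.py",
        "--img", "640",          # assumed input resolution
        "--batch", "16",         # assumed batch size
        "--epochs", "100",       # assumed epoch count
        "--data", "paths.yaml",
        "--weights", "yolov5s.pt",
    ],
    check=True,
)
```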

These were the main findings of the study:
• To detect objects, the deep learning model uses a higher version of YOLO, i.e., a YOLOv5 detector, which may help those with visual impairments improve the quality of their navigational mobility in known or unknown environments.
• The developed standalone model can be integrated into other assistive applications, such as Electronic Travel Aids (ETAs).
• Its single-neural-network architecture allows the model to achieve high detection accuracy of around 0.823 mAP on the custom dataset, compared with 0.895 mAP on the COCO dataset. Its object-detection speed of roughly 45 FPS has made the approach popular.

Practitioners can improve the model’s efficiency by increasing the number of samples and classes used in training the model.

To detect objects in an image or a live camera feed, various algorithms are available, e.g., R-CNN, RetinaNet, Single Shot Detector (SSD), and YOLO. Researchers can choose YOLO owing to its superior performance. Moreover, one YOLO version, YOLOv5, outperforms earlier versions such as YOLOv3 and YOLOv4 in terms of speed and accuracy.
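As a sketch of how such a detector could be queried at runtime, the snippet below loads custom-trained YOLOv5 weights through the Ultralytics torch.hub entry point and maps the highest-confidence detection to a navigation cue. The weights file ("best.pt"), class names, and camera index are assumptions for illustration, not details taken from the paper.

```python
import cv2
import torch

# Load custom-trained weights (hypothetical file 'best.pt') via the
# Ultralytics torch.hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

cap = cv2.VideoCapture(0)  # default camera; index is an assumption
ret, frame = cap.read()
cap.release()

if ret:
    # Single forward pass; results.pandas().xyxy[0] is a DataFrame with
    # one row per detection, including 'confidence' and 'name' columns.
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    detections = results.pandas().xyxy[0]
    if len(detections):
        best = detections.sort_values("confidence", ascending=False).iloc[0]
        # Map the detected path class to a spoken or haptic navigation cue.
        cue = {"straight_path": "keep straight",
               "left_path": "turn left",
               "right_path": "turn right"}.get(best["name"], "unknown path")
        print(f"Detected {best['name']} ({best['confidence']:.2f}): {cue}")
    else:
        print("No path detected")
```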

We discuss new low-cost technologies that enable visually impaired people to navigate effectively in indoor environments.

Future deep learning work could incorporate recurrent neural networks trained on larger datasets and run on dedicated AI processors to avoid latency.

Keywords: visually impaired, handheld assistive technology, assistive technology, wearable devices, blind, navigation, object detection