VisionTool: A Deep Learning Embedded Device for Blind People

Rakhi Bharadwaj, Bhushan Bachewar, Shreya Barsude, Harsh Badagandi, Snehal Darade
Journal of Information Technology Education: Innovations in Practice  •  Volume 24  •  2025  •  pp. 024

This paper presents a proof-of-concept device that provides affordable, real-time vision assistance for visually impaired individuals. The device detects obstacles, estimates their distance, issues audio alerts, and shares the user's live location with caretakers. It addresses the lack of portable, low-cost, and connected navigation aids for blind users.

The proposed system addresses this problem by combining computer vision, distance estimation, audio alerts, and real-time caregiver notification through a mobile app, enhancing safety and independence for visually impaired users.

The system uses a Raspberry Pi with a camera module running a YOLOv8 object detection model for real-time object recognition, an ultrasonic sensor (or camera depth estimation) for distance measurement, and a Flutter-based mobile app to send live GPS coordinates to caretakers. Testing was done in controlled indoor and outdoor environments.
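The ultrasonic distance step described above can be sketched as follows. This is a minimal illustration under the usual HC-SR04-style timing model, not the authors' code; the constant and function name are assumptions.

```python
# Minimal sketch of ultrasonic distance estimation (HC-SR04 style).
# The sensor reports an echo pulse whose duration equals the sound's
# round trip to the obstacle and back, so:
#   distance = (pulse_duration * speed_of_sound) / 2

SPEED_OF_SOUND_CM_S = 34300  # ~343 m/s at room temperature

def echo_to_distance_cm(pulse_duration_s: float) -> float:
    """Convert an echo pulse duration (seconds) to distance in cm."""
    return (pulse_duration_s * SPEED_OF_SOUND_CM_S) / 2

# Example: a 5.8 ms echo corresponds to roughly one meter.
print(round(echo_to_distance_cm(0.0058), 1))  # → 99.5
```

On the device, the pulse duration would come from timing the sensor's echo pin (e.g. via `gpiozero.DistanceSensor`, which wraps this same calculation), and the result would be paired with the YOLOv8 object label before the audio alert is generated.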

This project contributes a multi-functional assistive tool that not only alerts visually impaired users about nearby obstacles with distance information but also keeps caretakers informed, combining safety, mobility, and communication in a single solution.

The system achieved real-time object detection with approximately 20-25% mean average precision (mAP) and reliable distance measurements. Audio feedback announcing both object type and distance significantly improved user navigation, and the Flutter app successfully updated caretakers with the user's live location, enhancing safety. The device is portable, battery-operated, and user-friendly.

Calibrate distance sensors carefully for different environments (indoor vs outdoor). Customize audio feedback to include critical distances (e.g., warn more urgently when obstacles are very close). Ensure Flutter app notifications are lightweight and battery-optimized.
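The tiered-urgency recommendation above could take the following shape. The thresholds and message wording are illustrative assumptions, not values from the paper, and a deployment would tune them per environment.

```python
def alert_message(label: str, distance_cm: float) -> str:
    """Map a detected object and its distance to a spoken alert.
    Closer obstacles get more urgent phrasing (thresholds are examples)."""
    if distance_cm < 50:
        return f"Stop! {label} very close, {int(distance_cm)} centimeters"
    if distance_cm < 150:
        return f"Caution, {label} ahead at {int(distance_cm)} centimeters"
    return f"{label} detected, {int(distance_cm)} centimeters away"

print(alert_message("chair", 40))    # → Stop! chair very close, 40 centimeters
print(alert_message("person", 120))  # → Caution, person ahead at 120 centimeters
```

The returned string would then be passed to a text-to-speech engine; keeping messages short also helps the battery-optimization goal for the companion app.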

Explore integrating AI-based path planning alongside object detection. Investigate methods to improve GPS accuracy in indoor environments. Research multi-sensor fusion (camera + ultrasonic + inertial sensors) for better obstacle detection.
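One simple form the multi-sensor fusion suggested above could take is an inverse-variance weighted average of the camera and ultrasonic distance estimates, so the less noisy sensor dominates. The noise figures below are illustrative assumptions.

```python
def fuse_distances(cam_cm: float, cam_var: float,
                   us_cm: float, us_var: float) -> float:
    """Inverse-variance weighted fusion of two distance estimates.
    Each reading is weighted by 1/variance, so the less noisy
    sensor contributes more to the fused result."""
    w_cam = 1.0 / cam_var
    w_us = 1.0 / us_var
    return (w_cam * cam_cm + w_us * us_cm) / (w_cam + w_us)

# The low-noise ultrasonic reading dominates a noisier camera estimate.
fused = fuse_distances(cam_cm=110.0, cam_var=25.0, us_cm=100.0, us_var=4.0)
print(round(fused, 1))  # → 101.4
```

Extending this to inertial sensors would typically mean a Kalman filter, of which this weighted average is the static single-step special case.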

This tool not only increases mobility for visually impaired individuals but also reassures families and caretakers by providing live location tracking, leading to greater independence, confidence, and peace of mind for users and their loved ones.

Future work could involve adding automatic emergency alerts if the user remains stationary for too long, voice-command-based control of the device, and AI-driven adaptive navigation suggestions based on user movement patterns. Additionally, face recognition could help users identify the person in front of them.

Keywords: vision assistance, object detection, Raspberry Pi, GPS tracking, SSDlite MobileNet V2 model, Flutter app