Design and Development of a Wearable Assistive Device for Individuals with Vision Loss /
Abdul Hameed Mashal
- 75 p. ; soft copy ; 30 cm
This thesis presents the design and implementation of an intelligent, multi-functional visual aid on a low-cost embedded platform, intended to improve the environmental awareness and independence of visually impaired people. Traditional assistive devices are often prohibitively expensive, limited in functionality, or too computationally demanding for portable use. To overcome these constraints, this study develops a portable system built on a Raspberry Pi 5 and a conventional camera module, capable of executing four essential functions in real time: object detection, facial recognition, monocular depth-based obstacle avoidance, and currency identification. The core innovation of this work is a computationally efficient, two-stage processing pipeline. A lightweight primary object detector, YOLOv8, first analyses the scene to establish general context. Based on this initial analysis, the system selectively invokes specialised secondary models only when they are needed: for example, it activates a facial recognition module when a "person" is detected, or a MiDaS-based depth-estimation model when a large, nearby object may constitute an obstacle. This context-aware approach substantially reduces the computational load, making real-time operation feasible on edge hardware. The system delivers clear, actionable audio announcements to the user through a text-to-speech engine. The final implementation demonstrates that a modular, intelligently layered AI architecture can provide a flexible, high-performance, and low-cost assistive solution, effectively bridging complex computer vision capabilities and practical, real-world use for people who are blind or have low vision.
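The context-aware dispatch at the heart of the pipeline can be sketched as pure routing logic. The sketch below is illustrative only: the class labels, the area threshold used as a proxy for "large, nearby object", and the module names are assumptions for exposition, not details taken from the thesis implementation (which uses YOLOv8 as the primary detector and MiDaS for depth estimation).

```python
# Sketch of the two-stage, context-aware model dispatch described in the
# abstract. Labels, thresholds, and module names are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Detection:
    """One detection from the primary detector (e.g. YOLOv8)."""
    label: str            # predicted class name, e.g. "person"
    area_fraction: float  # bounding-box area / frame area, in [0, 1]


# Hypothetical heuristic: a box covering more than 25% of the frame is
# treated as a large, nearby object that may be an obstacle.
OBSTACLE_AREA_THRESHOLD = 0.25


def select_secondary_models(detections):
    """Return the set of secondary modules to run for the current frame.

    Only the modules whose triggering context appears in the scene are
    invoked, which is what keeps the pipeline cheap enough for edge hardware.
    """
    modules = set()
    for det in detections:
        if det.label == "person":
            modules.add("face_recognition")
        if det.area_fraction > OBSTACLE_AREA_THRESHOLD:
            modules.add("depth_estimation")  # e.g. MiDaS-based obstacle check
        if det.label == "banknote":
            modules.add("currency_identification")
    return modules
```

With this routing in place, a frame containing only distant background objects triggers no secondary model at all, while a frame with a person standing close by triggers both facial recognition and the depth check.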