<?xml version="1.0" encoding="UTF-8"?>
<record
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"
    xmlns="http://www.loc.gov/MARC21/slim">

  <leader>02297nam a22001577a 4500</leader>
  <datafield tag="082" ind1=" " ind2=" ">
    <subfield code="a">610</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="a">Mashal, Abdul Hameed</subfield>
    <subfield code="9">130596</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Design and development of a Wearable Assistive device for Individuals with Vision Loss /</subfield>
    <subfield code="c">Abdul Hameed Mashal</subfield>
  </datafield>
  <datafield tag="264" ind1=" " ind2=" ">
    <subfield code="a">Islamabad :</subfield>
    <subfield code="b">SMME-NUST,</subfield>
    <subfield code="c">2025.</subfield>
  </datafield>
  <datafield tag="300" ind1=" " ind2=" ">
    <subfield code="a">75 p. :</subfield>
    <subfield code="b">Soft Copy ;</subfield>
    <subfield code="c">30 cm</subfield>
  </datafield>
  <datafield tag="500" ind1=" " ind2=" ">
    <subfield code="a">This thesis presents the design and implementation of an intelligent, multi-functional visual
aid on a low-cost embedded platform, intended to help visually impaired people become more
aware of their surroundings and more independent. Traditional assistive devices are often
prohibitively expensive, limited in functionality, or too computationally demanding. To
overcome these constraints, this study develops a portable system built around a Raspberry
Pi 5 and a conventional camera module, capable of performing four essential functions in
real time: object detection, facial recognition, monocular depth-based obstacle avoidance,
and currency identification.
The core innovation of this work is a computationally efficient, two-stage processing
pipeline. A lightweight primary object detector, YOLOv8, first analyses the scene to
establish general context. Based on this initial analysis, the system selectively invokes
specialised secondary models only when they are needed: for example, it activates a facial
recognition module when a "person" is detected, or a MiDaS-based depth estimation model
when a large, nearby object may pose an obstacle. This context-aware approach substantially
reduces the required processing power, enabling real-time operation on edge hardware.
The system delivers clear, actionable audio announcements to the user through a
text-to-speech engine. The final implementation demonstrates that a modular, intelligently
layered AI architecture can provide a flexible, high-performance, and low-cost assistive
solution, effectively bridging complex computer vision capabilities and practical,
real-world use for people who are blind or have low vision.</subfield>
  </datafield>
  <datafield tag="650" ind1=" " ind2=" ">
    <subfield code="a">MS Biomedical Engineering (BME)</subfield>
    <subfield code="9">119509</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Supervisor: Dr. Muhammad Nabeel Anwar</subfield>
    <subfield code="9">119573</subfield>
  </datafield>
  <datafield tag="856" ind1=" " ind2=" ">
    <subfield code="u">http://10.250.8.41:8080/xmlui/handle/123456789/54816</subfield>
  </datafield>
  <datafield tag="942" ind1=" " ind2=" ">
    <subfield code="2">ddc</subfield>
    <subfield code="c">THE</subfield>
  </datafield>
  <datafield tag="999" ind1=" " ind2=" ">
    <subfield code="c">614788</subfield>
    <subfield code="d">614788</subfield>
  </datafield>
  <datafield tag="952" ind1=" " ind2=" ">
    <subfield code="0">0</subfield>
    <subfield code="1">0</subfield>
    <subfield code="4">0</subfield>
    <subfield code="7">0</subfield>
    <subfield code="a">SMME</subfield>
    <subfield code="b">SMME</subfield>
    <subfield code="c">EB</subfield>
    <subfield code="d">2025-09-23</subfield>
    <subfield code="l">0</subfield>
    <subfield code="o">610</subfield>
    <subfield code="p">SMME-TH-1163</subfield>
    <subfield code="r">2025-09-23</subfield>
    <subfield code="w">2025-09-23</subfield>
    <subfield code="y">THE</subfield>
  </datafield>
</record>
