<?xml version="1.0" encoding="UTF-8"?>
<record
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"
    xmlns="http://www.loc.gov/MARC21/slim">

  <leader>02615nam a22001577a 4500</leader>
  <datafield tag="082" ind1=" " ind2=" ">
    <subfield code="a">629.8</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="a">Hassan, Salman</subfield>
    <subfield code="9">119999</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Pose-Based Seamless Video Stitching for Real World Applications /</subfield>
    <subfield code="c">Salman Hassan</subfield>
  </datafield>
  <datafield tag="264" ind1=" " ind2=" ">
    <subfield code="a">Islamabad :</subfield>
    <subfield code="b">SMME-NUST,</subfield>
    <subfield code="c">2023.</subfield>
  </datafield>
  <datafield tag="300" ind1=" " ind2=" ">
    <subfield code="a">70 p. :</subfield>
    <subfield code="b">soft copy ;</subfield>
    <subfield code="c">30 cm.</subfield>
  </datafield>
  <datafield tag="500" ind1=" " ind2=" ">
    <subfield code="a">Combining videos of humans performing different gestures in a seamless way has
potential uses across a wide range of fields, including entertainment, virtual reality,
robotics, education, and communication. The goal of this research is set in this context.
This research focuses on developing a system that takes individual videos of humans
performing motion gestures and stitches them in a way that minimizes spatial discontinuities
between upper-torso joints, thus joining two or more human gestures into one seamless,
continuous motion. It begins by investigating and comparing current frameworks used to stitch
individual human motion gestures and examines the theoretical and mathematical
approaches behind them, proceeding step by step. First, it collects sign videos for the
most commonly used English sentences of lengths 2-8. Then, it preprocesses these videos to
convert them into a standardized form. Following that, it extracts landmarks to prune
unnecessary parts of the videos. It then calculates human joint coordinates using pose estimation.
After that, it calculates link vectors and human shoulder and elbow angles using linear
algebra. Following that, the system interpolates joint coordinates at transitions between signs
and uses them to calculate interpolated joint angles. Concurrently, actual joint coordinates are
used to calculate actual joint angles, which are then used to calculate wrist poses using
forward kinematics. These wrist poses are compared with those obtained by feeding the
interpolated joint angles to forward kinematic models. An ablation study was then conducted
that measured mean errors across different combinations of spline degree, percentage of
knots, and sentence length. An LSQ univariate spline with degree 4, a knot percentage of 90%,
and a sentence length of 4 produced the least mean error. Transition errors (errors between sign
transitions) were also calculated and recorded for each of 100 sentences. In this way, the
smoothness of different interpolating functions was quantified.</subfield>
  </datafield>
  <datafield tag="650" ind1=" " ind2=" ">
    <subfield code="a">MS Robotics and Intelligent Machine Engineering</subfield>
    <subfield code="9">119486</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Supervisor: Dr. Karam Dad Kallu</subfield>
    <subfield code="9">119537</subfield>
  </datafield>
  <datafield tag="856" ind1=" " ind2=" ">
    <subfield code="u">http://10.250.8.41:8080/xmlui/handle/123456789/32416</subfield>
  </datafield>
  <datafield tag="942" ind1=" " ind2=" ">
    <subfield code="2">ddc</subfield>
    <subfield code="c">THE</subfield>
  </datafield>
  <datafield tag="999" ind1=" " ind2=" ">
    <subfield code="c">607467</subfield>
    <subfield code="d">607467</subfield>
  </datafield>
  <datafield tag="952" ind1=" " ind2=" ">
    <subfield code="0">0</subfield>
    <subfield code="1">0</subfield>
    <subfield code="4">0</subfield>
    <subfield code="7">0</subfield>
    <subfield code="a">SMME</subfield>
    <subfield code="b">SMME</subfield>
    <subfield code="c">EB</subfield>
    <subfield code="d">2024-01-22</subfield>
    <subfield code="l">0</subfield>
    <subfield code="o">629.8</subfield>
    <subfield code="p">SMME-TH-829</subfield>
    <subfield code="r">2024-01-22</subfield>
    <subfield code="w">2024-01-22</subfield>
    <subfield code="y">THE</subfield>
  </datafield>
</record>
