<?xml version="1.0" encoding="UTF-8"?>
<mods xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.loc.gov/mods/v3" version="3.1" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-1.xsd">
  <titleInfo>
    <title>Self-Supervised Monocular Depth Estimation in Complex Dynamic Environments</title>
  </titleInfo>
  <name type="personal">
    <namePart>Ubaid, Bilal</namePart>
    <role>
      <roleTerm authority="marcrelator" type="text">creator</roleTerm>
    </role>
  </name>
  <name type="personal">
    <namePart>Khan, Shahbaz</namePart>
    <role>
      <roleTerm authority="marcrelator" type="text">thesis advisor</roleTerm>
    </role>
  </name>
  <typeOfResource>text</typeOfResource>
  <originInfo>
    <issuance>monographic</issuance>
  </originInfo>
  <physicalDescription>
    <extent>59 p. ; 30 cm.</extent>
  </physicalDescription>
  <abstract>Monocular RGB image-based depth estimation plays an important role in autonomous
driving, 3D reconstruction, robotics, and augmented/virtual reality. Self-supervised
monocular depth estimation methods have recently performed impressively in scenes
containing static objects, relying primarily on the assumption that a scene remains
consistent when viewed from different frames. Moving objects and occlusions violate this
assumption, leading to poor depth accuracy in dynamic scenes and blurry object
boundaries, since dynamic areas are excluded from the training signal. To mitigate these
issues, we propose a self-supervised monocular depth estimation network that incorporates
external pre-trained depth estimation models (pseudo-depth) into its loss functions and
adds a guided channel-attention mechanism to the decoder of the depth estimation network.
These additions enable our model to accurately estimate the depth of dynamic objects with
clear boundaries when trained on highly dynamic video scenes. We evaluated this approach
on the BONN, KITTI, and NYUv2 datasets, which contain both static and highly dynamic
scenes. Results indicate that our approach performs competitively with prior approaches.</abstract>
  <subject>
    <topic>MS R &amp; AI</topic>
  </subject>
  <classification authority="ddc">629.8</classification>
  <identifier type="uri">http://10.250.8.41:8080/xmlui/handle/123456789/57587</identifier>
  <location>
    <url>http://10.250.8.41:8080/xmlui/handle/123456789/57587</url>
  </location>
</mods>
