000 01744nam a22001577a 4500
082 _a629.8
100 _aUbaid, Bilal
_9134337
245 _aSelf-Supervised Monocular Depth Estimation in Complex Dynamic Environments /
264 _bSMME-NUST;
_c2026.
300 _a59 p. ;
_c30 cm.
500 _aDepth estimation from monocular RGB images plays an important role in autonomous driving, 3D reconstruction, robotics, and augmented/virtual reality. Self-supervised monocular depth estimation methods have recently performed impressively in scenes containing static objects, relying primarily on the assumption that scene appearance is consistent across frames. This assumption is violated by moving objects and occlusions, leading to poor depth accuracy in dynamic scenes and blurry object boundaries, since dynamic areas are excluded from the training signal. To mitigate these issues, we propose a self-supervised monocular depth estimation network with a channel-attention module that incorporates external pre-trained depth estimation models (pseudo-depth) into its loss functions and a guided channel-attention mechanism in the decoder of the depth estimation network. These additions enable our model to accurately estimate the depth of dynamic objects with clear boundaries when trained on highly dynamic video scenes. We evaluated this approach on the BONN, KITTI, and NYUv2 datasets, which contain both static and highly dynamic scenes. Results indicate that our approach performs competitively with prior methods.
650 _aMS R & AI
_9134338
700 _aSupervisor: Dr. Shahbaz Khan
_9125085
856 _uhttp://10.250.8.41:8080/xmlui/handle/123456789/57587
942 _2ddc
_cSC
999 _c617317
_d617317