Self-Supervised Monocular Depth Estimation in Complex Dynamic Environments
Material type: Text
Publisher: SMME-NUST; 2026
Description: 59 p.; 30 cm
Subject(s): MS R & AI
DDC classification: 629.8
| Item type | Current location | Home library | Shelving location | Call number | Status | Date due | Barcode | Item holds |
|---|---|---|---|---|---|---|---|---|
| Soft Copy | School of Mechanical & Manufacturing Engineering (SMME) | School of Mechanical & Manufacturing Engineering (SMME) | Thesis | 629.8 | Available | | SMME-TH-1214 | |
Monocular RGB image-based depth estimation plays an important role in autonomous
driving, 3D reconstruction, robotics, and augmented/virtual reality. Self-supervised
monocular depth estimation methods have recently performed impressively in scenes
containing static objects, relying primarily on the assumption that the scene remains
consistent when viewed from different frames. Moving objects and occlusions violate
this assumption, leading to poor depth accuracy in dynamic scenes and blurry object
boundaries, since dynamic areas are excluded from the training signal. To mitigate these
issues, we propose a self-supervised monocular depth estimation network that incorporates
external pre-trained depth estimation models (pseudo-depth) into its loss functions and a
guided channel-attention mechanism in the decoder of the depth estimation network. These
additions enable our model to accurately estimate the depth of dynamic objects with clear
boundaries when trained on highly dynamic video scenes. We evaluated this approach on the
BONN, KITTI, and NYUv2 datasets, which contain both static and highly dynamic scenes.
Results indicate that our approach performs competitively with prior methods.
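The two ingredients named above can be illustrated with a minimal sketch: a squeeze-and-excitation style channel-attention gate and a scale-aligned pseudo-depth loss. Everything here (shapes, weights, the median-ratio alignment, and the function names `channel_attention` and `pseudo_depth_loss`) is a hypothetical NumPy illustration, not the thesis's actual learned module or loss formulation.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are the
    weights of a bottleneck MLP with reduction ratio r.
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    s = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then a sigmoid gate per channel
    z = np.maximum(w1 @ s, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))      # values in (0, 1), shape (C,)
    # Re-weight each channel of the input feature map by its gate
    return feat * gate[:, None, None]

def pseudo_depth_loss(pred, pseudo):
    """L1 loss against a pseudo-depth map after median-ratio scale alignment.

    Pre-trained depth models predict depth only up to an unknown scale, so the
    pseudo-depth is rescaled to the prediction's median before comparison
    (a common hedge; the thesis's exact loss is not specified here).
    """
    scale = np.median(pred) / np.median(pseudo)
    return np.abs(pred - scale * pseudo).mean()

# Example: 8-channel feature map, reduction ratio r = 2
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((4, 8)) * 0.1
w2 = rng.standard_normal((8, 4)) * 0.1
y = channel_attention(x, w1, w2)            # same shape as x, channels re-weighted

# The median alignment makes the loss invariant to the pseudo-depth's scale:
d = np.abs(rng.standard_normal((4, 4))) + 0.1
loss = pseudo_depth_loss(d, 2.0 * d)        # scale 0.5 cancels the factor of 2
```

Because the sigmoid gate lies in (0, 1), the attention can only attenuate channels, never amplify them; a learned variant inside a decoder would follow the gating with further convolutions.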
