<?xml version='1.0' encoding='utf-8' ?>
<rss version="2.0"
      xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"
      xmlns:dc="http://purl.org/dc/elements/1.1/"
      xmlns:atom="http://www.w3.org/2005/Atom">
   <channel>
     <title><![CDATA[NUST Institutions Library Catalogue Search for 'an:&quot;119733&quot;']]></title>
     <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?q=ccl=an%3A%22119733%22&amp;format=rss</link>
     <atom:link rel="self" type="application/rss+xml" href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?q=ccl=an%3A%22119733%22&amp;sort_by=relevance_dsc&amp;format=rss"/>
     <description><![CDATA[ Search results for 'an:&quot;119733&quot;' at NUST Institutions Library Catalogue]]></description>
     <opensearch:totalResults>14</opensearch:totalResults>
     <opensearch:startIndex>0</opensearch:startIndex>
     
     <opensearch:itemsPerPage>50</opensearch:itemsPerPage>
     <atom:link rel="search" type="application/opensearchdescription+xml" href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?&amp;sort_by=&amp;format=opensearchdescription"/>
     <opensearch:Query role="request" searchTerms="an:&quot;119733&quot;" startPage="1" />
     <item>
       <title>Immersive VR based Digital Twinning for Mobile Robotic Platforms /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607348</link>
        
       <description><![CDATA[
<p>By Malik, Muhammad Faiq. 47p. ; 30cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607348">Place Hold on <em>Immersive VR based Digital Twinning for Mobile Robotic Platforms /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607348</guid>
     </item>
	 
     <item>
       <title>Intelligent Environment Monitoring and Control /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607354</link>
        
       <description><![CDATA[
<p>By Faiz, Muhammad Faizan. 73p. ; 30cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607354">Place Hold on <em>Intelligent Environment Monitoring and Control /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607354</guid>
     </item>
	 
     <item>
       <title>Comparison of Quantitative Proxemics to measure trust in HRI /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607358</link>
        
       <description><![CDATA[
<p>By Ahmad, Fatima. 45p. ; 30cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607358">Place Hold on <em>Comparison of Quantitative Proxemics to measure trust in HRI /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607358</guid>
     </item>
	 
     <item>
       <title>Socio-Technical System for Effective Classroom Learning /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607360</link>
        
       <description><![CDATA[
<p>By Kainat. 61p. ; 30cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607360">Place Hold on <em>Socio-Technical System for Effective Classroom Learning /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607360</guid>
     </item>
	 
     <item>
       <title>Human Robot Interaction - Personality Prediction of a Human Using Humanoid Robot /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607364</link>
        
       <description><![CDATA[
<p>By Jaffar, Anum. 67p. ; 30cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607364">Place Hold on <em>Human Robot Interaction - Personality Prediction of a Human Using Humanoid Robot /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607364</guid>
     </item>
	 
     <item>
       <title>Ball Interception in Humanoid Robot Soccer /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607825</link>
        
       <description><![CDATA[
<p>By Khan, Saman. 70p. ; 30cm.</p>
<p>Humanoid robots are widely used in various real-world applications, which has raised researchers' interest in enhancing robot development and deployment in dynamic environments. The RoboCup Soccer league is a platform for qualitative development in the fields of robot kinematics and dynamics, motion planning and control, navigation, computer vision, and machine learning. New challenges are introduced each year for teams that aim to improve robot behaviours in real-world scenarios. One of the most popular leagues in RoboCup is the Standard Platform League, which focuses on improving individual robot capabilities, team capabilities, and coordinated strategies in dynamic soccer matches played by teams of Aldebaran Nao humanoid robots. Robots that can quickly gain ball possession from the opponent by intercepting the moving ball's path contribute to a better defence strategy. The aim of this study is to improve individual robot and team skills in game scenarios where the ball is moving, using motion primitives and geometric parameters. The proposed strategy computes the interception point from geometric equations given the ball's position and velocity. The robot's response depends on factors including the initial positions and velocities of the robot and ball. Furthermore, to demonstrate the feasibility of the proposed approach, different scenarios with single and multiple agents are simulated in SimRobot. Simulations validate the effectiveness of the proposed algorithm in moving-ball interception scenarios.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607825">Place Hold on <em>Ball Interception in Humanoid Robot Soccer /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607825</guid>
     </item>
	 
     <item>
       <title>An Unreal Engine Based Human Robot Interaction Framework /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607826</link>
        
       <description><![CDATA[
<p>By Asif, Muhammad Hassaan. 64p. ; 30cm.</p>
<p>Existing frameworks lack support for testing Human-Robot Interaction (HRI) research, which therefore often has to be tested physically, making it time-consuming and expensive. To overcome this issue, an HRI framework based on Unreal Engine is proposed, consisting of a virtual Nao or Pepper robot along with virtual humans exhibiting verbal and non-verbal behaviours in an environment. Machine Learning (ML) algorithms, along with the behaviour of the virtual robot in response to interaction with the virtual humans and the environment, can be programmed using a Python API that communicates with Unreal Engine C++ in real time. Several experiments covering multiple aspects of HRI ((1) verbal interaction; (2) non-verbal interaction; (3) emotional interaction) were conducted in both virtual and real-world environments, and the results were compared to validate the feasibility of the framework. A Reinforcement Learning (RL) algorithm was also tested to further indicate the usefulness of the framework. Through the use of a Virtual Reality (VR) headset, a human can be immersed in the framework to interact with the robot in real time.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607826">Place Hold on <em>An Unreal Engine Based Human Robot Interaction Framework /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607826</guid>
     </item>
	 
     <item>
       <title>Perception of Emotion in Human-Robot Interaction /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607900</link>
        
       <description><![CDATA[
<p>By Zia, Muhammad Faisal. 59p. ; 30cm.</p>
<p>Perception of emotion is an intuitive replication of a person's internal state without the need for verbal communication. Visual emotion recognition has been broadly studied, and several end-to-end deep neural network (DNN)-based and machine learning-based models have been proposed, but they lack the ability to be implemented on low-specification devices like robots and vehicles. The drawbacks of conventional handcrafted-feature-based Facial Emotion Recognition (FER) methods are eliminated by DNN-based FER approaches. In spite of that, DNN-based FER techniques suffer from high processing costs and exorbitant memory requirements; their application is constrained in fields like Human-Robot Interaction (HRI) and Human-Computer Interaction (HCI) and depends on hardware requirements. In this study, we present a computationally inexpensive and robust FER system for the perception of six basic emotions (disgust, surprise, fear, anger, happiness, and sadness) that is capable of running on embedded devices with constrained specifications. After pre-processing the input images, geometric features are extracted from detected facial landmarks, considering the spatial position among influential landmarks. The extracted features are given as input to train the SVM classifier. Our proposed FER system was trained and evaluated experimentally using two databases, the Karolinska Directed Emotional Faces (KDEF) and the Extended Cohn-Kanade (CK+) database. Fusion of the KDEF and CK+ datasets at the training level was also employed in order to generalize the FER system's response to variations in ethnicity, race, and national and provincial backgrounds. The results show that our proposed FER system is optimized for real-time embedded applications with constrained specifications and yields accuracies of 96.8%, 86.7%, and 86.4% for CK+, KDEF, and the fusion of CK+ and KDEF, respectively. As part of our future research objectives, the developed system will make a robotic agent capable of perceiving emotion and interacting naturally without the need for additional hardware during HRI.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607900">Place Hold on <em>Perception of Emotion in Human-Robot Interaction /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607900</guid>
     </item>
	 
     <item>
       <title>Examining the Effectiveness of Virtual Reality in Stress Management /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609992</link>
        
       <description><![CDATA[
<p>By Tahir, Sheza. 64p. ; 30cm.</p>
<p>Virtual Reality (VR) has emerged as a promising tool in healthcare management, with recent studies exploring its effectiveness in addressing various psychological and physiological disorders. Stress is prevalent in modern society, necessitating effective strategies for its management. While sports and extended reality (XR) gaming have shown promising effects on mental health, this study investigates the effectiveness of VR in reducing stress by comparing conventional and VR-based relaxation techniques using HRV parameters and EEG responses. A total of 40 participants (28 males, 12 females) with a mean age of 25 ± 3.21 years took part in the study. Baseline recordings were obtained, followed by a stress phase induced by a timed IQ quiz. Participants were then randomly assigned to either VR-based or conventional relaxation techniques. Both relaxation methods significantly improved heart rate variability (HRV) and decreased sympathetic dominance, indicating enhanced adaptability to stress and activation of the parasympathetic nervous system (PNS). However, VR-based relaxation resulted in a more pronounced decrease in heart rate and a significant reduction in the LF/HF ratio compared to conventional relaxation, suggesting a deeper state of relaxation. Furthermore, VR-based relaxation led to a significant increase in the alpha-to-beta ratio, indicating a calmer mental state compared to non-VR relaxation. Notable changes were also recorded in alpha power in the frontal channels and beta power across all channels, suggesting greater effectiveness in inducing PNS activation and recovery.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=609992">Place Hold on <em>Examining the Effectiveness of Virtual Reality in Stress Management /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609992</guid>
     </item>
	 
     <item>
       <title>Towards Automatic Weather Classification Using DCNNs /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=610838</link>
        
       <description><![CDATA[
<p>By Mattia Tun Nabi. 94p. ; 30cm.</p>
<p>The utilization of remote sensing (RS) technology has resulted in the extensive accessibility of a significant amount of satellite image data. In order to ensure the successful application of RS in real-life scenarios, it is imperative to create effective and adaptable solutions that can be utilized across different transdisciplinary domains. Deep Convolutional Neural Networks (CNNs) are frequently used to accomplish fast analysis and precise categorization in RS imaging. This study introduces a residual network based on ResNet101, comprising FC-1024 fully connected layers, dropout layers, a dense layer, and data augmentation algorithms. To resolve the issue of similarity between different classes, architectural enhancements are implemented; imbalanced classes are dealt with by employing data augmentation techniques. The ResNet101 model uses the rigorous Large-Scale Cloud Images Dataset for Meteorology Research (LSCIDMR), which has 10 classes and a multitude of high-resolution images, with the goal of precisely classifying these images into their respective categories. The model outperforms numerous previously published deep learning algorithms in terms of precision, accuracy, and F1 score, with accuracies reaching up to 99% and approximately 92%.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=610838">Place Hold on <em>Towards Automatic Weather Classification Using DCNNs /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=610838</guid>
     </item>
	 
     <item>
       <title>Detection and Recognition of Medicine Packaging and Information Using Deep Learning and Computer Vision /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=612181</link>
        
       <description><![CDATA[
<p>By Rukhsar, Muhammad Khurram. 101p. ; 30cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=612181">Place Hold on <em>Detection and Recognition of Medicine Packaging and Information Using Deep Learning and Computer Vision /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=612181</guid>
     </item>
	 
     <item>
       <title>Breast Cancer Detection through the Introduction of Computer-Aided Diagnosis (CAD) /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=612332</link>
        
       <description><![CDATA[
<p>By Saleem, Nada. 97p. ; 30cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=612332">Place Hold on <em>Breast Cancer Detection through the Introduction of Computer-Aided Diagnosis (CAD) /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=612332</guid>
     </item>
	 
     <item>
       <title>Novel Hybrid Neural Network Architecture For Multi-modal Brain Tumor mpMRI Segmentation /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=613225</link>
        
       <description><![CDATA[
<p>By Faizan, Muhammad. 73p. ; 30cm.</p>
<p>Medical image segmentation is a critical step in clinical decision-making, enabling precise localization of anatomical structures and lesions. While Convolutional Neural Networks, particularly U-shaped architectures like U-Net, have been popular in this domain, their limited receptive fields hinder the accurate delineation of anomalies with irregular shapes and sizes. Hybrid approaches integrating convolution and Vision Transformers (ViTs) have demonstrated improved performance due to their ability to capture long-range dependencies. However, ViTs are computationally expensive, particularly for volumetric image segmentation such as MRI, making them challenging to deploy on hardware with limited resources. To address these challenges, recent studies have revisited convolutional architectures, leveraging large-kernel (LK) depth-wise convolution to emulate the hierarchical transformer's behaviour. Building on this direction, we propose 3D SegUX-Net, a novel U-shaped encoder-decoder architecture for volumetric biomedical image segmentation. Our model introduces the SegUX block, which combines large-kernel depth-wise and point-wise convolutions to enhance the receptive field while maintaining computational efficiency. The addition of a residual block further refines features, improving model robustness and generalization. Empirical results demonstrate that 3D SegUX-Net consistently outperforms state-of-the-art CNN and transformer methods on multiple benchmarks, including BraTS 2019, BraTS 2020, BraTS 2023, and organ segmentation on the BTCV dataset. The proposed architecture establishes new state-of-the-art performance in volumetric medical semantic segmentation, combining simplicity, efficiency, and scalability.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=613225">Place Hold on <em>Novel Hybrid Neural Network Architecture For Multi-modal Brain Tumor mpMRI Segmentation /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=613225</guid>
     </item>
	 
     <item>
       <title>Automated Karyotyping: Segmentation and Classification /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=615311</link>
        
       <description><![CDATA[
<p>By Umbreen, Neelam. 130p. ; 30cm.</p>
<p>Karyotyping continues to be the bedrock of cytogenetic diagnosis, providing key information on chromosomal abnormalities causative of a broad range of genetic disorders, developmental abnormalities, and cancers. But standard karyotyping is time-consuming, requires extensive specialist interpretation, and is vulnerable to human error and inefficiency, especially in high-volume clinical settings. Despite advances in medical image analysis, the automation of karyotyping faces persistent challenges, including the lack of large-scale annotated datasets, difficulties in segmenting overlapping chromosomes, and variability in chromosome morphology and staining. These challenges define a significant research gap in developing scalable, accurate, and clinically deployable deep learning models for automated chromosome analysis.</p>
<p>In order to close this gap, we first present a large-scale, clinically annotated cytogenetic database, built from 1,311 patients and consisting of 10,057 karyograms with 514,949 manually annotated chromosome singlets. Furthermore, 3,935 metaphase images are annotated at the instance level in COCO format. This dataset reflects real-world diversity, ranging from normal and abnormal karyotypes to different Giemsa (G-banding) staining intensities, structural abnormalities, and difficult overlapping cases, and thus presents a solid basis for the development and testing of deep learning models in automated cytogenetics.</p>
<p>Building on this work, we develop two primary methodologies aimed at the fundamental tasks of karyotyping. For chromosome segmentation, we introduce a variant Mask R-CNN model incorporating an Attention-based Feature Pyramid Network (AttFPN), spatial attention, and a LastLevelMaxPool component to improve multi-scale feature representation and contextual perception. It enhances performance in difficult situations, including overlapping chromosomes and weak banding patterns, and achieves considerable improvements in mean Average Precision (mAP) over standard baselines.</p>
<p>For chromosome classification, we present the Dual Attention Multiscale Pyramid Network (DAMP), a specifically designed model that combines channel and spatial attention mechanisms to concentrate on discriminative features, as well as a multiscale pyramid architecture to cope with variation in chromosome size, orientation, and quality. DAMP achieves a highest classification accuracy of 96.76% on both public and commercial datasets, performing better than state-of-the-art models like ResNet-50, Vision Transformers, and Siamese Networks.</p>
<p>Overall, this thesis provides interpretable and scalable deep learning models for automating chromosome classification and segmentation. By closing key gaps in dataset quality, model resilience, and clinical utility, the work facilitates the integration of intelligent decision-support systems into cytogenetic pipelines, ultimately leading to improved diagnostic reliability and efficiency in the face of chromosomal disorders.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=615311">Place Hold on <em>Automated Karyotyping: Segmentation and Classification /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=615311</guid>
     </item>
	 
   </channel>
</rss>