<?xml version='1.0' encoding='utf-8' ?>



<rss version="2.0"
      xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"
      xmlns:dc="http://purl.org/dc/elements/1.1/"
      xmlns:atom="http://www.w3.org/2005/Atom">
   <channel>
     <title><![CDATA[NUST Institutions Library Catalogue Search for 'an:&quot;119662&quot;']]></title>
     <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?q=ccl=an%3A%22119662%22&amp;format=rss</link>
     <atom:link rel="self" type="application/rss+xml" href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?q=ccl=an%3A%22119662%22&amp;sort_by=relevance_dsc&amp;format=rss"/>
     <description><![CDATA[ Search results for 'an:&quot;119662&quot;' at NUST Institutions Library Catalogue]]></description>
     <opensearch:totalResults>19</opensearch:totalResults>
     <opensearch:startIndex>0</opensearch:startIndex>
     
     <opensearch:itemsPerPage>50</opensearch:itemsPerPage>
     <atom:link rel="search" type="application/opensearchdescription+xml" href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?&amp;sort_by=&amp;format=opensearchdescription"/>
     <opensearch:Query role="request" searchTerms="an:&quot;119662&quot;" startPage="1" />
     <item>
       <title>Automated Segmentation and Classification of Lesion On Breast Ultrasound</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=524396</link>
        
       <description><![CDATA[
	   <p>By Zahra Fazal. Islamabad: SMME, NUST, 2014. 56 p.; 30 cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=524396">Place Hold on <em>Automated Segmentation and Classification of Lesion On Breast Ultrasound</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=524396</guid>
     </item>
     <item>
       <title>White Matter Multiple Sclerosis Lesion Segmentation Under Distributional Shifts /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607294</link>
        
       <description><![CDATA[
	   <p>By Haider, Ali. 37 p.; 30 cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607294">Place Hold on <em>White Matter Multiple Sclerosis Lesion Segmentation Under Distributional Shifts /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607294</guid>
     </item>
     <item>
       <title>Vehicle Detection and Tracking in Aerial Imagery /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607308</link>
        
       <description><![CDATA[
	   <p>By Tahir, Muhammad Abdullah. 42 p.; 30 cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607308">Place Hold on <em>Vehicle Detection and Tracking in Aerial Imagery /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607308</guid>
     </item>
     <item>
       <title>Deploying Efficient Net (BNs) for grading Diabetic Retinopathy severity levels from fundus images /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607466</link>
        
       <description><![CDATA[
	   <p>By Batool, Summiya. 40 p.; 30 cm.</p>
	   <p>Abstract: Diabetes mellitus is one of the most common and fastest-growing endocrine illnesses, and diabetic retinopathy (DR) is a common eye problem in patients with diabetes. DR, a retinal condition, is acknowledged as an epidemic on a global scale. One-third of the estimated 285 million people with diabetes show symptoms of DR, and one-third of those have DR that threatens their vision [1]. The figures are rising: by 2040, 288 million individuals are expected to have AMD, and by 2050 the number of people with DR is projected to triple. The need for reliable DR screening systems has recently become critical due to the increase in the number of diabetic patients. The severity of DR can be graded into five stages: normal, mild NPDR, moderate NPDR, severe NPDR, and PDR. Early diagnosis and treatment of DR can be achieved by organizing large, regular screening programs. Numerous convolutional neural network (CNN) models have been developed for diagnosing DR in fundus images using deep learning (DL) methods, one of which is computer-aided medical diagnosis; DL-based methods include restricted Boltzmann machines, CNNs, auto-encoders, and sparse coding. It remains challenging, however, to detect DR early, since it may not display signs in the initial classes, and current models may not identify all classes of DR. The most commonly used metrics, such as accuracy, F1-score, precision, and recall, do not consider the degree of difference among labels, which matters for detecting all classes of DR. In this work, EfficientNet-BN models were used, and evaluation was carried out with the F1-score, which is suitable for grading the various classes of DR based on severity levels. We achieved F1-scores of 0.88 and 0.84 using simple preprocessing, Gaussian smoothing filters, and an EfficientNet-BN network on the DeepDRiD and EYE-PACS datasets.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607466">Place Hold on <em>Deploying Efficient Net (BNs) for grading Diabetic Retinopathy severity levels from fundus images /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607466</guid>
     </item>
     <item>
       <title>Bone X-ray abnormality detection using MURA dataset /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607691</link>
        
       <description><![CDATA[
	   <p>By Batool, Sana. 40 p.; 30 cm.</p>
	   <p>Abstract: Musculoskeletal abnormalities, including bone fractures, account for most patient visits to hospital emergency departments. According to one estimate, more than 1.7 billion people are affected by musculoskeletal disorders each year. Bone X-rays are the first-line imaging modality for fractured bones; radiologists then report on the X-rays to detect fractures and pathologies. Classifying bone X-rays as normal or abnormal is time-consuming and subject to variability between radiologists, so automatic classifiers based on deep learning algorithms are now used in clinical diagnostics. MURA is a large, publicly available dataset released by the machine learning group of Stanford University, consisting of 40,895 multi-view images of the upper limb across seven regions: shoulder, humerus, elbow, forearm, wrist, hand, and fingers. In this study, we propose a single DenseNet-169 model trained on the complete dataset with multiple preprocessing and data augmentation steps, implemented in Keras with TensorFlow. The data was split 80:20 into training and validation sets, and the model was tested on the validation set. The proposed technique achieved 80% testing accuracy, validating the effectiveness of this method for bone fracture classification.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607691">Place Hold on <em>Bone X-ray abnormality detection using MURA dataset /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607691</guid>
     </item>
     <item>
       <title>Automated Brain Tumor Segmentation using Multimodal MRI Scans /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608387</link>
        
       <description><![CDATA[
	   <p>By Ehsan, Fatima. 73 p.; 30 cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=608387">Place Hold on <em>Automated Brain Tumor Segmentation using Multimodal MRI Scans /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608387</guid>
     </item>
     <item>
       <title>EYE MOVEMENT BEHAVIOUR VARIES WITH EMOTIONAL CUES /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608462</link>
        
       <description><![CDATA[
	   <p>By KHALID, USMAN. 69 p.; 33 cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=608462">Place Hold on <em>EYE MOVEMENT BEHAVIOUR VARIES WITH EMOTIONAL CUES /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608462</guid>
     </item>
     <item>
       <title>Melanoma Detection Using Machine Learning /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608636</link>
        
       <description><![CDATA[
	   <p>By Zafar, Kashan. 70 p.; 30 cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=608636">Place Hold on <em>Melanoma Detection Using Machine Learning /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608636</guid>
     </item>
     <item>
       <title>Multilabel Classification and Localization of Rare Pulmonary Diseases using Deep Learning /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608658</link>
        
       <description><![CDATA[
	   <p>By Khaliq, Fariha. 52 p.; 30 cm.</p>
	   <p>Abstract: Chest radiography is the most common radiological examination used for the diagnosis of thoracic diseases, and automated classification of radiological images is now widely used in clinical diagnosis. However, each pathology has its own characteristic receptive-field regions, which is a key problem in classifying chest diseases. In addition to extreme class imbalance, cases labelled as uncertain in the dataset further complicate the task. To solve this problem, we propose a semi-supervised learning approach called U-SelfTrained: uncertain labels are left unlabeled, the model is first trained on the labelled instances, and the unlabeled instances are then relabeled with the labels of highest predicted probability. Comprehensive experiments were carried out on the CheXpert dataset, which consists of 223,816 frontal- and lateral-view CXR images of 64,740 patients covering 14 diseases. The testing accuracy is 0.877 on the CheXpert dataset, which is currently the highest score achieved to date. This validates the effectiveness of the method for chest radiography classification. The practical significance of this study is the implementation of AI algorithms to assist radiologists in improving their diagnostic accuracy.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=608658">Place Hold on <em>Multilabel Classification and Localization of Rare Pulmonary Diseases using Deep Learning /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608658</guid>
     </item>
     <item>
       <title>Time-Frequency Analysis of Fatigue in Lower Limb Muscles Under External Super-Imposed Vibrations /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608787</link>
        
       <description><![CDATA[
	   <p>By Ashfaque, Moeez. 52 p.; 30 cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=608787">Place Hold on <em>Time-Frequency Analysis of Fatigue in Lower Limb Muscles Under External Super-Imposed Vibrations /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608787</guid>
     </item>
     <item>
       <title>Liver Segmentation from Computed Tomography (CT) Abdominal Images Data Using Deep Neural Networks /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608820</link>
        
       <description><![CDATA[
	   <p>By tuz Zahra Khan, Fatima. 60 p.; 30 cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=608820">Place Hold on <em>Liver Segmentation from Computed Tomography (CT) Abdominal Images Data Using Deep Neural Networks /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608820</guid>
     </item>
     <item>
       <title>Multi-Disease Classification For Retinal Diseases Using Deep Learning Technique</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608902</link>
        
       <description><![CDATA[
	   <p>By Khan, Omar Salman. 55 p.; 30 cm.</p>
	   <p>Abstract: Diagnosis before retinal diseases spread is vital to prevent severe blindness and visual impairment. Many retinal diseases can be found via fundus imaging, which plays an important role in the observation and detection of various ophthalmic diseases. Most previous work has focused on identifying individual diseases or combinations of three to four diseases, such as DR, MYA, ARMD, MH, and ODC. The eye is often affected by more than one underlying disease or disease marker, yet until now most datasets had very few classes. The recently introduced RFMiD dataset is one of the first to provide 45 different classes of ophthalmic diseases, making it possible to work towards automated multi-disease classification models that can support clinical decision support systems integrated into medical image diagnosis. Our work aimed to achieve higher accuracy than previous literature and to build a CDS application from the model for understanding and predicting multiple retinal diseases. Deep learning models have proven extremely effective at complex image processing problems, and ensemble learning yields high generalization performance by reducing variance. Therefore, a synthesis of transfer, ensemble, and deep learning was used to create an accurate and reliable model for multi-disease retinal classification. To create the Multi Retinal Disease Classification Model (MRDCM), we used an ensemble of EfficientNet-B4 and EfficientNetV2-S, with the final ensemble model giving promising results. In our evaluation we scored an AUC of 0.973, better than previously reported in the literature, and our model is also lighter than those used in prior work. The model was tested on the 27 main classes of the RFMiD dataset for comparison with the literature. Index terms: deep learning, ensemble learning, retinal image analysis, multi-disease classification, transfer learning.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=608902">Place Hold on <em>Multi-Disease Classification For Retinal Diseases Using Deep Learning Technique</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608902</guid>
     </item>
     <item>
       <title>Ocular Disease Intelligent Recognition /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608903</link>
        
       <description><![CDATA[
	   <p>By Sahar, Syeda Ghina. 46 p.; 30 cm.</p>
	   <p>Abstract: Fundus imaging has proved very efficient for recording anatomical details of the eye and its anomalies, and is the most effective way to see and diagnose a wide range of eye diseases. Conditions that affect the blood vessels and surrounding areas include diabetes-related retinopathy, glaucoma, AMD, myopia, cataract, and hypertension. A patient may have more than one ophthalmological problem, visible in one or both eyes. The dataset provided by ODIR is used in this study; the data has eight different categories of diseases to be detected. Using transfer learning, two simultaneous models are described for solving the multi-label problem for both eyes (left and right). For the convolutional network, two synchronous EfficientNet models are implemented with Adam optimizers for better detection results. On the ODIR dataset, EfficientNet-B7 with focal loss outperformed the other approaches with an accuracy of 0.96.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=608903">Place Hold on <em>Ocular Disease Intelligent Recognition /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608903</guid>
     </item>
     <item>
       <title>Kidney and Kidney Tumor Segmentation, 2019 (KiTS-19) /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608904</link>
        
       <description><![CDATA[
	   <p>By Abbasi, Ramsha. 58 p.; 30 cm.</p>
	   <p>Abstract: Computed tomography (CT) is the most widely used imaging procedure for locating and diagnosing kidney tumors, whose standard treatment is surgical removal. Accurate segmentation of the kidney and its tumor is important for effective surgical planning, but manual segmentation is time-consuming and subject to variability between radiologists. Automatic semantic segmentation of kidney tumors using deep learning networks has therefore become increasingly popular in recent years, although it remains very challenging due to the tumors' morphological heterogeneity. This work applies 3D UNet and 3D SegResNet to the KiTS19 challenge data for accurate segmentation of the kidney and kidney tumors, with a final ensembling step that averages the predictions of all models. The proposed method is based on the MONAI framework and focuses on the training procedure rather than complex architectural modifications. The models were trained on the KiTS19 training set of 210 cases with ground truth labels, divided 190:20 into training and validation sets. On the official KiTS19 test set our network obtained a mean Dice of 0.8964, a kidney Dice of 0.9724, and a tumor Dice of 0.8204, outperforming many submissions on kidney segmentation and giving promising results for tumor segmentation. We also used a local test set of 90 cases from the KiTS21 challenge to check how well our method adapts to a new dataset; it scored a mean Dice of 0.9160, a kidney Dice of 0.9771, and a tumor Dice of 0.8550. The results on both the official KiTS19 test set and the local test set show that our approach is effective and can be used for organ segmentation.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=608904">Place Hold on <em>Kidney and Kidney Tumor Segmentation, 2019 (KiTS-19) /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608904</guid>
     </item>
     <item>
       <title>3D Neural Network for Detection of ACL Injury in Knee MRI Scans /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608905</link>
        
       <description><![CDATA[
	   <p>By Kamran, Abdullah. 43 p.; 30 cm.</p>
	   <p>Abstract: Computer-aided diagnosis is widely used in medical imaging for many diseases, such as cardiomegaly, brain and kidney tumors, lung cancer, COVID-19, and many more, and it has improved significantly over the past few decades thanks to better architectures. Knee injury diagnosis using deep learning techniques is highly popular due to its high detection rate and localization. Many state-of-the-art deep learning models, including ResNet, GoogLeNet, VGG19, VGG16, and AlexNet, have been used to detect abnormalities, meniscus tears, and ACL tears in knee MRI scans, all giving significant results. In this study we used a lightweight custom 3D CNN model. For training we used two datasets, one provided by the Stanford ML Group and the other from a hospital in Croatia. We combined the two datasets and split them in an 80:20 ratio (80% for training, the remainder for testing). Both datasets have extreme class imbalance, so we used data augmentation and class weights to counter its effect on training. Furthermore, the voxel intensities of the two datasets differed (one was in 8-bit format, the other in 12-bit), so we normalized the intensity values, and for contrast we performed adaptive histogram equalization. The average accuracy and AUC achieved by our model on the training set are 97.6 and 99.3, respectively, under 5-fold cross-validation.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=608905">Place Hold on <em>3D Neural Network for Detection of ACL Injury in Knee MRI Scans /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=608905</guid>
     </item>
     <item>
       <title>Facial Micro-Expression Detection under Variable Road Condition for Evaluation of Driver’s Performance /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609035</link>
        
       <description><![CDATA[
	   <p>By Shaikh, Aleema Moin. 52 p.; 30 cm.</p>
	   <p>Abstract: The emotional state of the driver has a direct impact on driving, so monitoring it is important for road safety. This challenging task can be achieved by analyzing micro-expressions, which are linked to a person's emotional state and emerge on the face even when the person is trying to conceal true emotions. Moreover, facial data is easier to acquire while driving than any other stress signal. This research focuses on identifying micro-expressions linked to a stressful emotional state in drivers. Physiological parameters such as heart rate, and a stress value based on heart rate variability, are also monitored, as they fluctuate readily with emotional changes in the body. The emotions considered are happiness, sadness, surprise, anger, fear, and disgust. To evaluate stress in drivers, the dominant emotion behind each detected micro-expression is found using open-source emotion detection code. The results show high F1 scores for the identified micro-expressions (1.00, 0.947, 0.933, and 0.85). These findings can help in face reading where stress detection is required and can contribute to better in-car systems for ensuring road safety and managing stress.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=609035">Place Hold on <em>Facial Micro-Expression Detection under Variable Road Condition for Evaluation of Driver’s Performance /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609035</guid>
     </item>
     <item>
       <title>Detection of COVID’19 through low resolution CT scan images using Deep Learning /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609134</link>
        
       <description><![CDATA[
	   <p>By Arif, Hajra. 51 p.; 30 cm.</p>
	   <p>Abstract: Coronavirus emerged as a deadly disease in 2019, killing almost 6.25 million people, with its variants still being discovered. Timely medical treatment requires accurate and rapid diagnosis. The main diagnostic test was the reverse transcription polymerase chain reaction (RT-PCR), but due to the limited availability of RT-PCR equipment at the time of the outbreak, alternative methods were used to mitigate the damage. One of these was computed tomography (CT), a non-invasive imaging approach. Utilizing this CT data, deep learning (DL) models were developed to expedite the diagnostic procedure. Due to privacy concerns, the original CT scans were not shared with the public, which hindered the research and development of accurate DL methods. Datasets were therefore built from secondary sources, either by extracting images from preprints or by saving images in formats other than DICOM, which produced low-resolution images. To address this issue, pre-processing techniques were applied to improve the results of the DL models: the pixel intensities in the images are normalized to lie within the range of actual CT values on the Hounsfield unit scale before being given as input to the model. Diagnostic performance was assessed by F1-score (84%), AUC (94%), and accuracy (81%), better than the performance achieved without pre-processing. This study shows that enhancing image quality through pre-processing can improve results when good-quality data is unavailable, so that accurate models can be built for detecting any disease at the time of an outbreak.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=609134">Place Hold on <em>Detection of COVID’19 through low resolution CT scan images using Deep Learning /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609134</guid>
     </item>
	 
     <atom:link rel="search" type="application/opensearchdescription+xml" href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?&amp;sort_by=&amp;format=opensearchdescription"/>
     <opensearch:Query role="request" searchTerms="" startPage="" />
     <item>
       <title>
    Deep Learning for Improved Myoelectric Control /






</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=610602</link>
        
       <description><![CDATA[









	   <p>By  Zia ur Rehman, Muhammad . 
	   
                        . 153p.
                        , Advancements in myoelectric interfaces have increased the use of myoelectrically controlled
robotic arms for partial-hand amputees compared to body-powered arms. Current clinical
approaches based on conventional (on/off and direct) control are limited to a few
degree-of-freedom (DoF) movements, which are better addressed with pattern recognition (PR)
based control schemes. The performance of any PR-based scheme relies heavily on an optimal
feature set. Although such schemes have proven very effective in short-term laboratory
recordings, they are limited by unsatisfactory robustness to non-stationarities (e.g. changes
in electrode positions and the skin-electrode interface). Moreover, electromyographic (EMG)
signals are stochastic in nature, and recent studies have shown that their classification
accuracies vary significantly over time. Hence, the key challenge is not short-term laboratory
conditions but daily use.
Thus, this work applies longitudinal approaches with deep learning, in comparison with
classical machine learning techniques, to myoelectric control, and explores the real potential of
both surface and intramuscular EMG in classifying different hand movements recorded over
multiple days. To the best of our knowledge, it also explores, for the first time, the feasibility
of using raw (bipolar) EMG as input to deep networks. The work comprises two
studies performed with different datasets.
In the first study, surface and intramuscular EMG data for eleven wrist movements were
recorded concurrently over six channels (each) from ten able-bodied and six amputee subjects
for seven consecutive days. The performance of stacked sparse autoencoders (SSAE), an emerging
deep learning technique, was evaluated against the state-of-the-art LDA using offline
classification error as the performance metric. The performance of surface and intramuscular
EMG was also compared over time. The results of the different analyses showed that SSAE
outperformed LDA. Although no significant difference was found between surface and
intramuscular EMG in the within-day analysis, surface EMG significantly outperformed
intramuscular EMG in the long-term assessment.
In the second study, surface EMG data from seven able-bodied subjects were recorded over eight channels
using the Myo armband (wearable EMG sensors). The protocol was set such that each subject
performed seven movements with ten repetitions per session. Data were recorded for
fifteen consecutive days with two sessions per day. The performance of a convolutional neural
network (CNN, with raw EMG), SSAE (with both raw data and features), and LDA was
evaluated offline using classification error as the performance metric. The results of both the short-
and long-term analyses showed that CNN and SSAE-f outperformed the others, while no
difference was found between the two.
Overall, this dissertation concludes that deep learning techniques are promising approaches for
improving myoelectric control schemes. SSAE generalizes well with hand-crafted features but
fails to generalize with raw data. The CNN-based approach is more promising, as it achieved optimal
performance without the need to select features.
                         30cm. 
                        
       </p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=610602">Place Hold on <em>Deep Learning for Improved Myoelectric Control /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=610602</guid>
     </item>
	 
     <atom:link rel="search" type="application/opensearchdescription+xml" href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?&amp;sort_by=&amp;format=opensearchdescription"/>
     <opensearch:Query role="request" searchTerms="" startPage="" />
     <item>
       <title>
    DETECTION OF THYROID DISEASES USING MACHINE LEARNING TECHNIQUES /






</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=610719</link>
        
       <description><![CDATA[









	   <p>By Akhtar, Tehseen. 
	   
                        . 137p.
                        , Background: Unusual growth of glandular tissue at the boundary of the thyroid gland
is an indication of thyroid disease. Thyroid disease is characterised by an unusually high or
low level of hormones produced by the thyroid gland; the two most prevalent kinds are
hypothyroidism (underactive thyroid gland) and hyperthyroidism (overactive thyroid gland).
The main aim of this project was to introduce the concept of an efficient multi-stage ensemble,
i.e., a voting ensemble of homogeneous ensembles, which can be used with a variety of
feature-selection algorithms to improve the diagnosis of thyroid diseases. The dataset
used in this study was built from real-time thyroid data obtained from the teaching hospital
at District Headquarters (DHQ), DG Khan, Pakistan. Following appropriate pre-processing
steps, three kinds of attribute-selection strategies were used: the first was
Select From Model (SFM), the second was Select K-Best (SKB), and the third was
Recursive Feature Elimination (RFE). Select From Model (SFM) is an attribute-selection
strategy that uses a model to select attributes. As feature estimators, the Decision Tree (DT),
Logistic Regression (LR), Gradient Boosting (GB), and Random Forest (RF) classifiers
were employed. The homogeneous ensemble comprised bagging- and boosting-based
learners, whose predictions were then combined by a voting ensemble employing both soft
and hard voting to categorise the data. Additional performance assessment criteria, such as
Hamming loss, accuracy, mean squared error, and sensitivity, were also applied. The
experimental results reveal that the suggested approach is effective for improved thyroid
disease detection. On dataset 1, all of the algorithms tested achieved 100% accuracy with a
subset of the total number of features in each case, while on dataset 2 more than 98%
accuracy was reached in every case. In terms of accuracy and computational cost, the
results given here exceeded equivalent benchmark models in their respective fields of study.
                         30cm. 
                        
       </p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=610719">Place Hold on <em>DETECTION OF THYROID DISEASES USING MACHINE LEARNING TECHNIQUES /</em></a></p>

						]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=610719</guid>
     </item>
	 
   </channel>
</rss>