02687nam a22001697a 4500082000800000100002100008245009500029264003800124300002600162500206200188650003602250700004802286856005702334942001302391999001902404952009402423 a610 aHussain, Fatima  aHuman Action Recognition Using Computer Vision: A Deep Learning Approach /cFatima Hussain aIslamabad : bSMME- NUST; c2024. a90p.bSoft Copyc30cm aHuman action recognition (HAR) remains a compelling research topic because it enables the identification of activities from video sequences. Applications of human action recognition are numerous, including surveillance, sports analysis, suspicious-activity detection, and healthcare. Human activity recognition is hampered by low-resolution cameras, extreme weather, and similar colors of subject and background, as well as by intraclass similarity between activities such as walking and jogging. Currently available approaches, i.e., transformer-based models, expanded datasets, and improved temporal modelling techniques such as attention mechanisms and LSTMs, remove background noise in the final layers, but this reduces the accuracy of correctly identifying actions, and they address intraclass resemblance in human action classification only to some extent. These advancements improve the capabilities of action recognition systems, but completely resolving intraclass resemblance remains a challenging task. Therefore, there is a growing need for improved computer vision-based surveillance systems. A hybrid approach, "Human Action Recognition using Deep Learning and Hybrid Evolutionary Techniques", is proposed to address these issues. It consists of the following main steps: preprocessing (i.e., contrast enhancement and data augmentation), customized models based on a residual-block architecture, training of the Residual Block2 and Residual Block3 models, feature extraction and testing, feature fusion, feature selection using Binary Chimp Optimization, and classification. 
To enhance the interpretability, transparency, and trustworthiness of machine learning models, Grad-CAM and LIME are applied. Both techniques provide a visual display of the important regions in an image: Grad-CAM produces heatmaps, and LIME highlights regions on the original images. Our suggested methodology achieves state-of-the-art accuracy on the UT-Interaction action recognition dataset, with 94% accuracy. This emphasizes how well the proposed technique improves the classification of human actions. aMS Biomedical Sciences (BMS)  aSupervisor : Prof. Dr. Javaid Iqbal9119677 uhttp://10.250.8.41:8080/xmlui/handle/123456789/45274 2ddccTHE c610816d610816 00104070aSMMEbSMMEcEBd2024-08-08l0o610pSMME-TH-1040r2024-08-08w2024-08-08yTHE