<?xml version="1.0" encoding="UTF-8"?>
<record
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"
    xmlns="http://www.loc.gov/MARC21/slim">

  <leader>02310nam a22001457a 4500</leader>
  <datafield tag="082" ind1=" " ind2=" ">
    <subfield code="a">629.8</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="a">Iqbal, Anwar</subfield>
    <subfield code="9">131844</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Disease Detection in Cotton Field using Deep Learning through UAV Imagery /</subfield>
    <subfield code="c">Anwar Iqbal</subfield>
  </datafield>
  <datafield tag="264" ind1=" " ind2=" ">
    <subfield code="a">Islamabad :</subfield>
    <subfield code="b">SMME-NUST,</subfield>
    <subfield code="c">2025.</subfield>
  </datafield>
  <datafield tag="300" ind1=" " ind2=" ">
    <subfield code="a">99 p. :</subfield>
    <subfield code="b">soft copy ;</subfield>
    <subfield code="c">30 cm.</subfield>
  </datafield>
  <datafield tag="500" ind1=" " ind2=" ">
    <subfield code="a">Cotton sustains the textile economies of more than 80 countries, with an annual yield of 113 million bales, yet 30&#x2013;60% yield losses are still attributed to viral, bacterial and fungal diseases that remain undetected until manual field scouting. To address this critical issue, unmanned aerial vehicle (UAV) imagery combined with deep learning offers a scalable early-warning system, but existing models suffer from low early-stage accuracy, poor transferability and high computational cost. In this work, we propose Cotton-Vision, a fully convolutional single-stage detector that combines three lightweight architectural innovations: ghost bottleneck modules for a 20% parameter reduction, efficient channel attention (ECA) for adaptive feature recalibration, and cross-stage partial (CSP) fusion for enriched gradient flow. The network is trained on a local dataset of 2,700+ drone images collected from seven cotton fields in Punjab Province, Pakistan. Extensive augmentation (rotation, shear, exposure, saturation) yields 11,704 training samples. On a held-out test set, Cotton-Vision achieves a precision of 97.7%, a recall of 77.7%, a mAP@50 of 85.6%, and a mAP@50-95 of 76.6%. These results represent a 10&#x2013;20% improvement in precision, recall and mAP over leading models, including YOLOv5, YOLOv8, YOLOv11, YOLOv12, RT-DETR and RF-DTR. Through a comprehensive ablation study, we demonstrate that each of our proposed architectural enhancements contributes to the model's superior performance, yielding a 12&#x2013;16% improvement over the base model. Our findings confirm that this synergy of lightweight and attention-based modules provides a robust and computationally viable solution for early cotton disease detection.</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">MS Robotics and Intelligent Machine Engineering</subfield>
    <subfield code="9">131845</subfield>
  </datafield>
  <datafield tag="856" ind1=" " ind2=" ">
    <subfield code="u">http://10.250.8.41:8080/xmlui/handle/123456789/56219</subfield>
  </datafield>
  <datafield tag="942" ind1=" " ind2=" ">
    <subfield code="2">ddc</subfield>
    <subfield code="c">THE</subfield>
  </datafield>
  <datafield tag="999" ind1=" " ind2=" ">
    <subfield code="c">615333</subfield>
    <subfield code="d">615333</subfield>
  </datafield>
  <datafield tag="952" ind1=" " ind2=" ">
    <subfield code="0">0</subfield>
    <subfield code="1">0</subfield>
    <subfield code="4">0</subfield>
    <subfield code="7">0</subfield>
    <subfield code="a">SMME</subfield>
    <subfield code="b">SMME</subfield>
    <subfield code="c">EB</subfield>
    <subfield code="d">2025-11-12</subfield>
    <subfield code="l">0</subfield>
    <subfield code="o">629.8</subfield>
    <subfield code="p">SMME-TH-1194</subfield>
    <subfield code="r">2025-11-12</subfield>
    <subfield code="w">2025-11-12</subfield>
    <subfield code="y">THE</subfield>
  </datafield>
</record>
