| 000 -LEADER |
| fixed length control field |
02537nam a22001577a 4500 |
| 082 ## - DEWEY DECIMAL CLASSIFICATION NUMBER |
| Classification number |
629.8 |
| 100 ## - MAIN ENTRY--PERSONAL NAME |
| Personal name |
Moin, Hassan |
| 245 ## - TITLE STATEMENT |
| Title |
Enhanced Drone Control Using Reinforcement Learning / |
| Statement of responsibility, etc. |
Hassan Moin |
| 264 ## - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE |
| Place of production, publication, distribution, manufacture |
Islamabad : |
| Name of producer, publisher, distributor, manufacturer |
SMME-NUST,
| Date of production, publication, distribution, manufacture, or copyright notice |
2022. |
| 300 ## - PHYSICAL DESCRIPTION |
| Extent |
101 p.
| Other physical details |
Soft Copy |
| Dimensions |
30 cm
| 500 ## - GENERAL NOTE |
| General note |
Quadcopters have already proven their effectiveness in both civilian and military applications. Their control, however, is a difficult task due to their under-actuated, highly nonlinear, and coupled dynamics. Most quadcopter autopilot systems utilize cascaded control schemes, where the outer loop handles mission-level objectives in 3D Euclidean space and the inner loop is responsible for stability and control. Such complex systems are generally operated using PID controllers, which have demonstrated exceptional performance in multiple scenarios, such as obstacle avoidance, trajectory tracking, and path planning. However, tuning their gains for nonlinear systems using heuristics or rule-based methods is a tedious, time-consuming, and difficult task. Rapid advances in the field of computational engineering, on the other hand, have paved the way for intelligent flight control systems, which have become an important area of study addressing the limits of PID control, most recently through the application of reinforcement learning (RL). In this dissertation, an optimal gain auto-tuning strategy is implemented for the altitude, attitude, and position controllers of a 6-DoF nonlinear drone system using a deep actor-critic RL algorithm with continuous observation and action spaces. The state equations are derived using Lagrange's (energy-based) method, while the drone's aerodynamic coefficients are estimated numerically using blade element momentum theory. Furthermore, the cascaded closed-loop system's asymptotic stability is studied using Lyapunov theory. Finally, the proposed strategy is validated by simulation results, where the gains learned by the RL agents allow the quadcopter to track a given trajectory accurately.
Moreover, these optimal gains satisfy the conditions obtained through Lyapunov stability analysis, indicating that the RL algorithm is a powerful tool for handling the uncertainties present in complex nonlinear systems |
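The abstract describes learning PID gains by trial on a closed-loop plant. As a minimal sketch of that idea, the toy Python below tunes (kp, ki, kd) for a 1-D double-integrator "altitude" plant by accepting only gain perturbations that reduce the tracking cost. This is an illustrative stand-in, not the thesis's method: the real work uses a deep actor-critic agent on a full 6-DoF quadcopter model, and all names, dynamics, and parameters here are invented for the example.

```python
import numpy as np

def simulate(gains, target=1.0, dt=0.01, steps=500):
    """Toy 1-D altitude plant (double integrator) under PID control.
    Returns the accumulated squared tracking error (lower is better)."""
    kp, ki, kd = gains
    z, vz = 0.0, 0.0          # altitude and vertical velocity
    integ, prev_err = 0.0, target
    cost = 0.0
    for _ in range(steps):
        err = target - z
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv   # PID thrust command
        prev_err = err
        vz += u * dt          # toy dynamics: acceleration = control input
        z += vz * dt
        cost += err ** 2 * dt
    return cost

# Hill-climbing stand-in for the RL agent: propose random gain
# perturbations and keep only those that lower the tracking cost.
rng = np.random.default_rng(0)
gains = np.array([1.0, 0.0, 0.5])   # initial (kp, ki, kd) guess
best = simulate(gains)
for _ in range(200):
    cand = gains + rng.normal(scale=0.2, size=3)
    c = simulate(cand)
    if c < best:
        gains, best = cand, c
```

The actor-critic algorithm in the thesis replaces the blind perturbation step with a learned policy over a continuous action space, but the feedback signal is the same kind of closed-loop tracking cost.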
| 650 ## - SUBJECT ADDED ENTRY--TOPICAL TERM |
| Topical term or geographic name entry element |
MS Robotics and Intelligent Machine Engineering |
| 700 ## - ADDED ENTRY--PERSONAL NAME |
| Personal name |
Supervisor: Dr. Muhammad Jawad Khan
| 856 ## - ELECTRONIC LOCATION AND ACCESS |
| Uniform Resource Identifier |
<a href="http://10.250.8.41:8080/xmlui/handle/123456789/29934">http://10.250.8.41:8080/xmlui/handle/123456789/29934</a> |
| 942 ## - ADDED ENTRY ELEMENTS (KOHA) |
| Source of classification or shelving scheme |
|
| Koha item type |
Thesis |