<?xml version='1.0' encoding='utf-8' ?>
<rss version="2.0"
      xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"
      xmlns:dc="http://purl.org/dc/elements/1.1/"
      xmlns:atom="http://www.w3.org/2005/Atom">
   <channel>
     <title><![CDATA[NUST Institutions Library Catalogue Search for 'kw,wrdl: (su-rl:&quot;Intelligent control systems.&quot;)']]></title>
     <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?idx=kw&amp;q=%28su-rl%3A%22Intelligent%20control%20systems.%22%29&amp;format=rss</link>
     <atom:link rel="self" type="application/rss+xml" href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?idx=kw&amp;q=%28su-rl%3A%22Intelligent%20control%20systems.%22%29&amp;sort_by=relevance_dsc&amp;format=rss"/>
     <description><![CDATA[ Search results for 'kw,wrdl: (su-rl:&quot;Intelligent control systems.&quot;)' at NUST Institutions Library Catalogue]]></description>
     <opensearch:totalResults>4</opensearch:totalResults>
     <opensearch:startIndex>0</opensearch:startIndex>
     
       <opensearch:itemsPerPage>50</opensearch:itemsPerPage>
     
	 
     <atom:link rel="search" type="application/opensearchdescription+xml" href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?&amp;sort_by=&amp;format=opensearchdescription"/>
     <opensearch:Query role="request" searchTerms="" startPage="" />
     <item>
       <title>Traffic Signal Control using Reinforcement Learning /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607296</link>
        
       <description><![CDATA[
	   <p>By Umer Jamil, Qazi. 90p.; 30cm.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607296">Place Hold on <em>Traffic Signal Control using Reinforcement Learning /</em></a></p>
]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607296</guid>
     </item>
	 
     <item>
       <title>RL based Differential Drive Primitive Policy for Transfer Learning /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609005</link>
        
       <description><![CDATA[
	   <p>By Shahid, Mahrukh. 52p.; 30cm.</p>
	   <p>To ensure steady navigation for a robot, stable controls are the basic unit, and the selection of control values is highly environment dependent. Adding generalization to the system is the key to reusing control parameters, so that robots can adapt and perform with sophistication in environments about which they have no prior knowledge; for this, Reinforcement Learning (RL) based control systems are promising. However, tuning appropriate parameters to train an RL algorithm is a challenge. We therefore designed a continuous reward function that minimizes reward sparsity and stabilizes policy convergence, in order to attain control generalization for a differential drive robot. We implemented Twin Delayed Deep Deterministic Policy Gradient (TD3) on the OpenAI Gym Race Car. The system was trained to learn a smart primitive control policy: moving forward toward the goal while maintaining an appropriate distance from walls to avoid collisions. The resulting policy was tested on unseen environments and performed precisely. In a comparative analysis, the TD3 policy outperformed the DDPG policy in both the training and testing phases, proving TD3 to be resource efficient and stable.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=609005">Place Hold on <em>RL based Differential Drive Primitive Policy for Transfer Learning /</em></a></p>
]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609005</guid>
     </item>
	 
     <item>
       <title>Enhanced Drone Control Using Reinforcement Learning /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609185</link>
        
       <description><![CDATA[
	   <p>By Moin, Hassan. 101p.; 30cm.</p>
	   <p>Quadcopters have already proven their effectiveness in both civilian and military applications. Their control, however, is a difficult task due to their under-actuated, highly nonlinear, and coupled dynamics. Most quadcopter autopilot systems utilize cascaded control schemes, where the outer loop handles mission-level objectives in 3D Euclidean space and the inner loop is responsible for stability and control. Such complex systems are generally operated using PID controllers, which have demonstrated exceptional performance in multiple scenarios, such as obstacle avoidance, trajectory tracking, and path planning. However, tuning their gains for nonlinear systems using heuristic or rule-based methods is a tedious, time-consuming, and difficult task. Rapid advances in the field of computational engineering, on the other hand, have paved the way for intelligent flight control systems, which have become an important area of study addressing the limits of PID control, most recently through the application of reinforcement learning (RL). In this dissertation, an optimal gain auto-tuning strategy is implemented for the altitude, attitude, and position controllers of a 6-DoF nonlinear drone system using a deep actor-critic RL algorithm with continuous observation and action spaces. The state equations are derived using Lagrange's (energy-based) method, while the drone's aerodynamic coefficients are estimated numerically using blade element momentum theory. Furthermore, the asymptotic stability of the cascaded closed-loop system is studied using Lyapunov theory. Finally, the proposed strategy is validated by simulation results, where the gains learned by the RL agents allow the quadcopter to track a given trajectory accurately. Moreover, these optimal gains satisfy the conditions obtained through Lyapunov stability analysis, indicating that the RL algorithm is an extremely powerful tool that can assess uncertainties existing within any complex nonlinear system.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=609185">Place Hold on <em>Enhanced Drone Control Using Reinforcement Learning /</em></a></p>
]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609185</guid>
     </item>
	 
     <item>
       <title>Control of Flywheel Inverted Pendulum Using Reinforcement Learning /</title>
       <dc:identifier>ISBN:</dc:identifier>
        
        <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=614608</link>
        
       <description><![CDATA[
	   <p>By Ahmad, Shakeel. 67p.; 30cm.</p>
	   <p>Balancing an inverted pendulum is a classic control problem that traditionally requires precise system modeling for effective controller design. Reinforcement Learning (RL) offers a model-free alternative but requires extensive training, which is impractical and risky when performed directly on physical hardware. Existing methods typically rely on simulation environments built on accurate models, which are often difficult to obtain. In this work, we use RL to balance a flywheel inverted pendulum by constructing an approximate model of the system through parameter estimation. Despite its inaccuracies, the model proved sufficient for training RL agents in simulation. We developed a simulation environment based on the estimated model and trained agents using the Deep Q-Network (DQN), Proximal Policy Optimization (PPO), and Discrete Soft Actor-Critic (SAC) algorithms. The trained policies were deployed on real hardware without any additional fine-tuning. All agents achieved successful swing-up and stabilization, with SAC achieving the fastest swing-up time (1.65 s) and the lowest steady-state error (0.0220 rad), demonstrating that RL can tolerate model imperfections and still perform effectively on real systems.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=614608">Place Hold on <em>Control of Flywheel Inverted Pendulum Using Reinforcement Learning /</em></a></p>
]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=614608</guid>
     </item>
	 
   </channel>
</rss>