<?xml version='1.0' encoding='utf-8' ?>
<rss version="2.0"
      xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"
      xmlns:dc="http://purl.org/dc/elements/1.1/"
      xmlns:atom="http://www.w3.org/2005/Atom">
   <channel>
     <title><![CDATA[NUST Institutions Library Catalogue Search for 'kw,wrdl: (su-rl:&quot;Machine Learning&quot;)']]></title>
     <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?idx=kw&amp;q=%28su-rl%3A%22Machine%20Learning%22%29&amp;format=rss</link>
     <atom:link rel="self" type="application/rss+xml" href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?idx=kw&amp;q=%28su-rl%3A%22Machine%20Learning%22%29&amp;sort_by=relevance_dsc&amp;format=rss"/>
     <description><![CDATA[ Search results for 'kw,wrdl: (su-rl:&quot;Machine Learning&quot;)' at NUST Institutions Library Catalogue]]></description>
     <opensearch:totalResults>16</opensearch:totalResults>
     <opensearch:startIndex>0</opensearch:startIndex>
     <opensearch:itemsPerPage>50</opensearch:itemsPerPage>
     <atom:link rel="search" type="application/opensearchdescription+xml" href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?&amp;sort_by=&amp;format=opensearchdescription"/>
     <opensearch:Query role="request" searchTerms="(su-rl:&quot;Machine Learning&quot;)" startPage="1" />
     <item>
       <title>Deep learning /</title>
       <dc:identifier>ISBN:9780262035613 (hardcover : alk. paper) | 0262035618 (hardcover : alk. paper)</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=329664</link>
       <description><![CDATA[
       <p>By Goodfellow, Ian. xxii, 775 pages ; 24 cm. ISBN 9780262035613 (hardcover : alk. paper) | 0262035618 (hardcover : alk. paper)</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=329664">Place Hold on <em>Deep learning /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=329664</guid>
     </item>
	 
     <item>
       <title>Fundamentals of logic design /</title>
       <dc:identifier>ISBN:9781133628477 | 1133628478</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=526972</link>
       <description><![CDATA[
       <p>By Roth, Charles H. xxiii, 791 pages ; 24 cm +. ISBN 9781133628477 | 1133628478</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=526972">Place Hold on <em>Fundamentals of logic design /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=526972</guid>
     </item>
	 
     <item>
       <title>Machine learning for dummies /</title>
       <dc:identifier>ISBN:9781119245513 (pbk.) | 1119245516</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=534812</link>
       <description><![CDATA[
       <p>By Mueller, John. xii, 410 pages ; 24 cm. &quot;Learning made easy&quot;--Cover. | Includes index. ISBN 9781119245513 (pbk.) | 1119245516</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=534812">Place Hold on <em>Machine learning for dummies /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=534812</guid>
     </item>
	 
     <item>
       <title>Probabilistic machine learning for civil engineers /</title>
       <dc:identifier>ISBN:9780262538701</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=588648</link>
       <description><![CDATA[
       <p>By Goulet, James-A. xxviii, 269 pages ; 26 cm. ISBN 9780262538701</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=588648">Place Hold on <em>Probabilistic machine learning for civil engineers /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=588648</guid>
     </item>
	 
     <item>
       <title>Artificial intelligence safety and security /</title>
       <dc:identifier>ISBN:9780815369820 (paperback : acidfree paper) | 9781138320840 (hardback : acidfree paper)</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=591876</link>
       <description><![CDATA[
       <p>xxix, 443 pages ; 25 cm. ISBN 9780815369820 (paperback : acidfree paper) | 9781138320840 (hardback : acidfree paper)</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=591876">Place Hold on <em>Artificial intelligence safety and security /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=591876</guid>
     </item>
	 
     <item>
       <title>Data-driven approaches for health care : machine learning for identifying high utilizers /</title>
       <dc:identifier>ISBN:9780367342906 | 0367342901</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=594800</link>
       <description><![CDATA[
       <p>By Yang, Chengliang. ix, 107 pages ; 26 cm. &quot;A Chapman &amp; Hall book.&quot; Contents: Introduction. Overview of Healthcare Data. Machine Learning Modeling from Healthcare Data. Descriptive Analysis of High Utilizers. Residuals Analysis for Identifying High Utilizers. Machine Learning Results for High Utilizers. ISBN 9780367342906 | 0367342901</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=594800">Place Hold on <em>Data-driven approaches for health care : machine learning for identifying high utilizers /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=594800</guid>
     </item>
	 
     <item>
       <title>Traffic Signal Control using Reinforcement Learning /</title>
       <dc:identifier>ISBN:</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607296</link>
       <description><![CDATA[
       <p>By Umer Jamil, Qazi. 90p. ; 30cm.</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607296">Place Hold on <em>Traffic Signal Control using Reinforcement Learning /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607296</guid>
     </item>
	 
     <item>
       <title>An Unreal Engine Based Human Robot Interaction Framework /</title>
       <dc:identifier>ISBN:</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607826</link>
       <description><![CDATA[
       <p>By Asif, Muhammad Hassaan. 64p. ; 30cm.</p>
       <p>Existing frameworks lack support for testing Human Robot Interaction (HRI) research, which in turn often has to be tested practically, making it time-consuming and expensive. To overcome this issue, an HRI framework based on Unreal Engine is proposed, consisting of a virtual Nao or Pepper robot along with virtual humans with verbal and non-verbal behaviours in an environment. Machine Learning (ML) algorithms, along with the behaviour of the virtual robot in response to interaction with the virtual humans and the environment, can be programmed using a Python API that communicates with Unreal Engine C++ in real time. Several experiments related to multiple aspects of HRI, namely (1) Verbal Interaction, (2) Non-Verbal Interaction, and (3) Emotional Interaction, were conducted in both virtual and real-world environments, and the results were compared to validate the feasibility of the framework. A Reinforcement Learning (RL) algorithm was also tested to further indicate the usefulness of the framework. Through the use of a Virtual Reality (VR) headset, a human can be immersed in the framework to interact with the robot in real time.</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=607826">Place Hold on <em>An Unreal Engine Based Human Robot Interaction Framework /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=607826</guid>
     </item>
	 
     <item>
       <title>RL based Differential Drive Primitive Policy for Transfer Learning /</title>
       <dc:identifier>ISBN:</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609005</link>
       <description><![CDATA[
       <p>By Shahid, Mahrukh. 52p. ; 30cm.</p>
       <p>To ensure steady navigation for a robot, stable controls are the basic unit, and the selection of control values is highly environment-dependent. Adding generalization to the system is key to the reusability of control parameters, ensuring that robots can adapt and perform with sophistication in environments about which they have no prior knowledge; for this, Reinforcement Learning (RL) based control systems are promising. However, tuning appropriate parameters to train an RL algorithm is a challenge. We therefore designed a continuous reward function that minimizes sparsity and stabilizes policy convergence, to attain control generalization for a differential drive robot. We implemented Twin Delayed Deep Deterministic Policy Gradient (TD3) on the OpenAI Gym Race Car. The system was trained to achieve a smart primitive control policy: moving forward in the direction of the goal while maintaining an appropriate distance from walls to avoid collisions. The resulting policy was tested on unseen environments and performed precisely. In a comparative analysis of TD3 with DDPG, the TD3 policy outperformed the DDPG policy in both the training and testing phases, proving TD3 to be resource-efficient and stable.</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=609005">Place Hold on <em>RL based Differential Drive Primitive Policy for Transfer Learning /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609005</guid>
     </item>
	 
     <item>
       <title>Enhanced Drone Control Using Reinforcement Learning /</title>
       <dc:identifier>ISBN:</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609185</link>
       <description><![CDATA[
       <p>By Moin, Hassan. 101p. ; 30cm.</p>
       <p>Quadcopters have already proven their effectiveness in both civilian and military applications. Their control, however, is a difficult task due to their under-actuated, highly nonlinear, and coupled dynamics. Most quadcopter autopilot systems utilize cascaded control schemes, where the outer loop handles mission-level objectives in 3D Euclidean space and the inner loop is responsible for stability and control. Such complex systems are generally operated using PID controllers, which have demonstrated exceptional performance in multiple scenarios, such as obstacle avoidance, trajectory tracking, and path planning. However, tuning their gains for nonlinear systems using heuristics or rule-based methods is a tedious, time-consuming, and difficult task. Rapid advances in the field of computational engineering, on the other hand, have paved the way for intelligent flight control systems, which have become an important area of study addressing the limits of PID control, most recently through the application of reinforcement learning (RL). In this dissertation, an optimal gain auto-tuning strategy is implemented for the altitude, attitude, and position controllers of a 6 DoF nonlinear drone system using a deep actor-critic RL algorithm having continuous observation and action spaces. The state equations are derived using Lagrange's (energy-based) method, while the drone's aerodynamic coefficients are estimated numerically using blade element momentum theory. Furthermore, the cascaded closed-loop system's asymptotic stability is studied using the theory of Lyapunov. Finally, the proposed strategy is validated by simulation results, where the gains learned by RL agents allow the quadcopter to track a given trajectory accurately. Moreover, these optimal gains satisfy the conditions obtained through Lyapunov's stability analysis, indicating that the RL algorithm is an extremely powerful tool which can assess uncertainties existing within any complex nonlinear system.</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=609185">Place Hold on <em>Enhanced Drone Control Using Reinforcement Learning /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609185</guid>
     </item>
	 
     <item>
       <title>Reinforcement learning : an introduction</title>
       <dc:identifier>ISBN:9780262039246 (hardcover : alk. paper)</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609712</link>
       <description><![CDATA[
       <p>By Sutton, Richard S. xxii, 526 pages ; 24 cm. ISBN 9780262039246 (hardcover : alk. paper)</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=609712">Place Hold on <em>Reinforcement learning : an introduction</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609712</guid>
     </item>
	 
     <item>
       <title>Python data science handbook : essential tools for working with data /</title>
       <dc:identifier>ISBN:9781098121228 | 1098121228</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609722</link>
       <description><![CDATA[
       <p>By Vanderplas, Jacob T. Sebastopol, CA : O'Reilly Media, 2022. xxiv, 563 pages ; 24 cm. Previous edition: 2016. ISBN 9781098121228 | 1098121228</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=609722">Place Hold on <em>Python data science handbook : essential tools for working with data /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609722</guid>
     </item>
	 
     <item>
       <title>Dive into deep learning /</title>
       <dc:identifier>ISBN:9781009389433</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609727</link>
       <description><![CDATA[
       <p>By Zhang, Aston. New York, NY : Cambridge University Press, 2023. 548p. ISBN 9781009389433</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=609727">Place Hold on <em>Dive into deep learning /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=609727</guid>
     </item>
	 
     <item>
       <title>Gait Generation for a Quadrupedal Robot / Zainullah Khan</title>
       <dc:identifier>ISBN:</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=612422</link>
       <description><![CDATA[
       <p>By Khan, Zainullah. 89p. ; 30cm.</p>
       <p>Quadrupedal robots have gained significant research interest due to their ability to achieve agile and stable locomotion over complex terrains. Such locomotion can be achieved by combining various gaits; however, simply changing robot gaits does not guarantee robust and stable behavior. To ensure stable robot locomotion, gaits must be seamlessly blended. Current methods of gait transition include model-based approaches, mainly Model Predictive Control (MPC), which are limited by the use of hand-engineered gaits; Reinforcement Learning (RL)-based methods, which address these limitations but require extensive training; and hybrid methods that combine multiple controllers but still experience abrupt gait timing changes. This thesis introduces a novel RL-MPC hybrid control framework that addresses the controllers' shortcomings in the current literature. The proposed controller incorporates a feature extractor module that extracts features from the robot terrain and state. The novel framework also introduces a gait timing correction step to smooth out gait transitions. The proposed framework was tested on a randomly generated rough terrain, where the robot efficiently traversed and transitioned between gaits while maintaining accurate command velocity. Testing the effectiveness of the contact timing correction step revealed that the locomotion produced by the controller without contact timing correction was jerky and unstable on rough terrain. The proposed framework also outperforms a state-of-the-art method in gait transitioning, resulting in smoother and more stable locomotion.</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=612422">Place Hold on <em>Gait Generation for a Quadrupedal Robot /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=612422</guid>
     </item>
	 
     <item>
       <title>Control of Flywheel Inverted Pendulum Using Reinforcement Learning /</title>
       <dc:identifier>ISBN:</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=614608</link>
       <description><![CDATA[
       <p>By Ahmad, Shakeel. 67p. ; 30cm.</p>
       <p>Balancing an inverted pendulum is a classic control problem that traditionally requires precise system modeling for effective controller design. Reinforcement Learning (RL) offers a model-free alternative but requires extensive training, which is impractical and risky when performed directly on physical hardware. Existing methods typically rely on simulation environments built on accurate models, which are often difficult to obtain. In this work, we use RL to balance a flywheel inverted pendulum by constructing an approximate model of the system through parameter estimation. Despite its inaccuracies, the model proved sufficient for training RL agents in simulation. We developed a simulation environment based on the estimated model and trained agents using Deep Q-Network (DQN), Proximal Policy Optimization (PPO), and Discrete Soft Actor-Critic (SAC) algorithms. The trained policies were deployed on real hardware without any additional fine-tuning. All agents achieved successful swing-up and stabilization, with SAC achieving the fastest swing-up time (1.65 s) and lowest steady-state error (0.0220 rad), demonstrating that RL can tolerate model imperfections and still perform effectively on real systems.</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=614608">Place Hold on <em>Control of Flywheel Inverted Pendulum Using Reinforcement Learning /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=614608</guid>
     </item>
	 
     <item>
       <title>Addressing the Problem of Sloshing in a Liquid Carrying Mobile Robot through Artificial Intelligence /</title>
       <dc:identifier>ISBN:</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=614843</link>
       <description><![CDATA[
       <p>By Khan, Khubaib Haider. 90p. ; 30cm.</p>
       <p>Liquid sloshing presents a critical challenge for mobile robots tasked with transporting partially filled containers, as it destabilizes motion, reduces accuracy, and increases the risk of spillage. Traditional passive and active control methods, such as baffles, PID, and LQR, are limited in adaptability and computational efficiency when faced with nonlinear, time-varying fluid–structure interactions. This thesis addresses these challenges by formulating slosh suppression as a reinforcement learning problem, leveraging the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm integrated with a surrogate ball-in-container analogy in the Webots simulator. The surrogate transforms complex liquid dynamics into a tractable cart-pole inspired system, enabling efficient training of robust control policies without reliance on high-fidelity but computationally expensive CFD models. A custom simulation–learning pipeline was developed, coupling Webots with a PyTorch-based TD3 agent via JSON-based communication. State and action spaces were defined using the ball's displacement ratio and robot velocity, while a multi-component reward function balanced slosh minimization, forward progress, and smooth, energy-efficient motion. Training results demonstrated that the TD3 agent learned to achieve stable, slosh-free navigation over a 10 m trajectory, outperforming traditional controllers and earlier RL variants in stability and adaptability. Results show that the model learns well after 600 episodes, attaining an optimal velocity of 1.25–0.5 m/s while keeping the ball within a 2 mm limit, maximizing the reward to around 1180 at 1700 episodes, and eventually reducing sloshing by 65% during the training phase. During the test phase, the robot was tested at velocities from 0 to 4 m/s in control and no-control scenarios, with results showing that the RL algorithm suppresses ball displacement (liquid slosh) by approximately 45%. The study validates reinforcement learning as a viable paradigm for real-time liquid slosh suppression in robotics, offering superior robustness and scalability. Contributions include an AI-driven control framework for slosh suppression, integration of surrogate modeling with DRL for real-time feasibility, and a reward design framework encoding domain-specific stability objectives. Future work should include extending to real liquid models and hardware validation, adopting symmetric action spaces and advanced RL methods, and integrating slosh control with autonomous navigation in dynamic environments.</p>
       <p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=614843">Place Hold on <em>Addressing the Problem of Sloshing in a Liquid Carrying Mobile Robot through Artificial Intelligence /</em></a></p>
       ]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=614843</guid>
     </item>
	 
   </channel>
</rss>