<?xml version='1.0' encoding='utf-8' ?>
<rss version="2.0"
      xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"
      xmlns:dc="http://purl.org/dc/elements/1.1/"
      xmlns:atom="http://www.w3.org/2005/Atom">
   <channel>
     <title><![CDATA[NUST Institutions Library Catalogue Search for 'kw,wrdl: (su-rl:&quot;Simulation and Modeling.&quot;)']]></title>
     <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?idx=kw&amp;q=%28su-rl%3A%22Simulation%20and%20Modeling.%22%29&amp;format=rss</link>
     <atom:link rel="self" type="application/rss+xml" href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?idx=kw&amp;q=%28su-rl%3A%22Simulation%20and%20Modeling.%22%29&amp;sort_by=relevance_dsc&amp;format=atom"/>
     <description><![CDATA[ Search results for 'kw,wrdl: (su-rl:&quot;Simulation and Modeling.&quot;)' at NUST Institutions Library Catalogue]]></description>
     <opensearch:totalResults>4</opensearch:totalResults>
     <opensearch:startIndex>0</opensearch:startIndex>
     <opensearch:itemsPerPage>50</opensearch:itemsPerPage>
     <atom:link rel="search" type="application/opensearchdescription+xml" href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-search.pl?&amp;sort_by=&amp;format=opensearchdescription"/>
     <opensearch:Query role="request" searchTerms="" startPage="" />
     <item>
<title>Fundamentals of logic design /</title>
       <dc:identifier>ISBN:9781133628477 | 1133628478</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=526972</link>
       <description><![CDATA[
	   <p>By Roth, Charles H. xxiii, 791 pages : 24 cm +. ISBN 9781133628477 | 1133628478</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=526972">Place Hold on <em>Fundamentals of logic design /</em></a></p>
]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=526972</guid>
     </item>
	 
     <item>
<title>The finite element method for solid and structural mechanics /</title>
       <dc:identifier>ISBN:9781856176347 (hbk.) | 1856176347 (hbk.)</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=594487</link>
       <description><![CDATA[
	   <p>By Zienkiewicz, O. C. xxxi, 624 pages : 25 cm. ISBN 9781856176347 (hbk.) | 1856176347 (hbk.)</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=594487">Place Hold on <em>The finite element method for solid and structural mechanics /</em></a></p>
]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=594487</guid>
     </item>
	 
     <item>
<title>Control of Flywheel Inverted Pendulum Using Reinforcement Learning /</title>
       <dc:identifier>ISBN:</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=614608</link>
       <description><![CDATA[
	   <p>By Ahmad, Shakeel. 67 p. ; 30 cm.</p>
	   <p>Balancing an inverted pendulum is a classic control problem that traditionally requires precise system modeling for effective controller design. Reinforcement Learning (RL) offers a model-free alternative but requires extensive training, which is impractical and risky when performed directly on physical hardware. Existing methods typically rely on simulation environments built on accurate models, which are often difficult to obtain. In this work, we use RL to balance a flywheel inverted pendulum by constructing an approximate model of the system through parameter estimation. Despite its inaccuracies, the model proved sufficient for training RL agents in simulation. We developed a simulation environment based on the estimated model and trained agents using Deep Q-Network (DQN), Proximal Policy Optimization (PPO), and Discrete Soft Actor-Critic (SAC) algorithms. The trained policies were deployed on real hardware without any additional fine-tuning. All agents achieved successful swing-up and stabilization, with SAC achieving the fastest swing-up time (1.65 s) and lowest steady-state error (0.0220 rad), demonstrating that RL can tolerate model imperfections and still perform effectively on real systems.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=614608">Place Hold on <em>Control of Flywheel Inverted Pendulum Using Reinforcement Learning /</em></a></p>
]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=614608</guid>
     </item>
	 
     <item>
<title>Addressing the Problem of Sloshing in a Liquid Carrying Mobile Robot through Artificial Intelligence /</title>
       <dc:identifier>ISBN:</dc:identifier>
       <link>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=614843</link>
       <description><![CDATA[
	   <p>By Khan, Khubaib Haider. 90 p. ; 30 cm.</p>
	   <p>Liquid sloshing presents a critical challenge for mobile robots tasked with transporting partially filled containers, as it destabilizes motion, reduces accuracy, and increases the risk of spillage. Traditional passive and active control methods, such as baffles, PID, and LQR, are limited in adaptability and computational efficiency when faced with nonlinear, time-varying fluid–structure interactions. This thesis addresses these challenges by formulating slosh suppression as a reinforcement learning problem, leveraging the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm integrated with a surrogate ball-in-container analogy in the Webots simulator. The surrogate transforms complex liquid dynamics into a tractable cart-pole-inspired system, enabling efficient training of robust control policies without reliance on high-fidelity but computationally expensive CFD models.</p>
	   <p>A custom simulation–learning pipeline was developed, coupling Webots with a PyTorch-based TD3 agent via JSON-based communication. State and action spaces were defined using the ball's displacement ratio and robot velocity, while a multi-component reward function balanced slosh minimization, forward progress, and smooth, energy-efficient motion. Training results demonstrated that the TD3 agent learned to achieve stable, slosh-free navigation over a 10 m trajectory, outperforming traditional controllers and earlier RL variants in stability and adaptability.</p>
	   <p>Results show that the model learns well after 600 episodes, attaining an optimal velocity of 1.25 – 0.5 m/s while keeping the ball within a 2 mm limit, maximizing the reward to around 1180 at 1700 episodes, and eventually reducing sloshing by 65% during the training phase. During the test phase, the robot was tested at velocities from 0 to 4 m/s under control and no-control scenarios, showing that the RL algorithm suppresses ball displacement (liquid slosh) by approximately 45%. The study validates reinforcement learning as a viable paradigm for real-time liquid slosh suppression in robotics, offering superior robustness and scalability. Contributions include an AI-driven control framework for slosh suppression, integration of surrogate modeling with DRL for real-time feasibility, and a reward design framework encoding domain-specific stability objectives. Future work is recommended to extend the approach to real liquid models and hardware validation, adopt symmetric action spaces and advanced RL methods, and integrate slosh control with autonomous navigation in dynamic environments.</p>

<p><a href="http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-reserve.pl?biblionumber=614843">Place Hold on <em>Addressing the Problem of Sloshing in a Liquid Carrying Mobile Robot through Artificial Intelligence /</em></a></p>
]]></description>
       <guid>http://catalogue.nust.edu.pk:8081/cgi-bin/koha/opac-detail.pl?biblionumber=614843</guid>
     </item>
	 
   </channel>
</rss>