Robot Control Simulations
Last updated: Jan 25, 2022
Source files are available in the GitHub repository.
This page summarizes the main ideas; detailed treatments are in the PDFs linked under the section titles.
Three different control methods have been formulated and simulated.
State Observer & Feedback Control
A linear state observer is designed, specifically a Luenberger observer of the form: $$ \begin{dcases} \dot{\hat{\mathbf{x}}}=\mathrm{A} \hat{\mathbf{x}}+\mathrm{B} \mathbf{u}+\mathrm{L}(\mathbf{y}-\hat{\mathbf{y}}) \\ \hat{\mathbf{y}}=\mathrm{C}\hat{\mathbf{x}} \end{dcases} $$ The observer gain matrix $\mathrm{L}$ is to be designed. The state matrix $\mathrm{A}$, input matrix $\mathrm{B}$, and output matrix $\mathrm{C}$ come from the LTI system $$ \begin{dcases} \dot{\mathbf{x}}=\mathrm{A} \mathbf{x}+\mathrm{B} \mathbf{u} \\ \mathbf{y}=\mathrm{C}\mathbf{x} \end{dcases} $$ The feedforward matrix $\mathrm{D}$ is assumed to be zero.
A feedback controller is designed to stabilize the system, while the Luenberger observer provides state estimates. The controlled system is simulated and successfully stabilized.
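As a minimal sketch of the scheme, the following simulates observer-based state feedback on a hypothetical double-integrator plant; the matrices, pole locations, and time step below are illustrative assumptions, not the values from the PDF. Both gains are computed by pole placement, with the observer poles placed faster than the controller poles as is conventional.

```python
# Luenberger observer with state-feedback control: a minimal sketch.
# The plant (A, B, C) is a hypothetical double integrator, not the
# system treated in the PDFs.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # state matrix
B = np.array([[0.0], [1.0]])             # input matrix
C = np.array([[1.0, 0.0]])               # output matrix (D assumed zero)

# Feedback gain K places the closed-loop poles; observer gain L places
# the estimation-error poles (faster, via the duality A.T, C.T).
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix
L = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T

dt, steps = 1e-3, 5000
x = np.array([[1.0], [0.0]])             # true state (unknown to controller)
xhat = np.zeros((2, 1))                  # observer estimate
for _ in range(steps):
    u = -K @ xhat                        # feedback acts on the estimate
    y = C @ x                            # only the output is measured
    x    = x    + dt * (A @ x + B @ u)
    xhat = xhat + dt * (A @ xhat + B @ u + L @ (y - C @ xhat))

print(np.linalg.norm(x), np.linalg.norm(x - xhat))  # both decay toward 0
```

The estimation error obeys $\dot{\tilde{\mathbf{x}}} = (\mathrm{A}-\mathrm{LC})\tilde{\mathbf{x}}$ regardless of $\mathbf{u}$, which is why the observer poles can be assigned independently of the controller poles.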
PD & Computed Torque Control
The dynamics of a robotic manipulator are modeled in the standard form, $$ \tau = \mathrm{M}(q)\ddot{q} + \mathrm{C}(q, \dot{q})\dot{q} + \mathrm{N}(q,\dot{q}) $$ with symmetric inertia matrix $\mathrm{M}(q)$, Coriolis and centrifugal matrix $\mathrm{C}(q,\dot{q})$, and gravity vector $\mathrm{N}(q,\dot{q})$.
A PD controller with dynamics feedforward is designed with the control input, $$ u(t)=\mathrm{M}(q) \ddot{q}_{d}+\mathrm{C}(q, \dot{q}) \dot{q}_{d}+ \mathrm{N}(q, \dot{q})-\mathrm{K}_{\mathrm{v}} \dot{e}-\mathrm{K}_{\mathrm{p}} e $$ where $e = q - q_{d}$ is the tracking error. A computed-torque controller extends this via feedback linearization, cancelling the nonlinear effects of the known robot dynamics. The control input is then, $$ u(t)=\mathrm{M}(q) \ddot{q}_{d}+\mathrm{C}(q, \dot{q}) \dot{q}+\mathrm{N}(q, \dot{q})-\mathrm{M}(q)\left(\mathrm{K}_{\mathrm{v}} \dot{e}+\mathrm{K}_{\mathrm{p}} e\right) $$ which yields the linear error dynamics $\ddot{e}+\mathrm{K}_{\mathrm{v}}\dot{e}+\mathrm{K}_{\mathrm{p}}e=0$. Both controlled systems are simulated and successfully stabilized.
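The computed-torque law can be sketched on a hypothetical single-link pendulum, where the inertia matrix reduces to a scalar and the Coriolis term vanishes; the mass, length, gains, and setpoint below are illustrative assumptions, not the manipulator from the PDF.

```python
# Computed-torque control of a hypothetical single-link pendulum,
# regulating the joint angle to q_d = pi/2. Parameters are illustrative.
import numpy as np

m, l, g = 1.0, 1.0, 9.81
M = m * l**2                                # inertia (scalar for one link)
N = lambda q: m * g * l * np.sin(q)         # gravity term; C(q, qdot) = 0 here

Kp, Kv = 25.0, 10.0                         # critically damped: (s + 5)^2
q_d, qdot_d, qddot_d = np.pi / 2, 0.0, 0.0  # constant setpoint

dt, steps = 1e-3, 5000
q, qdot = 0.0, 0.0
for _ in range(steps):
    e, edot = q - q_d, qdot - qdot_d
    # u = M*qddot_d + N(q) - M*(Kv*edot + Kp*e): feedback linearization
    u = M * qddot_d + N(q) - M * (Kv * edot + Kp * e)
    qddot = (u - N(q)) / M                  # plant: M*qddot + N(q) = tau
    q    += dt * qdot
    qdot += dt * qddot

print(q)  # approaches pi/2
```

Because the model is cancelled exactly in this sketch, the closed loop reduces to $\ddot{e} + \mathrm{K_v}\dot{e} + \mathrm{K_p}e = 0$; with model mismatch, the residual dynamics would perturb this linear behavior.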
Model–Reference Adaptive Control
A scalar system $\dot{x} = a x + b u + c$, to be regulated to the desired reference $r(t) = 0$, is paired with a stable reference model for the MRAC scheme.
The adaptive control law is: $u=\hat{k}_{x} x+\hat{k}_{r} r+\hat{\theta}^{\top} \phi(x)$
These parameter estimates have the following adjustment mechanism, $$ \begin{dcases} \dot{\hat{k}}_{x}=-\gamma_{x} x e \operatorname{sgn}(b) \\ \dot{\hat{k}}_{r}=-\gamma_{r} r e \operatorname{sgn}(b) \\ \dot{\hat{\theta}}=-\gamma_{\theta} \phi(x) e \operatorname{sgn}(b) \end{dcases} $$ Note that $e = x - x_{m}$ is the tracking error between the plant and the reference model.
The closed-loop system is formulated, implemented, and simulated, and the state is successfully regulated to the desired reference.
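The scalar MRAC loop can be sketched as follows, with a constant regressor $\phi(x)=1$ to match the bias term $c$; the plant values, reference model, and adaptation gains below are illustrative assumptions, not those from the PDF.

```python
# MRAC sketch: plant xdot = a*x + b*u + c with parameters unknown to the
# controller (only the sign of b is assumed known), regulated to r = 0.
import numpy as np

a, b, c = 1.0, 2.0, 3.0            # plant (illustrative, "unknown")
a_m, b_m = -4.0, 4.0               # stable reference model xdot_m = a_m*x_m + b_m*r
gamma_x, gamma_r, gamma_t = 10.0, 10.0, 10.0
sgn_b = np.sign(b)

dt, steps = 1e-4, 200000
x, x_m = 1.0, 1.0
kx, kr, th = 0.0, 0.0, 0.0         # parameter estimates, zero-initialized
r = 0.0                            # regulation to the origin
for _ in range(steps):
    phi = 1.0                      # regressor matching the constant bias c
    u = kx * x + kr * r + th * phi
    e = x - x_m                    # tracking error w.r.t. the model
    x   += dt * (a * x + b * u + c)
    x_m += dt * (a_m * x_m + b_m * r)
    kx  += dt * (-gamma_x * x * e * sgn_b)     # adjustment mechanism
    kr  += dt * (-gamma_r * r * e * sgn_b)
    th  += dt * (-gamma_t * phi * e * sgn_b)

print(x)  # regulated near 0
```

With $r(t)=0$ the reference state $x_m$ decays exponentially, so driving $e \to 0$ regulates $x$ to the origin even though the individual parameter estimates need not converge to their ideal values.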
References
Ellis, Observers in Control Systems, 2002.
Sciavicco and Siciliano, Modelling and Control of Robot Manipulators, 2000.
Nguyen, Model-Reference Adaptive Control, 2018.