Introduction
Control theory plays a pivotal role in modern systems, ranging from electronics to transportation and industry. In this series, we’ll explore the fundamentals of how we can effectively control systems to achieve desired behaviors.
Definitions
Throughout this series, we will delve into methods that ensure systems behave in a desirable manner.
In a typical system, we encounter:
- $v(t)$ - Disturbance (interference)
- $u(t)$ - Control signal/Input signal
- $y(t)$ - Response signal/Output signal
Control
In control, the output is driven by the control signal alone: the controller uses no feedback and therefore cannot compensate for disturbances.
Regulate
Regulation is the process of controlling a system based on its output signal. It employs feedback to monitor and adjust the system’s behavior, ensuring that the output aligns with the desired goal.
System
A system is a device where physical outputs (signals) are determined by other physical inputs (like control signals).
Examples
- Cruise control in a car
- Level control in a tank
- Temperature control
Classification of systems
Systems can be broadly categorized into two types:
Static systems
In static systems, the current output solely depends on the present input. Such systems have no memory of past inputs.
Example of a static system
- $u(t)$ - Input voltage (control signal)
- $y(t)$ - Output voltage (response signal)
- $R_1, R_2$ - Resistors
The relationship between input and output voltages is given by: $$ y(t) = \dfrac{R_2}{R_1 + R_2} u(t) $$
This shows that the output relies only on the current input.
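As a minimal sketch of this static behavior, the voltage divider can be written as a pure function of the present input (the function name and values are illustrative, not from the text):

```python
def voltage_divider(u, r1, r2):
    """Static system: y(t) = R2 / (R1 + R2) * u(t).
    The output depends only on the present input, with no memory."""
    return r2 / (r1 + r2) * u

# With equal resistors the divider halves the input voltage.
print(voltage_divider(10.0, 1000.0, 1000.0))  # 5.0
```

Because there is no state, calling the function twice with the same input always gives the same output, which is exactly what "no memory of past inputs" means.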
Dynamic systems
Dynamic systems remember past inputs, and their current output is influenced by these past values.
Example of a dynamic system
- $u(t)$ - Input voltage (control signal)
- $y(t)$ - Output voltage (response signal)
- $R$ - Resistor
- $C$ - Capacitor
The capacitor current is: $$ i(t) = C \dfrac{dy(t)}{dt} $$
The output voltage is expressed as: $$ y(t) = \dfrac{1}{C} \int_0^t i(\tau)\ d\tau $$
By Kirchhoff’s Current Law (KCL) at the output node, the resistor current equals the capacitor current: $$ i(t) = \dfrac{u(t) - y(t)}{R} $$
Rearranging: $$ y(t) = u(t) - R\ i(t) $$
Using our definition for $i(t)$: $$ y(t) = u(t) - RC\ \dfrac{dy(t)}{dt} $$
Writing $\dot{y}(t)$ for $\dfrac{dy(t)}{dt}$: $$ y(t) = u(t) - RC\ \dot{y}(t) $$
Dividing through by $RC$ and rearranging: $$ \dot{y}(t) + \dfrac{y(t)}{RC} = \dfrac{u(t)}{RC} $$
This is a first-order linear ordinary differential equation (ODE). ODEs are essential in control theory because they describe the dynamics of most control systems.
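The ODE above can be checked numerically. The sketch below (component values are assumptions, not from the text) integrates $\dot{y}(t) = (u(t) - y(t))/RC$ with the forward-Euler method for a unit step input and compares the result against the known analytic step response $y(t) = 1 - e^{-t/RC}$:

```python
# Forward-Euler simulation of the RC-circuit ODE
#   dy/dt = (u(t) - y(t)) / (RC)
# for a unit step input u(t) = 1, starting from a discharged capacitor.
import math

R, C = 1000.0, 1e-3   # assumed values: 1 kOhm, 1 mF -> RC = 1 s
RC = R * C
dt = 1e-3             # integration time step (s)
t_end = 5.0           # simulate five time constants

y = 0.0               # initial condition: capacitor discharged
t = 0.0
while t < t_end:
    u = 1.0                    # unit step input
    y += dt * (u - y) / RC     # Euler update of dy/dt = (u - y)/RC
    t += dt

analytic = 1.0 - math.exp(-t_end / RC)
print(f"simulated y({t_end:.0f}) = {y:.4f}, analytic = {analytic:.4f}")
```

After five time constants both values are close to 1, illustrating the familiar exponential charging of the capacitor toward the input voltage.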
Systems with a control device
r(t) -> Control function -> u(t) -> Control device -> Process -> y(t)
(the interference $v(t)$ acts on the process)
- $r(t)$ - reference/setpoint value
- $y(t)$ - output/actual value
The control function provides the control device with a control signal so that $y(t)$ approximates $r(t)$.
Examples
- Valve affecting a flow.
- Heating element affecting temperature.
Signals
- Electrical signals
- Pneumatic signals
- Flow, volume, level
- Torque
Sensors
Sensors play a crucial role by converting non-electrical signals into electrical signals, enabling easy measurement and control.
Different types of systems
Open systems
In open (open-loop) systems, the control function generates the control signal purely from its model of the process. Because the output is never measured, such systems cannot adjust to disturbances, which can be a serious limitation.
Feedback systems
Here, the control signal, $u(t)$, is determined by measuring the actual output, $y(t)$. This gives the system the advantage of adjusting its behavior based on feedback, keeping the output on track. The control error, $e(t) = r(t) - y(t)$, is the difference between the desired and actual output; minimizing this error is a primary objective in control theory.
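As an illustration of feedback (a minimal sketch, not from the text: the process model, gain, and disturbance value are all assumptions), the loop below applies a proportional control law $u(t) = K_p\, e(t)$ to a first-order process subject to a constant disturbance:

```python
# Proportional feedback on the assumed first-order process
#   dy/dt = (u + v - y) / T
# where v is a constant disturbance. The controller measures y,
# forms the error e = r - y, and sets u = Kp * e.
Kp = 10.0        # proportional gain (hypothetical choice)
T = 1.0          # process time constant (assumed)
r = 1.0          # reference/setpoint value r(t)
v = 0.5          # constant disturbance v(t)
dt = 1e-3        # Euler integration step

y = 0.0
for _ in range(int(20 / dt)):    # simulate long enough to settle
    e = r - y                    # control error e(t) = r(t) - y(t)
    u = Kp * e                   # proportional control law
    y += dt * (u + v - y) / T    # Euler step of the process

print(f"steady-state error: {r - y:.4f}")
```

Without feedback, the disturbance $v$ would shift the output by its full value; with the feedback loop, the remaining steady-state error shrinks roughly by a factor of $1 + K_p$, showing why measuring the output and feeding it back is so effective.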