Kalman Filter RLC



(Figure: parallel RLC circuit driven by a current source $I$)

The RLC circuit above can be described by the following second-order differential equation for the inductor current $I_{L}$

$$ \ddot{I}_{L} = -\frac{\dot{I}_{L}}{RC} -\frac{I_L}{LC} + \frac{I}{LC} $$
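
This follows from Kirchhoff's current law for a parallel RLC circuit driven by a current source $I$ (the topology the equation describes), with the common voltage $v = L\dot{I}_{L}$ appearing across all three elements:

$$ I = \frac{L\dot{I}_{L}}{R} + LC\ddot{I}_{L} + I_{L} $$

Dividing through by $LC$ and rearranging gives the equation above.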

This equation can be rewritten as two first-order differential equations by introducing the following state variables.

$$ x_{1} = I_{L} $$

$$ x_{2} = \dot{I}_{L}$$

so
$$ \dot{x}_{2} = \ddot{I}_{L} = -\frac{\dot{I}_{L}}{RC} -\frac{I_{L}}{LC} + \frac{I}{LC} $$

and

$$\dot{x}_{1} =\dot{I}_{L} = x_{2}$$

Rewriting in terms of the state variables $\mathbf{X}$ gives

$$ \dot{x}_{2} = - \frac{x_{2}}{RC} - \frac{x_{1}}{LC} + \frac{I}{LC} $$
The derivatives can be discretised using the forward-difference formula below.

$$ \dot{x} = \frac{x_{k+1} - x_k }{\Delta t} $$

Applying this to $\dot{x}_{1} = x_{2}$ gives

$$ \frac{x_{1_{k+1}} - x_{1_{k}}} {\Delta t} = x_{2_{k}}$$

Rearranging gives the future value of $x_{1}$

$$ x_{1_{k+1}} = \Delta t x_{2_{k}} + x_{1_{k}} $$

The same method is then applied to $x_{2}$

$$ \frac{x_{2_{k+1}}- x_{2_k}}{\Delta t} = -\frac{x_{1_k}} {LC} -\frac{x_{2_{k}}}{RC} + \frac{I_k}{LC} $$

Giving the future value for $x_{2}$

$$ x_{2_{k+1}} = -\frac{\Delta t x_{1_k}} {LC} + x_{2_{k}}\left (1- \frac{\Delta t}{RC} \right) + \frac{\Delta t I_k}{LC} $$

The two equations can now be expressed in state-space form. The time index has been shifted by $-1$, so $x_{1_{k+1}}$ becomes $x_{1_{k}}$; this keeps the notation consistent with the Kalman filter state transition model shown below.

$$ \hat{X}_k = F_k X_{k-1} + B_{k} U_{k} $$

where, from here on, a hat above a variable indicates a predicted state and a tilde indicates a measured quantity:

$$ \overbrace{ \begin{bmatrix} \hat{x}_{1_{k}}\\ \hat{x}_{2_{k}} \end{bmatrix} }^{\mathbf{\hat{X}}_k} = \overbrace{ \begin{bmatrix} 1 & \Delta t \\ -\frac{\Delta t}{LC} & 1 - \frac{\Delta t}{RC} \end{bmatrix} }^{\mathbf{F}_k} \; \overbrace{ \begin{bmatrix} x_{1_{k-1}}\\ x_{2_{k-1}} \end{bmatrix} }^{\mathbf{X}_{k-1}} + \overbrace{ \begin{bmatrix} 0\\ \frac{\Delta t}{LC} \end{bmatrix} }^{\mathbf{B}_{k}} \; \overbrace{ I_{k-1} }^{\mathbf{U}_{k}} $$
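
For the component values used in the implementation below ($L = 5\,\mathrm{H}$, $C = 500\,\mathrm{nF}$, $R = 50\,\mathrm{k\Omega}$) and a sampling step of $\Delta t \approx 10\,\mathrm{\mu s}$, these matrices evaluate to approximately

$$ \mathbf{F}_k \approx \begin{bmatrix} 1 & 10^{-5} \\ -4 & 0.9996 \end{bmatrix}, \qquad \mathbf{B}_k \approx \begin{bmatrix} 0 \\ 4 \end{bmatrix} $$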

The voltage induced across an inductor is equal to the rate of change of current passing through it multiplied by its inductance.

$$ \tilde{v}(t) = L\dot{I}_{L} $$
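
Since $x_{2} = \dot{I}_{L}$, the measured voltage is a linear function of the state, which defines the observation matrix $\mathbf{H}$ used in the update step below:

$$ \tilde{v}_{k} = L\, x_{2_{k}} = \overbrace{ \begin{bmatrix} 0 & L \end{bmatrix} }^{\mathbf{H}} \begin{bmatrix} x_{1_{k}}\\ x_{2_{k}} \end{bmatrix} $$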

The update (correction) step takes the predicted state and adjusts it based on a measurement. How much of the measurement is incorporated into the prediction depends on the Kalman gain $K$.

$$ \begin{bmatrix} x_{1_{k}}\\ x_{2_{k}} \end{bmatrix} = \overbrace{ \begin{bmatrix} \hat{x}_{1_{k}}\\ \hat{x}_{2_{k}} \end{bmatrix} }^\mathbf{\hat{X}} + K \left( \tilde{v}_k - \overbrace{ \begin{bmatrix} 0 & L \end{bmatrix} }^\textbf{H} \begin{bmatrix} \hat{x}_{1_{k}}\\ \hat{x}_{2_{k}} \end{bmatrix} \right ) $$

This is all that is needed in terms of design. The rest of the algorithm follows the standard Kalman filter equations below, taken from Wikipedia.

(Figure: Kalman filter predict and update equations, from Wikipedia)
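
For reference, in the notation used above, the predict step is

$$ \hat{X}_{k} = F_{k} X_{k-1} + B_{k} U_{k}, \qquad \hat{P}_{k} = F_{k} P_{k-1} F_{k}^{T} + Q $$

and the update step is

$$ K_{k} = \hat{P}_{k} H^{T} \left( H \hat{P}_{k} H^{T} + R \right)^{-1} $$

$$ X_{k} = \hat{X}_{k} + K_{k} \left( \tilde{v}_{k} - H \hat{X}_{k} \right), \qquad P_{k} = \left( I - K_{k} H \right) \hat{P}_{k} $$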

Results

(Result plots)

Python Implementation

The implementation is made in a Jupyter notebook which can be downloaded here - Kalman RLC Notebook. For ease of viewing, the code segments are pasted below.

Requirements

%matplotlib qt
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import inv
import copy

Transition Model

$$ \hat{X}_k = F_k X_{k-1} + B_{k} U_{k} $$
def prediction(Il, Ildot, I, t):
    #circuit component values
    L = 5
    C = 500e-9
    R = 50e3

    #state transition matrix F_k
    A = np.array([[1, t],
                  [-t/(L*C), 1 - (t/(R*C))]])

    #current state [I_L, dI_L/dt]
    X = np.array([[Il],
                  [Ildot]])

    #control (input) matrix B_k
    B = np.array([[0],
                  [t/(L*C)]])

    #input current
    U = I

    #predicted state
    X_prediction = A.dot(X) + B.dot(U)

    return X_prediction
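
As a quick sanity check (example values chosen here, not taken from the original notebook), a single step from rest with an input of $25\,\mathrm{\mu A}$ and $\Delta t = 10\,\mathrm{\mu s}$ should leave $I_{L}$ unchanged and give $\dot{I}_{L} = \Delta t \, I /(LC) = 1 \times 10^{-4}$:

#single prediction step from rest (hypothetical example values)
prediction(0, 0, 25e-6, 1e-5)
# -> approximately [[0.], [1e-4]]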

#time
T = 0.14
t = np.linspace(0, T, int(100e3*T))   #100 kHz sampling

#initialise inductor current and its derivative
Il_true = np.zeros(len(t))
Ildot_true = np.zeros(len(t))

#circuit component definitions
L = 5
C = 500e-9
R = 50e3
#create input signal 
wr = 1/np.sqrt(L*C)

I_true = 25e-6 * np.sin(wr * t)

#time step
dt = t[1] - t[0]

#create true data by propagating the model forward in time
I = np.zeros((2,len(t)))

for i in range(1,len(t)):
    a = prediction(I[0][i-1],I[1][i-1],I_true[i-1],dt).flatten()
    I[:,i] = a

Il_true = I[0]
Ildot_true = I[1]
    
V_true = L*Ildot_true
noise = np.random.normal(loc=0, scale=0.5, size=len(t))
V_meas = V_true + noise
#state transition model 
A = np.array([[1, dt],
              [-dt/(L*C), (1 - (dt/(R*C)))]],dtype='float')

#process noise (model noise) covariance
Q = np.array([[4e-9, 0],
              [0, 4e-9]])

#measurement noise covariance (note: this reuses the name R, shadowing the resistance defined above)
R = np.array([[4]])

#initial estimate covariance
P = Q
#store values for plotting
Il_a = np.zeros([len(t)])
Ildot_a = np.zeros([len(t)])

#initial values
Il_a[0]    = 0
Ildot_a[0] = 0

#set the input matrix to be the true input
I_a = I_true

for i in range(1,len(t)):
    #prediction
    X_hat = prediction(Il_a[i-1], Ildot_a[i-1], I_a[i-1],dt)

    #predicted estimate covariance (only the diagonal of F P F^T is kept here, a simplification of the standard update)
    P_hat = np.diag(np.diag(A.dot(P).dot(A.T))) + Q

    #observation matrix v(t) = Ldi/dt
    H = np.array([[0,L]])
    
    #innovation covariance
    S = H.dot(P_hat).dot(H.T) + R
    
    # Kalman Gain
    K = P_hat.dot(H.T).dot(inv(S))

    #measurement - prediction 
    y = V_meas[i] - H.dot(X_hat)   

    #updated state estimate
    X = X_hat + K.dot(y)
    
    #Updated (a posteriori) estimate covariance
    P = (np.identity(2) - K.dot(H)).dot(P_hat)
    
    #update values
    Il_a[i]    = X[0, 0]
    Ildot_a[i] = X[1, 0]
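
The result plots above can be reproduced with a short snippet along these lines (a minimal sketch, assuming the figures compare the filtered, true, and measured voltages; the exact layout in the original notebook may differ):

#compare the filtered voltage estimate with the true and measured signals
plt.figure()
plt.plot(t, V_meas, label='measured', alpha=0.4)
plt.plot(t, V_true, label='true')
plt.plot(t, L*Ildot_a, label='Kalman estimate')
plt.xlabel('time (s)')
plt.ylabel('inductor voltage (V)')
plt.legend()
plt.show()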