Kalman filter

Jimmy (xiaoke) Shen
Jul 18, 2022 · 6 min read


This article collects resources related to the Kalman filter.

(Image from [6])

Videos

Understand & Code a Kalman Filter [Part 1 Design]

Understanding Kalman Filters, Part 1: Why Use Kalman Filters?

Papers

Tutorials

See [1] and [2]

Kalman Filter from Professor Xu

Kalman Filter tutorial by David [3]

Kalman Filter in Chinese [4]

Kalman Filter in Chinese [6]

Quick explanation

The following explanation is translated from Kent Zeng’s answer on Zhihu [5] (originally in Chinese):

Suppose you have two sensors measuring the same signal, but their readings differ slightly every time. What do you do? Take the average.
Now suppose you know that the more expensive sensor should be more accurate and the cheaper one less so. Is there anything better than a plain average? A weighted average.
How do you choose the weights? Assume the errors of both sensors follow normal distributions and that you know the variances of those distributions. Using the two variances (some math omitted here), you can derive an “optimal” set of weights.
Now the key point: suppose you have only one sensor, but you also have a mathematical model. The model can compute a value for you, but it is not perfectly accurate either. What do you do? Treat the model’s output and the sensor’s measurement like two sensors, and take their weighted average.
One last note: the model is really only a one-step model; that is, given x(k) it can compute x(k+1). But what is x(k)? Answer: x(k) is the best estimate of x at time k, i.e., the weighted average produced by the previous Kalman filter step. That gives you the iteration.
This is the Kalman filter, with no formulas.
Author: Kent Zeng
Link: https://www.zhihu.com/question/23971601/answer/26254459
Source: Zhihu
Copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please credit the source.
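
To make that weighted-average picture concrete, here is a minimal one-dimensional sketch in C++. It is my own illustration rather than code from any of the linked resources; the drift value and the two noise variances are made-up numbers chosen only for this example.

// Minimal 1-D Kalman filter: fuse a one-step model prediction with a noisy
// measurement via a variance-weighted average, then iterate.
#include <iostream>

int main()
{
    // Model: x(k+1) = x(k) + u, i.e. a constant drift u per step (assumption).
    const double u = 1.0;  // assumed drift per step
    const double q = 0.5;  // process (model) noise variance, assumed
    const double r = 2.0;  // measurement noise variance, assumed

    double x = 0.0;        // current best estimate of the signal
    double p = 1.0;        // variance of that estimate

    // Made-up measurements of a signal that really moves by about 1 per step.
    const double z[] = {1.2, 1.9, 3.3, 3.8, 5.1};

    for (double zk : z)
    {
        // PREDICT: push the previous estimate through the one-step model.
        const double x_pred = x + u;
        const double p_pred = p + q;

        // UPDATE: weighted average of prediction and measurement.
        // The weight (Kalman gain) comes from the two variances.
        const double k_gain = p_pred / (p_pred + r);
        x = x_pred + k_gain * (zk - x_pred);
        p = (1.0 - k_gain) * p_pred;

        std::cout << "estimate: " << x << "  variance: " << p << std::endl;
    }
    return 0;
}

Note how x(k) at the start of each loop iteration is exactly the filtered estimate produced by the previous iteration, which is the iteration described in the quote.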

Coding

Kalman Filter and Python

Kalman Filter and C++

Kalman Filter and OpenCV

Kalman Filter and Go

Another Kalman Filter and Go

Relationship between PGM, HMM, and LDS

The Kalman filter (KF) is the forward algorithm of the linear dynamical system (LDS); the backward algorithm of the LDS is the Kalman smoother (KS), which is not shown in the diagram.

Demo videos for object tracking

OpenCV Demo: Object tracking with Kalman Filter

The tutorial related to the video above (the code below is taken from that tutorial):

/******************************************
 * OpenCV Tutorial: Ball Tracking using   *
 * Kalman Filter                          *
 ******************************************/

// Module "core"
#include <opencv2/core/core.hpp>

// Module "highgui"
#include <opencv2/highgui/highgui.hpp>

// Module "imgproc"
#include <opencv2/imgproc/imgproc.hpp>

// Module "video"
#include <opencv2/video/video.hpp>

// Output
#include <iostream>

// Vector
#include <vector>

// String stream (used below for the on-screen label)
#include <sstream>

using namespace std;

// >>>>> Color to be tracked
#define MIN_H_BLUE 200
#define MAX_H_BLUE 300
// <<<<< Color to be tracked


int main()
{
// Camera frame
cv::Mat frame;

// >>>> Kalman Filter
int stateSize = 6;
int measSize = 4;
int contrSize = 0;

unsigned int type = CV_32F;
cv::KalmanFilter kf(stateSize, measSize, contrSize, type);
cv::Mat state(stateSize, 1, type); // [x,y,v_x,v_y,w,h]
cv::Mat meas(measSize, 1, type); // [z_x,z_y,z_w,z_h]
//cv::Mat procNoise(stateSize, 1, type)
// [E_x,E_y,E_v_x,E_v_y,E_w,E_h]

// Transition State Matrix A
// Note: set dT at each processing step!
// [ 1 0 dT 0 0 0 ]
// [ 0 1 0 dT 0 0 ]
// [ 0 0 1 0 0 0 ]
// [ 0 0 0 1 0 0 ]
// [ 0 0 0 0 1 0 ]
// [ 0 0 0 0 0 1 ]
cv::setIdentity(kf.transitionMatrix);

// Measure Matrix H
// [ 1 0 0 0 0 0 ]
// [ 0 1 0 0 0 0 ]
// [ 0 0 0 0 1 0 ]
// [ 0 0 0 0 0 1 ]
kf.measurementMatrix = cv::Mat::zeros(measSize, stateSize, type);
kf.measurementMatrix.at<float>(0) = 1.0f;
kf.measurementMatrix.at<float>(7) = 1.0f;
kf.measurementMatrix.at<float>(16) = 1.0f;
kf.measurementMatrix.at<float>(23) = 1.0f;

// Process Noise Covariance Matrix Q
// [ Ex 0 0 0 0 0 ]
// [ 0 Ey 0 0 0 0 ]
// [ 0 0 Ev_x 0 0 0 ]
// [ 0 0 0 Ev_y 0 0 ]
// [ 0 0 0 0 Ew 0 ]
// [ 0 0 0 0 0 Eh ]
//cv::setIdentity(kf.processNoiseCov, cv::Scalar(1e-2));
kf.processNoiseCov.at<float>(0) = 1e-2;
kf.processNoiseCov.at<float>(7) = 1e-2;
kf.processNoiseCov.at<float>(14) = 5.0f;
kf.processNoiseCov.at<float>(21) = 5.0f;
kf.processNoiseCov.at<float>(28) = 1e-2;
kf.processNoiseCov.at<float>(35) = 1e-2;

// Measures Noise Covariance Matrix R
cv::setIdentity(kf.measurementNoiseCov, cv::Scalar(1e-1));
// <<<< Kalman Filter

// Camera Index
int idx = 0;

// Camera Capture
cv::VideoCapture cap;
// >>>>> Camera Settings
if (!cap.open(idx))
{
cout << "Webcam not connected.\n" << "Please verify\n";
return EXIT_FAILURE;
}

cap.set(CV_CAP_PROP_FRAME_WIDTH, 1024);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 768);
// <<<<< Camera Settings

cout << "\nHit 'q' to exit...\n";

char ch = 0;

double ticks = 0;
bool found = false;

int notFoundCount = 0;

// >>>>> Main loop
while (ch != 'q' && ch != 'Q')
{
double precTick = ticks;
ticks = (double) cv::getTickCount();

double dT = (ticks - precTick) / cv::getTickFrequency(); //seconds

// Frame acquisition
cap >> frame;

cv::Mat res;
frame.copyTo( res );

if (found)
{
// >>>> Matrix A
kf.transitionMatrix.at<float>(2) = dT;
kf.transitionMatrix.at<float>(9) = dT;
// <<<< Matrix A

cout << "dT:" << endl << dT << endl;

state = kf.predict();
cout << "State post:" << endl << state << endl;

cv::Rect predRect;
predRect.width = state.at<float>(4);
predRect.height = state.at<float>(5);
predRect.x = state.at<float>(0) - predRect.width / 2;
predRect.y = state.at<float>(1) - predRect.height / 2;
cv::Point center;
center.x = state.at<float>(0);
center.y = state.at<float>(1);
cv::circle(res, center, 2, CV_RGB(255,0,0), -1);

cv::rectangle(res, predRect, CV_RGB(255,0,0), 2);
}

// >>>>> Noise smoothing
cv::Mat blur;
cv::GaussianBlur(frame, blur, cv::Size(5, 5), 3.0, 3.0);
// <<<<< Noise smoothing

// >>>>> HSV conversion
cv::Mat frmHsv;
cv::cvtColor(blur, frmHsv, CV_BGR2HSV);
// <<<<< HSV conversion

// >>>>> Color Thresholding
// Note: change parameters for different colors
cv::Mat rangeRes = cv::Mat::zeros(frame.size(), CV_8UC1);
cv::inRange(frmHsv, cv::Scalar(MIN_H_BLUE / 2, 100, 80),
cv::Scalar(MAX_H_BLUE / 2, 255, 255), rangeRes);
// <<<<< Color Thresholding

// >>>>> Improving the result
cv::erode(rangeRes, rangeRes, cv::Mat(), cv::Point(-1, -1), 2);
cv::dilate(rangeRes, rangeRes, cv::Mat(), cv::Point(-1, -1), 2);
// <<<<< Improving the result

// Thresholding viewing
cv::imshow("Threshold", rangeRes);

// >>>>> Contours detection
vector<vector<cv::Point> > contours;
cv::findContours(rangeRes, contours, CV_RETR_EXTERNAL,
CV_CHAIN_APPROX_NONE);
// <<<<< Contours detection

// >>>>> Filtering
vector<vector<cv::Point> > balls;
vector<cv::Rect> ballsBox;
for (size_t i = 0; i < contours.size(); i++)
{
cv::Rect bBox;
bBox = cv::boundingRect(contours[i]);

float ratio = (float) bBox.width / (float) bBox.height;
if (ratio > 1.0f)
ratio = 1.0f / ratio;

// Keep only blobs that are roughly square and large enough
if (ratio > 0.75 && bBox.area() >= 400)
{
balls.push_back(contours[i]);
ballsBox.push_back(bBox);
}
}
// <<<<< Filtering

cout << "Balls found:" << ballsBox.size() << endl;

// >>>>> Detection result
for (size_t i = 0; i < balls.size(); i++)
{
cv::drawContours(res, balls, i, CV_RGB(20,150,20), 1);
cv::rectangle(res, ballsBox[i], CV_RGB(0,255,0), 2);

cv::Point center;
center.x = ballsBox[i].x + ballsBox[i].width / 2;
center.y = ballsBox[i].y + ballsBox[i].height / 2;
cv::circle(res, center, 2, CV_RGB(20,150,20), -1);

stringstream sstr;
sstr << "(" << center.x << "," << center.y << ")";
cv::putText(res, sstr.str(),
cv::Point(center.x + 3, center.y - 3),
cv::FONT_HERSHEY_SIMPLEX, 0.5, CV_RGB(20,150,20), 2);
}
// <<<<< Detection result

// >>>>> Kalman Update
if (balls.size() == 0)
{
notFoundCount++;
cout << "notFoundCount:" << notFoundCount << endl;
if( notFoundCount >= 100 )
{
found = false;
}
/*else
kf.statePost = state;*/
}
else
{
notFoundCount = 0;

meas.at<float>(0) = ballsBox[0].x + ballsBox[0].width / 2;
meas.at<float>(1) = ballsBox[0].y + ballsBox[0].height / 2;
meas.at<float>(2) = (float)ballsBox[0].width;
meas.at<float>(3) = (float)ballsBox[0].height;
if (!found) // First detection!
{
// >>>> Initialization
kf.errorCovPre.at<float>(0) = 1; // px
kf.errorCovPre.at<float>(7) = 1; // px
kf.errorCovPre.at<float>(14) = 1;
kf.errorCovPre.at<float>(21) = 1;
kf.errorCovPre.at<float>(28) = 1; // px
kf.errorCovPre.at<float>(35) = 1; // px

state.at<float>(0) = meas.at<float>(0);
state.at<float>(1) = meas.at<float>(1);
state.at<float>(2) = 0;
state.at<float>(3) = 0;
state.at<float>(4) = meas.at<float>(2);
state.at<float>(5) = meas.at<float>(3);
// <<<< Initialization

kf.statePost = state;

found = true;
}
else
kf.correct(meas); // Kalman Correction

cout << "Measure matrix:" << endl << meas << endl;
}
// <<<<< Kalman Update

// Final result
cv::imshow("Tracking", res);

// User key
ch = cv::waitKey(1);
}
// <<<<< Main loop

return EXIT_SUCCESS;
}

We can summarize by saying that the Kalman filter works in two steps: PREDICTION and UPDATE. The PREDICTION step predicts the position of the object from its history, the speed of its movement, and the equations that describe its motion. The prediction is corrected every time a measurement of the object’s state is available; this correction is the UPDATE step. Unfortunately, no measurement is perfect: each one carries errors, so PREDICTION and UPDATE are weighted using information about the measurement and prediction errors. (From the tutorial.)
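
As a standalone illustration of that PREDICTION/UPDATE cycle with the same cv::KalmanFilter class, here is a minimal sketch that tracks a simulated 1-D position with a constant-velocity model. The state layout, noise covariances, and measurement values are assumptions made for this example; they are not taken from the ball-tracking tutorial above.

// Minimal predict/correct loop with cv::KalmanFilter.
// State: [position, velocity]; measurement: [position].
#include <opencv2/video/tracking.hpp>
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    const float dT = 1.0f; // assumed constant time step

    cv::KalmanFilter kf(2, 1, 0, CV_32F);
    kf.transitionMatrix = (cv::Mat_<float>(2, 2) << 1, dT,
                                                    0, 1);
    kf.measurementMatrix = (cv::Mat_<float>(1, 2) << 1, 0);
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-2));     // assumed Q
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1)); // assumed R
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1));
    kf.statePost = (cv::Mat_<float>(2, 1) << 0, 0);

    // Made-up noisy position measurements of something moving ~1 unit per step.
    const float z[] = {1.1f, 1.8f, 3.2f, 4.1f, 4.9f};

    for (float zk : z)
    {
        // PREDICTION: advance the state with the motion model.
        cv::Mat prediction = kf.predict();

        // UPDATE: fuse the prediction with the new measurement.
        cv::Mat meas = (cv::Mat_<float>(1, 1) << zk);
        cv::Mat estimate = kf.correct(meas);

        std::cout << "predicted: " << prediction.at<float>(0)
                  << "  corrected: " << estimate.at<float>(0) << std::endl;
    }
    return 0;
}

If a measurement is missing on some frame (as in the tutorial when no ball is detected), you simply skip correct() for that frame and keep the prediction.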

References

[1] http://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/

[2] https://zhuanlan.zhihu.com/p/39912633

[3] https://david.wf/kalmanfilter/

[4] https://www.jianshu.com/p/d3b1c3d307e0

[5] “How can the Kalman filter be explained intuitively and in as much detail as possible?” — Kent Zeng’s answer on Zhihu: https://www.zhihu.com/question/23971601/answer/26254459

[6] https://blog.csdn.net/varyshare/article/details/95065650
