Image Processing & Computer Vision Video Series

Hello everyone!

In this post, I am sharing a video series covering image processing and computer vision. The series discusses the theory in detail, and each topic is then followed by a hands-on session implementing it in C++ using the OpenCV library.

The material is adopted from Gonzalez's book "Digital Image Processing", from Chapter 1 through Chapter 12 (Object Detection), plus several computer vision topics such as stereo vision. The slides and code are available at the GitHub link here. Both are free to use, whether as lecture references or for self-study, as long as the source is credited.

I hope this video series helps you understand image processing and computer vision, and contributes to the growth of the science & engineering climate in Indonesia. Cheers 🙂

Continue reading “Image Processing & Computer Vision Video Series”

Machine Learning from Scratch using Python

This post provides a video series on how to implement machine learning algorithms from scratch using Python. Up to now, the series covers clustering methods, and it will be continued with regression, classification, and pre-processing methods such as PCA. Check it out!

*The videos are in playlist mode, so you can browse the other videos through the playlist navigation.
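To give a flavor of the from-scratch style used in the clustering videos, here is a minimal k-means sketch in plain NumPy. The function and variable names are my own illustration, not necessarily those used in the videos:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain k-means: alternate nearest-centroid assignment and mean update."""
    rng = np.random.default_rng(seed)
    # start from k distinct data points chosen at random
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assignment step: label of the nearest centroid for every point
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: move each centroid to the mean of its members
        new_centroids = centroids.copy()
        for j in range(k):
            members = X[labels == j]
            if len(members) > 0:
                new_centroids[j] = members.mean(axis=0)
        if np.allclose(new_centroids, centroids):
            break  # converged
        centroids = new_centroids
    return labels, centroids

# two well-separated 2-D blobs as a toy dataset
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal(8.0, 1.0, (50, 2))])
labels, centroids = kmeans(X, k=2)
```

With blobs this far apart, the two clusters found should coincide with the two blobs.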

Continue reading “Machine Learning from Scratch using Python”

Understanding How Mask RCNN Works for Semantic Segmentation

Mask RCNN is an extension of Faster RCNN. As of 2017, it is the state-of-the-art method for object detection, semantic segmentation, and human pose estimation. This awesome research was done by Facebook AI Research. This post provides a video series explaining how Mask RCNN works, in paper-review style. Hope it helps.

1. Introduction to MNC, FCIS and Mask RCNN for Instance Aware Semantic Segmentation

Continue reading “Understanding How Mask RCNN Works for Semantic Segmentation”

Understanding Faster R-CNN for Object Detection

Faster R-CNN is an important piece of research in object detection. It has inspired many other deep-learning object detection methods, such as YOLO, SSD (Single Shot Detector), and so on. This post provides a video series on how Faster RCNN works, made in paper-review style. Hope it helps 🙂

1. Introduction to Faster R-CNN

Continue reading “Understanding Faster R-CNN for Object Detection”

Understanding Kernel Method/Tricks in Machine Learning

Up to now, we have already learned about regression, classification, and clustering in our machine learning and pattern recognition post series. In this post, we will learn another powerful method in machine learning: the kernel method, also called the kernel trick! Why do we use the kernel trick? Some reasons are: (1) we don't need to think about how to form a design matrix. Just imagine, for example, that our features are “words”, not numbers. How can we form a design matrix for them? Using the kernel method, we can simply define a kernel, for example using the Hamming distance between our “words”. (2) The kernel method gives us a way to project our data into a much higher-dimensional space, even an infinite-dimensional one. We can take advantage of this so that our model performs better.
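To make point (1) concrete, here is a small sketch of my own (not from the post series): a similarity kernel defined directly on raw words via the Hamming distance, with no design matrix anywhere.

```python
import numpy as np

def hamming(u, v):
    """Number of positions at which two equal-length words differ."""
    assert len(u) == len(v)
    return sum(a != b for a, b in zip(u, v))

def kernel(u, v, gamma=1.0):
    """An RBF-style similarity built on the Hamming distance:
    identical words give 1, very different words approach 0."""
    return np.exp(-gamma * hamming(u, v))

# three 6-letter words as raw, non-numeric "features"
words = ["kernel", "kennel", "colour"]

# Gram matrix: pairwise similarities between the raw words
K = np.array([[kernel(u, v) for v in words] for u in words])
```

The Gram matrix K is all a kernelized algorithm ever needs, so "kernel" and "kennel" (1 differing letter) come out far more similar than "kernel" and "colour" (6 differing letters) without any numeric encoding step.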

So, how do we do the kernel trick? We will demonstrate it on our regularized regression. From the regularized regression using LSE that we already discussed here, we get the loss function to be minimized as follows.

J(\textbf{a})=\frac{1}{2m}[(\textbf{Xa}-\boldsymbol{y})^T(\textbf{Xa}-\boldsymbol{y})+\lambda \textbf{a}^T\textbf{a}]\\\\  J(\textbf{a})=\frac{1}{2m}[(\textbf{Xa})^T\textbf{Xa}-2(\textbf{Xa})^T\boldsymbol{y}+\boldsymbol{y}^T\boldsymbol{y}+\lambda \textbf{a}^T\textbf{a}]

Continue reading “Understanding Kernel Method/Tricks in Machine Learning”
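As a numerical sanity check of my own (not part of the post), the sketch below verifies where this derivation is heading: minimizing the loss above gives the primal solution a = (XᵀX + λI)⁻¹Xᵀy (the 1/2m factor cancels at the stationary point), and the kernelized dual form with a linear kernel K = XXᵀ recovers exactly the same weights via a = Xᵀα, α = (K + λI)⁻¹y.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 20, 3                      # m samples, d features
X = rng.normal(size=(m, d))       # design matrix
y = rng.normal(size=m)            # targets
lam = 0.5                         # regularization strength lambda

# primal solution: a = (X^T X + lam*I)^{-1} X^T y
a_primal = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# dual (kernelized) solution with a linear kernel K = X X^T:
# alpha = (K + lam*I)^{-1} y, then recover the weights as a = X^T alpha
K = X @ X.T
alpha = np.linalg.solve(K + lam * np.eye(m), y)
a_dual = X.T @ alpha
```

The two solutions agree by the push-through identity X(XᵀX + λI)⁻¹ = (XXᵀ + λI)⁻¹X; the payoff of the dual form is that X only ever appears through inner products, so K can be swapped for any kernel.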

Estimator for Mean and Variance of Sampled Data

An estimator is a statistic, usually a function of the data, that is used to infer the value of an unknown parameter in a statistical model. In this post, we will talk about estimators for the mean and variance of sampled data. We can judge an estimator by calculating its bias: a good estimator should have bias close to zero. Let \theta be the parameter we want to estimate/observe; the result of our estimator is \hat{\theta}. The bias of the estimator is defined as follows.

bias = E[\hat{\theta}]-\theta

We will use the bias formula above to check whether our estimators are good or not. In this post, we will check the estimators we already derived by MLE here, which are the mean and the variance. Let's write them first.

\mu_{MLE}=\hat{\mu}=\frac{1}{n}\sum_{i=1}^{n}x_i\\\\  \sigma^2_{MLE}=\hat{\sigma}^2=\frac{1}{n}\sum_{i=1}^{n}(x_i-\hat{x})^2

where \bar{x}=\mu and \hat{x}=\hat{\mu}.

Continue reading “Estimator for Mean and Variance of Sampled Data”
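The bias definition above can be checked numerically. The Monte Carlo sketch below is my own illustration (not part of the original derivation): it draws many samples, applies the two MLE estimators, and compares E[\hat{\theta}] against \theta. The mean estimator comes out unbiased, while the MLE variance estimator shows the well-known bias of -\sigma^2/n.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000            # small sample size, many repetitions
mu, sigma2 = 0.0, 1.0             # true parameters theta we try to recover

# trials independent samples of size n from N(mu, sigma2)
samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))

# MLE estimators applied to every sample
mu_hat = samples.mean(axis=1)                               # (1/n) sum x_i
var_mle = ((samples - mu_hat[:, None]) ** 2).mean(axis=1)   # (1/n) sum (x_i - mu_hat)^2

# bias = E[theta_hat] - theta, estimated by averaging over the trials
bias_mean = mu_hat.mean() - mu        # should be ~0: the mean MLE is unbiased
bias_var = var_mle.mean() - sigma2    # should be ~ -sigma2/n = -0.2: biased
```

Multiplying the variance MLE by n/(n-1) (Bessel's correction) removes this bias, which is exactly what the bias formula predicts.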