Many machine learning and deep learning practitioners are busy applying pre-built ML algorithms without making any effort to understand what is actually happening behind them. So, I’ve decided to share some short but important notes related to machine learning.
P.S. – I will keep updating this list
Overfitting occurs when a model learns the noise and outliers in the training data. Such a model matches the training data almost perfectly, but does poorly on validation data and other new data.
Underfitting occurs when a model fails to capture the important distinctions and patterns in the data, so it performs poorly even on the training data.
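Both effects can be seen by comparing training and validation error. Here is a minimal sketch using only NumPy: the target function sin(2πx), the noise level, and the polynomial degrees 1, 3, and 15 are illustrative choices, not something from the post. A degree-1 polynomial underfits (high error everywhere), while a degree-15 polynomial overfits (near-zero training error, worse validation error).

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth target function (illustrative choice)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.shape)

# Hold out every other point for validation
x_tr, y_tr = x[::2], y[::2]
x_va, y_va = x[1::2], y[1::2]

def mse(deg):
    """Fit a polynomial of the given degree on the training split
    and return (train MSE, validation MSE)."""
    coeffs = np.polyfit(x_tr, y_tr, deg)
    err_tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    err_va = np.mean((np.polyval(coeffs, x_va) - y_va) ** 2)
    return err_tr, err_va

for deg in (1, 3, 15):
    tr, va = mse(deg)
    print(f"degree {deg:2d}: train MSE {tr:.4f}, validation MSE {va:.4f}")
```

Training error always falls as the model gets more flexible; it is the gap between training and validation error that signals overfitting.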
Bias error comes from the simplifying assumptions a model makes to learn the target variable. Linear models such as linear regression and logistic regression have high bias; non-linear models have low bias.
Variance error is how much the learned target function changes when the model is trained on different data. Non-linear models such as decision trees and support vector machines have high variance; linear models have low variance.
Bias – Variance Tradeoff
A good model is achieved by keeping both bias and variance low, so as to avoid over-fitting and under-fitting.
As you increase bias, variance decreases; and as you increase variance, bias decreases. Maintaining the balance between the two is very difficult, and this is the Bias-Variance Tradeoff.
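The tradeoff can be made concrete with a small simulation, again a sketch using NumPy with illustrative choices (the sine target, the noise level, degrees 1 and 12): refit the same model class on many freshly sampled datasets and look at its prediction at one point. The rigid model's predictions barely move but sit far from the truth (high bias, low variance); the flexible model's predictions center on the truth but scatter widely (low bias, high variance).

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
x0, true_y0 = 0.25, np.sin(2 * np.pi * 0.25)  # probe point and its true value

def predictions(deg, trials=200):
    """Refit a degree-`deg` polynomial on a freshly sampled noisy
    dataset each trial and collect its prediction at x0."""
    preds = []
    for _ in range(trials):
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.shape)
        coeffs = np.polyfit(x, y, deg)
        preds.append(np.polyval(coeffs, x0))
    return np.array(preds)

low = predictions(1)    # rigid model: high bias, low variance
high = predictions(12)  # flexible model: low bias, high variance
print(f"degree 1:  bias {abs(low.mean() - true_y0):.3f}, spread {low.std():.3f}")
print(f"degree 12: bias {abs(high.mean() - true_y0):.3f}, spread {high.std():.3f}")
```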
Correlation measures how strongly one variable moves with another. A feature that is strongly correlated with the target is usually more useful to the model, hence it is very important to check the correlations among your data variables before any operation. There are two types of correlation, positive and negative. Positive correlation means the variables move in the same direction, while negative correlation means that when one variable increases, the other decreases.
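Pearson correlation is easy to compute with NumPy's `corrcoef`. The variable names and numbers below are made up purely for illustration: one pair moves together (correlation near +1), the other moves in opposite directions (near -1).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical variables, invented for this example
hours_studied = rng.uniform(0, 10, 100)
exam_score = 5 * hours_studied + rng.normal(0, 5, 100)      # moves with it
hours_of_tv = -hours_studied + rng.normal(0, 1, 100) + 10   # moves against it

r_pos = np.corrcoef(hours_studied, exam_score)[0, 1]
r_neg = np.corrcoef(hours_studied, hours_of_tv)[0, 1]
print(f"positive correlation: {r_pos:+.2f}")
print(f"negative correlation: {r_neg:+.2f}")
```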
This is another important concept in machine learning, used to check the efficiency of your model. Read more …
© Copyright 2020 Capable Machine
Created by Sarang Deshmukh