2. Learning Models

The process of training an ML model involves providing an ML algorithm (that is, the learning algorithm) with training data to learn from. The term ML model refers to the model artifact that is created by the training process.

The training data must contain the correct answer, which is known as a target or target attribute. The learning algorithm finds patterns in the training data that map the input data attributes to the target (the answer that you want to predict), and it outputs an ML model that captures these patterns. You can then use the ML model to get predictions on new data for which you do not know the target.
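
As a concrete illustration of this train-then-predict workflow, here is a minimal sketch using scikit-learn's LinearRegression as a stand-in for an arbitrary learning algorithm; the data values are invented for illustration.

```python
# Minimal sketch of the train-then-predict workflow described above.
# LinearRegression stands in for any learning algorithm; the data
# here is made up for illustration.
from sklearn.linear_model import LinearRegression

# Training data: input attributes X and the known target y.
X_train = [[1.0], [2.0], [3.0], [4.0]]
y_train = [2.1, 3.9, 6.2, 8.1]

# The learning algorithm finds patterns mapping X to y and
# outputs a model artifact.
model = LinearRegression()
model.fit(X_train, y_train)

# The model predicts the target for new data where it is unknown.
print(model.predict([[5.0]]))  # roughly 10
```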

Organizing machine learning algorithms is useful because it forces you to think about the roles of the input data and the model preparation process, and to select the algorithm most appropriate for your problem in order to get the best result. Algorithms are often grouped by similarity in terms of their function.

2.1. Regression Algorithms

Regression methods model the relationship between variables, iteratively refining that model using a measure of error in the predictions it makes. Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. This can be confusing because regression may refer both to a class of problem and to a class of algorithm; strictly speaking, regression is a process.

  1. Ordinary Least Squares Regression (OLSR)
  2. Linear Regression
  3. Logistic Regression
  4. Stepwise Regression
  5. Multivariate Adaptive Regression Splines (MARS)
  6. Locally Estimated Scatterplot Smoothing (LOESS)
supervised learning
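
As a concrete example of regression as an iterative, error-driven process, here is a minimal sketch that fits a line by gradient descent on the mean squared error; the data, learning rate, and iteration count are invented for illustration (OLSR would reach the same solution in closed form).

```python
# Fit y = w*x + b by repeatedly reducing the mean squared error of
# the model's predictions (plain gradient descent on toy data).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])

w, b = 0.0, 0.0
lr = 0.01  # learning rate (assumed, not tuned)
for _ in range(2000):
    err = (w * x + b) - y           # the measure of error being refined
    w -= lr * 2 * np.mean(err * x)  # gradient of MSE w.r.t. w
    b -= lr * 2 * np.mean(err)      # gradient of MSE w.r.t. b

print(w, b)  # close to the OLS solution, roughly w = 2, b = 0
```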

2.2. Instance-based Algorithms

The instance-based learning model is a decision problem built around instances or examples of training data that are deemed important or required by the model. Such methods typically build up a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction. For this reason, instance-based methods are also called winner-take-all methods and memory-based learning. The focus is on the representation of the stored instances and the similarity measures used between instances.

  1. k-Nearest Neighbor (kNN)
  2. Learning Vector Quantization (LVQ)
  3. Self-Organizing Map (SOM)
  4. Locally Weighted Learning (LWL)
supervised learning
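
Here is a minimal sketch of k-Nearest Neighbor (item 1 above): the "model" is just the stored training instances plus a similarity measure. The toy data and the choice of k are assumptions.

```python
# kNN: compare a query to the stored database of examples and let
# the k best matches vote ("winner take all").
import numpy as np

X_train = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [7.8, 8.2]])
y_train = np.array([0, 0, 1, 1])

def knn_predict(x, k=3):
    # Similarity measure: Euclidean distance to every stored instance.
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    # Majority vote among the k nearest neighbors.
    return np.bincount(y_train[nearest]).argmax()

print(knn_predict(np.array([7.5, 7.9])))  # -> 1
```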

2.3. Regularization Algorithms

Regularization algorithms are extensions of other methods (typically regression methods) that penalize models based on their complexity, favoring simpler models that also generalize better.

  1. Ridge Regression
  2. Least Absolute Shrinkage and Selection Operator (LASSO)
  3. Elastic Net
  4. Least-Angle Regression (LARS)
supervised learning
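
As an illustration, here is a minimal Ridge Regression sketch (item 1 above): ordinary least squares plus an L2 penalty on the weights, so more complex (larger-weight) models are penalized. The regularization strength alpha and the data are assumptions.

```python
# Ridge regression in closed form: w = (X^T X + alpha*I)^{-1} X^T y.
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([3.1, 2.9, 7.2, 6.8])
alpha = 1.0  # regularization strength (assumed)

# The alpha*I term penalizes weight magnitude, shrinking the solution
# toward a simpler model than unpenalized OLS would give.
n_features = X.shape[1]
w = np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)
print(w)
```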

2.4. Decision Tree Algorithms

Decision tree methods construct a model of decisions made based on actual values of attributes in the data. Decisions fork in tree structures until a prediction decision is made for a given record. Decision trees are trained on data for classification and regression problems. Decision trees are often fast and accurate and a big favorite in machine learning.

  1. Classification and Regression Tree (CART)
  2. Iterative Dichotomiser 3 (ID3)
  3. C4.5 and C5.0 (different versions of a powerful approach)
  4. Chi-squared Automatic Interaction Detection (CHAID)
  5. Decision Stump
  6. M5
  7. Conditional Decision Trees
supervised learning
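
Here is a minimal sketch of a Decision Stump (item 5 above), a one-split decision tree: it scans thresholds on a single attribute and keeps the fork that classifies the training data best. The data is made up for illustration.

```python
# Decision stump: find the single attribute threshold whose fork
# best separates the training records.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0])  # one attribute
y = np.array([0, 0, 0, 1, 1, 1])

best_thresh, best_acc = None, -1.0
for t in (x[:-1] + x[1:]) / 2:          # candidate split points
    pred = (x > t).astype(int)          # fork: left -> 0, right -> 1
    acc = np.mean(pred == y)
    if acc > best_acc:
        best_thresh, best_acc = t, acc

print(best_thresh, best_acc)  # -> 4.5, 1.0 on this toy data
```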

2.5. Bayesian Algorithms

Bayesian methods are those that explicitly apply Bayes’ Theorem for problems such as classification and regression. In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. Even if these features depend on each other or upon the existence of the other features, all of these properties independently contribute to the probability that the response variable takes a particular value.

  1. Naive Bayes
  2. Gaussian Naive Bayes
  3. Multinomial Naive Bayes
  4. Averaged One-Dependence Estimators (AODE)
  5. Bayesian Belief Network (BBN)
  6. Bayesian Network (BN)
supervised learning
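
As a concrete example of the independence assumption described above, here is a minimal Gaussian Naive Bayes sketch (items 1–2 above): each feature contributes an independent Gaussian likelihood per class, and Bayes' Theorem combines them with the class prior. The data is made up for illustration.

```python
# Gaussian Naive Bayes: per-class priors and per-feature Gaussians,
# combined under the naive independence assumption.
import numpy as np

X = np.array([[1.0, 2.0], [1.2, 1.8], [6.0, 9.0], [5.8, 9.2]])
y = np.array([0, 0, 1, 1])

def gaussian_nb_predict(x):
    scores = []
    for c in (0, 1):
        Xc = X[y == c]
        prior = len(Xc) / len(X)
        mu, var = Xc.mean(axis=0), Xc.var(axis=0) + 1e-9
        # Features are assumed independent given the class, so their
        # log-likelihoods simply add.
        loglik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        scores.append(np.log(prior) + loglik)
    return int(np.argmax(scores))

print(gaussian_nb_predict(np.array([6.1, 8.9])))  # -> 1
```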

2.6. Clustering Algorithms

Clustering, like regression, describes both a class of problem and a class of methods. Clustering methods are typically organized by modeling approach, such as centroid-based and hierarchical. All methods are concerned with using the inherent structures in the data to best organize it into groups of maximum commonality.

  1. k-Means
  2. k-Medians
  3. Expectation Maximisation (EM)
  4. Hierarchical Clustering
unsupervised learning
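
Here is a minimal k-Means sketch (item 1 above), a centroid-based method: points are assigned to their nearest centroid, then centroids are re-estimated from their assigned points, and the two steps repeat. The data, k, and the naive initialization are assumptions.

```python
# k-means: alternate between assigning points to the nearest centroid
# and moving each centroid to the mean of its assigned points.
import numpy as np

X = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [8.5, 7.5]])
k = 2
centroids = X[:k].copy()  # naive initialization (assumed)

for _ in range(10):
    # Assignment step: distance from every point to every centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: recompute each centroid from its group.
    centroids = np.array([X[labels == c].mean(axis=0) for c in range(k)])

print(labels, centroids)
```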

2.7. Association Rule Learning Algorithms

Association rule learning methods extract rules that best explain observed relationships between variables in data. These rules can discover important and commercially useful associations in large multidimensional datasets that can be exploited by an organization.

  1. Apriori algorithm
  2. Eclat algorithm
unsupervised learning
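
Here is a simplified sketch in the spirit of Apriori (item 1 above), restricted to two-item rules for brevity: count itemset support over transactions, then keep rules A -> B whose support and confidence clear the thresholds. The transactions and thresholds are invented for illustration.

```python
# Support/confidence rule mining over a toy transaction database.
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]
min_support, min_conf = 0.5, 0.6  # thresholds (assumed)
n = len(transactions)

def support(itemset):
    # Fraction of transactions containing the whole itemset.
    return sum(itemset <= t for t in transactions) / n

items = sorted(set().union(*transactions))
for a, b in combinations(items, 2):
    for lhs, rhs in (({a}, {b}), ({b}, {a})):
        sup = support(lhs | rhs)
        if sup >= min_support and sup / support(lhs) >= min_conf:
            print(f"{lhs} -> {rhs} (support={sup:.2f})")
```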

2.8. Dimensionality Reduction Algorithms

Like clustering methods, dimensionality reduction methods seek and exploit the inherent structure in the data, but in this case in an unsupervised manner, in order to summarize or describe the data using less information. This can be useful for visualizing high-dimensional data or for simplifying data that can then be used in a supervised learning method. Many of these methods can be adapted for use in classification and regression.

  1. Principal Component Analysis (PCA)
  2. Principal Component Regression (PCR)
  3. Partial Least Squares Regression (PLSR)
  4. Sammon Mapping
  5. Multidimensional Scaling (MDS)
  6. Projection Pursuit
  7. Linear Discriminant Analysis (LDA)
  8. Mixture Discriminant Analysis (MDA)
  9. Quadratic Discriminant Analysis (QDA)
  10. Flexible Discriminant Analysis (FDA)
unsupervised learning
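
As an illustration, here is a minimal Principal Component Analysis sketch (item 1 above): center the data, take the SVD, and keep the top principal components to describe the data with less information. The random data and the number of components kept are assumptions.

```python
# PCA via SVD: project centered data onto its top-k directions of
# greatest variance.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # 100 samples, 5 features (toy)

Xc = X - X.mean(axis=0)                # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                                  # components to keep (assumed)
X_reduced = Xc @ Vt[:k].T              # summarize 5 features as 2
print(X_reduced.shape)                 # (100, 2)
```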

2.9. Ensemble Algorithms

Ensemble methods are models composed of multiple weaker models that are independently trained and whose predictions are combined in some way to make the overall prediction. Much effort is put into what types of weak learners to combine and the ways in which to combine them. This is a very powerful class of techniques and as such is very popular.

  1. Boosting
  2. Bootstrapped Aggregation (Bagging)
  3. AdaBoost
  4. Stacked Generalization (blending)
  5. Gradient Boosting Machines (GBM)
  6. Gradient Boosted Regression Trees (GBRT)
  7. Random Forest
supervised learning
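
Here is a minimal Bootstrapped Aggregation sketch (item 2 above): several weak learners, decision stumps here, are trained independently on bootstrap resamples of the data, and their predictions are combined by majority vote. The data, number of rounds, and choice of weak learner are assumptions.

```python
# Bagging: fit one stump per bootstrap resample, then vote.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0])
y = np.array([0, 0, 0, 1, 1, 1])

def fit_stump(xs, ys):
    # Best single-threshold classifier on this resample.
    order = np.argsort(xs)
    xs, ys = xs[order], ys[order]
    candidates = (xs[:-1] + xs[1:]) / 2
    accs = [np.mean((xs > t).astype(int) == ys) for t in candidates]
    return candidates[int(np.argmax(accs))]

thresholds = []
for _ in range(25):                        # 25 weak learners (assumed)
    idx = rng.integers(0, len(x), len(x))  # sample with replacement
    thresholds.append(fit_stump(x[idx], y[idx]))

def predict(q):
    # Combine the independently trained stumps by majority vote.
    votes = [int(q > t) for t in thresholds]
    return int(np.mean(votes) > 0.5)

print(predict(7.5), predict(1.5))  # expected: 1 0 on this toy data
```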
