Course Outline

Introduction

This section offers a broad overview of machine learning: what it means, when to apply it, key considerations, and its advantages and disadvantages. Topics include data types (structured/unstructured, static/streamed), data integrity and volume, data-driven versus user-driven analytics, and the distinction between statistical models and machine learning models. It also covers challenges in unsupervised learning, the bias-variance trade-off, iteration and evaluation processes, cross-validation techniques, and the differences between supervised, unsupervised, and reinforcement learning.

MAJOR TOPICS

1. Grasping Naive Bayes

  • Core principles of Bayesian methods
  • Probability theory
  • Joint probability
  • Conditional probability via Bayes' theorem
  • The Naive Bayes algorithm
  • Naive Bayes classification
  • The Laplace estimator
  • Applying numeric features with Naive Bayes
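The bullet points above can be made concrete with a small sketch. The following toy classifier (illustrative only, with made-up weather data) applies Bayes' theorem naively over the features and uses a Laplace estimator so unseen feature/class combinations never get zero probability:

```python
from collections import Counter, defaultdict

def train_nb(rows, labels, alpha=1.0):
    """Estimate class priors and per-feature likelihoods with Laplace smoothing."""
    priors = Counter(labels)
    n = len(labels)
    # counts[cls][i][value] = how often feature i takes `value` under class cls
    counts = defaultdict(lambda: defaultdict(Counter))
    for row, cls in zip(rows, labels):
        for i, v in enumerate(row):
            counts[cls][i][v] += 1
    values = [set(r[i] for r in rows) for i in range(len(rows[0]))]

    def predict(row):
        best, best_p = None, -1.0
        for cls in priors:
            p = priors[cls] / n
            for i, v in enumerate(row):
                # Laplace estimator: add alpha to every count
                p *= (counts[cls][i][v] + alpha) / (priors[cls] + alpha * len(values[i]))
            if p > best_p:
                best, best_p = cls, p
        return best
    return predict

# toy data: (outlook, windy) -> play?
rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "yes"), ("overcast", "no")]
labels = ["yes", "no", "no", "yes"]
predict = train_nb(rows, labels)
print(predict(("sunny", "no")))  # "yes"
```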

2. Mastering Decision Trees

  • Divide and conquer strategy
  • The C5.0 decision tree algorithm
  • Selecting the optimal split
  • Pruning the decision tree
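A brief sketch of how a split is selected: decision-tree learners such as C5.0 pick the split with the highest information gain, computed from entropy. A minimal plain-Python illustration:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(labels, split_groups):
    """Information gain of partitioning `labels` into `split_groups`."""
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in split_groups)
    return entropy(labels) - remainder

parent = ["yes", "yes", "no", "no"]
# A split that separates the classes perfectly gains the full 1 bit of entropy.
print(info_gain(parent, [["yes", "yes"], ["no", "no"]]))  # 1.0
```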

3. Comprehending Neural Networks

  • Transition from biological to artificial neurons
  • Activation functions
  • Network topology
  • Determining the number of layers
  • Direction of information flow
  • Node count per layer
  • Training neural networks using backpropagation
  • Deep Learning
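As a rough sketch of the ideas above, the following toy code runs a single artificial neuron forward through a sigmoid activation and applies one backpropagation-style weight update (the weights and learning rate are arbitrary illustrative values):

```python
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def neuron(weights, bias, inputs):
    """One artificial neuron: weighted sum of inputs through an activation."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def update(weights, bias, inputs, target, eta=0.5):
    """One gradient step on squared error for a single sigmoid neuron."""
    out = neuron(weights, bias, inputs)
    # dE/dout * dout/dnet  (squared-error loss, sigmoid derivative)
    delta = (out - target) * out * (1.0 - out)
    new_w = [w - eta * delta * x for w, x in zip(weights, inputs)]
    return new_w, bias - eta * delta

w, b = [0.5, -0.5], 0.0
out_before = neuron(w, b, [1.0, 1.0])
w, b = update(w, b, [1.0, 1.0], target=1.0)
print(neuron(w, b, [1.0, 1.0]) > out_before)  # True: output moved toward target
```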

4. Exploring Support Vector Machines

  • Classification using hyperplanes
  • Identifying the maximum margin
  • Handling linearly separable data
  • Handling non-linearly separable data
  • Utilizing kernels for non-linear spaces
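The kernel idea can be shown in a few lines: a kernel returns an inner product in an implicit feature space, which is how an SVM handles non-linearly separable data without mapping points explicitly. A minimal sketch (toy values, not a full SVM):

```python
from math import exp

def linear_kernel(a, b):
    """Plain inner product: the linearly separable case."""
    return sum(x * y for x, y in zip(a, b))

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian (RBF) kernel: similarity in an infinite-dimensional space."""
    sq_dist = sum((x - y) ** 2 for x, y in zip(a, b))
    return exp(-gamma * sq_dist)

print(rbf_kernel([0, 0], [0, 0]))  # 1.0: identical points, maximal similarity
print(linear_kernel([1, 2], [3, 4]))  # 11
```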

5. Analyzing Clustering

  • Clustering as a machine learning task
  • The k-means algorithm for clustering
  • Using distance metrics for cluster assignment and updates
  • Selecting the appropriate number of clusters
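One k-means iteration, assignment by distance followed by a centre update, can be sketched as follows (toy 2-D points, hand-picked initial centres):

```python
def assign(points, centers):
    """Assign each point to its nearest centre (squared Euclidean distance)."""
    def nearest(p):
        return min(range(len(centers)),
                   key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
    return [nearest(p) for p in points]

def update_centers(points, assignment, k):
    """Move each centre to the mean of the points assigned to it."""
    centers = []
    for j in range(k):
        members = [p for p, a in zip(points, assignment) if a == j]
        centers.append(tuple(sum(c) / len(members) for c in zip(*members)))
    return centers

points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centers = [(0.0, 0.0), (10.0, 10.0)]   # initial guesses
assignment = assign(points, centers)   # [0, 0, 1, 1]
centers = update_centers(points, assignment, 2)
print(assignment, centers)
```

In the full algorithm these two steps repeat until the assignments stop changing.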

6. Evaluating Performance for Classification

  • Handling classification prediction data
  • Examining confusion matrices in detail
  • Using confusion matrices to assess performance
  • Beyond accuracy – alternative performance metrics
  • The kappa statistic
  • Sensitivity and specificity
  • Precision and recall
  • The F-measure
  • Visualizing performance trade-offs
  • ROC curves
  • Estimating future performance
  • The holdout method
  • Cross-validation
  • Bootstrap sampling
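Most of the metrics listed above derive from the four cells of a binary confusion matrix. A small self-contained sketch (toy labels, "yes" as the positive class):

```python
def confusion(actual, predicted, positive="yes"):
    """Return (TP, TN, FP, FN) counts for a binary problem."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    tn = sum(1 for a, p in zip(actual, predicted) if a != positive and p != positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    return tp, tn, fp, fn

actual    = ["yes", "yes", "yes", "no", "no", "no"]
predicted = ["yes", "yes", "no",  "no", "no", "yes"]
tp, tn, fp, fn = confusion(actual, predicted)

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)   # of predicted positives, how many were right
recall    = tp / (tp + fn)   # sensitivity: of actual positives, how many found
f_measure = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f_measure)
```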

7. Optimizing Stock Models for Enhanced Performance

  • Using caret for automated parameter tuning
  • Constructing a simple tuned model
  • Customizing the tuning process
  • Boosting model performance with meta-learning
  • Understanding ensemble methods
  • Bagging
  • Boosting
  • Random forests
  • Training random forests
  • Evaluating random forest performance
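Bagging can be sketched with nothing more than bootstrap sampling and a majority vote. In this toy version each "model" is simply the majority class of its bootstrap sample, which is enough to show the mechanics:

```python
import random

def bootstrap(data, rng):
    """Sample len(data) items with replacement: the resampling step in bagging."""
    return [rng.choice(data) for _ in data]

def majority(votes):
    return max(set(votes), key=votes.count)

rng = random.Random(42)
labels = ["yes", "yes", "yes", "no", "no"]
# Fit one trivially simple "model" per bootstrap replicate, then vote.
stumps = [majority(bootstrap(labels, rng)) for _ in range(11)]
print(majority(stumps))
```

Boosting differs in that successive replicates are weighted toward previously misclassified examples, and random forests add random feature selection at each tree split on top of bagging.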

MINOR TOPICS

8. Learning Classification via Nearest Neighbors

  • The kNN algorithm
  • Calculating distance
  • Choosing an appropriate k value
  • Preparing data for kNN usage
  • Why the kNN algorithm is considered lazy
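A minimal kNN sketch (toy 2-D points). Note that there is no training phase at all: every computation happens at prediction time, which is why the algorithm is called lazy:

```python
from collections import Counter
from math import sqrt

def euclidean(a, b):
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    nearest = sorted(train, key=lambda item: euclidean(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "a"), ((1.5, 1.0), "a"), ((1.2, 0.8), "a"),
         ((8.0, 8.0), "b"), ((8.5, 9.0), "b")]
print(knn_predict(train, (1.0, 0.9)))  # "a"
```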

9. Exploring Classification Rules

  • Separate and conquer approach
  • The One Rule algorithm
  • The RIPPER algorithm
  • Deriving rules from decision trees
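The One Rule (OneR) algorithm is the simplest instance of the rule-learning theme above: build one rule per feature (predict the majority class for each feature value) and keep the feature whose rule misclassifies the fewest training rows. A toy sketch:

```python
from collections import Counter, defaultdict

def one_rule(rows, labels, feature):
    """OneR for one feature: map each value to its majority class."""
    by_value = defaultdict(Counter)
    for row, cls in zip(rows, labels):
        by_value[row[feature]][cls] += 1
    rule = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
    errors = sum(1 for row, cls in zip(rows, labels) if rule[row[feature]] != cls)
    return rule, errors

# toy data: (outlook, temperature) -> play?
rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
labels = ["no", "no", "yes", "yes"]

# Keep the feature whose single rule makes the fewest training errors.
best = min(range(2), key=lambda f: one_rule(rows, labels, f)[1])
rule, errors = one_rule(rows, labels, best)
print(best, rule, errors)
```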

10. Comprehending Regression

  • Simple linear regression
  • Ordinary least squares estimation
  • Correlations
  • Multiple linear regression
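Ordinary least squares for simple linear regression has a closed form, sketched below on noiseless toy data (y = 1 + 2x):

```python
def ols(xs, ys):
    """Ordinary least squares fit of y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # exactly y = 1 + 2x
a, b = ols(xs, ys)
print(a, b)  # 1.0 2.0
```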

11. Understanding Regression Trees and Model Trees

  • Incorporating regression into trees
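A regression tree chooses splits differently from a classification tree: instead of maximising information gain it minimises squared error, with each leaf predicting the mean of its targets. A one-split sketch on toy data:

```python
def sse(ys):
    """Sum of squared errors around the mean (the leaf prediction)."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(xs, ys):
    """Find the threshold minimising the combined SSE of the two leaves."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        err = sse(left) + sse(right)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = [5.0, 5.2, 4.8, 20.0, 20.4, 19.6]
print(best_split(xs, ys))  # 10.0: separates the two regimes
```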

12. Grasping Association Rules

  • The Apriori algorithm for association rule learning
  • Measuring rule interest – support and confidence
  • Constructing a set of rules using the Apriori principle
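Support and confidence, the two interest measures above, can be computed directly. A toy market-basket sketch (the Apriori principle then prunes the search, since every subset of a frequent itemset must itself be frequent):

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = set(itemset)
    return sum(1 for t in transactions if itemset <= set(t)) / len(transactions)

def confidence(transactions, lhs, rhs):
    """Of the transactions containing lhs, the fraction also containing rhs."""
    return support(transactions, set(lhs) | set(rhs)) / support(transactions, lhs)

baskets = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]
print(support(baskets, {"bread"}))               # 0.75
print(confidence(baskets, {"bread"}, {"milk"}))  # 2/3
```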

Extras

  • Spark/PySpark/MLlib and Multi-armed bandits
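A multi-armed bandit can be sketched with the classic epsilon-greedy strategy (a common introductory approach, not necessarily the exact one covered in the course): with small probability explore a random arm, otherwise pull the arm with the best running average reward. The arm means below are made up for illustration:

```python
import random

def epsilon_greedy(true_means, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: explore with prob. epsilon, else exploit."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    values = [0.0] * len(true_means)   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))                        # explore
        else:
            arm = max(range(len(true_means)), key=lambda a: values[a])  # exploit
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts

counts = epsilon_greedy([0.1, 0.5, 1.0])
print(counts.index(max(counts)))  # the best arm (index 2) should be pulled most
```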

Requirements

Knowledge of Python

Duration

21 Hours
