This course introduces aspiring data scientists to one of the main modeling families of supervised Machine Learning: Classification. The curriculum covers training predictive models to classify categorical outcomes and using error metrics to compare different models. The hands-on section focuses on best practices for classification, including train/test splits and handling data sets with unbalanced classes.
Upon completion, learners will be able to train predictive models to classify categorical outcomes, compare models using error metrics, and apply classification best practices such as train/test splits and handling unbalanced classes.
Prerequisites for this course include familiarity with programming in a Python development environment, as well as a fundamental understanding of Data Cleaning, Exploratory Data Analysis, Calculus, Linear Algebra, Probability, and Statistics.
Certificate Available ✔
Supervised Machine Learning: Classification covers essential topics including logistic regression, K Nearest Neighbors, support vector machines, decision trees, ensemble models, and modeling unbalanced classes. Gain hands-on experience in classifying categorical outcomes and comparing different models based on error metrics.
This module introduces learners to logistic regression, a fundamental concept in supervised machine learning for classification. Topics covered include classification error metrics, implementing logistic regression models, and hands-on labs for practical application.
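As an illustration only (the course's own lab notebooks are not reproduced here), a minimal scikit-learn sketch of fitting a logistic regression classifier and reporting common classification error metrics might look like the following; the dataset and parameter choices are assumptions.

```python
# Minimal sketch: logistic regression plus classification error metrics.
# Dataset and hyperparameters are illustrative, not the course's lab setup.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# max_iter raised so the solver converges on unscaled features
clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("confusion matrix:\n", confusion_matrix(y_test, y_pred))
```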
Explore K Nearest Neighbors as a classification technique, covering decision boundaries, distance metrics, feature scaling, and practical applications through notebooks and lab exercises. Gain an in-depth understanding of the pros and cons of K Nearest Neighbors for classification.
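A minimal sketch (assumed dataset and k value) of K Nearest Neighbors with feature scaling is shown below; scaling matters because distance-based methods are sensitive to feature magnitude.

```python
# Minimal sketch: KNN with feature scaling in a pipeline.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Scale features so no single feature dominates the distance computation
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_train, y_train)
print("test accuracy:", knn.score(X_test, y_test))
```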
Gain insight into Support Vector Machines, including classification, cost function, regularization, Gaussian kernels, and implementing kernel models. Utilize notebooks and lab exercises to deepen your understanding of Support Vector Machines for classification.
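For orientation, here is a minimal sketch of a Support Vector Machine with a Gaussian (RBF) kernel; C controls regularization strength and gamma controls the kernel width. The synthetic dataset and parameter values are assumptions, not the course's lab settings.

```python
# Minimal sketch: SVM classification with an RBF (Gaussian) kernel.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=1
)

# C sets the regularization trade-off; gamma="scale" sets the kernel width
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
svm.fit(X_train, y_train)
print("test accuracy:", svm.score(X_test, y_test))
```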
Delve into the world of decision trees, covering an overview of tree-based classifiers, entropy-based splitting, pros and cons, and practical application through notebooks and lab exercises. Gain expertise in building and analyzing decision trees for classification.
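A minimal sketch of an entropy-based decision tree follows; the dataset and the max_depth value are assumptions chosen only to keep the printed tree small.

```python
# Minimal sketch: decision tree classifier split on information gain (entropy).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# criterion="entropy" selects splits that maximize information gain
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
tree.fit(X_train, y_train)
print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree))  # text rendering of the learned splits
```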
This module covers ensemble models, including bagging, random forest, boosting, stacking, and practical application through demo and practice labs. Gain hands-on experience in using ensemble methods for classification and understand their role in improving model performance.
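As a rough sketch of the ensemble families named above, the following compares bagging, random forest, boosting, and stacking on one dataset; all hyperparameters are illustrative defaults rather than the course's settings.

```python
# Minimal sketch: bagging, random forest, boosting, and stacking compared.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import (
    BaggingClassifier, RandomForestClassifier,
    GradientBoostingClassifier, StackingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

models = {
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
    "stacking": StackingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("gb", GradientBoostingClassifier(random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name:13s} test accuracy: {model.score(X_test, y_test):.3f}")
```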
Explore modeling unbalanced classes, including model interpretability, examples of self-interpretable and non-self-interpretable models, and various modeling approaches for handling unbalanced classes. Engage in practical exercises to deepen your understanding of modeling unbalanced classes in a data set.
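One common approach to unbalanced classes is to re-weight the minority class and to judge the model with precision and recall rather than accuracy alone; the sketch below illustrates this with a synthetic 95/5 class split, where the class ratio and weighting choice are assumptions.

```python
# Minimal sketch: handling an unbalanced data set with class weights.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Synthetic data with a 95/5 class split to imitate an unbalanced problem
X, y = make_classification(
    n_samples=2000, n_features=10, weights=[0.95, 0.05], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# class_weight="balanced" penalizes errors on the rare class more heavily
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```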
AI for Medicine is a three-course Specialization that provides practical experience in applying machine learning to concrete problems in medicine, such as diagnosing...
Learn to design and train basic convolutional neural networks using PyTorch in this 1-hour project.
Explore ML pipelines on Google Cloud, including TFX, Kubeflow, and MLflow. Learn to orchestrate, automate, and manage ML pipelines across various frameworks.
Learn how to implement a Support Vector Machine algorithm for classification in Python, building your own SVM model with amazing visualization.