Embark on a journey into the world of machine learning with the "Machine Learning: Regression" course. This hands-on program, part of the University of Washington's curriculum, delves into the essential concepts and practical applications of regression models.
Throughout this course, you will learn to predict continuous values, such as housing prices, from input features, and explore real-world applications of regression, including predicting health outcomes and stock prices. By working with regularized linear regression models, learners will gain insight into feature selection and into how characteristics of the data, such as outliers, affect model performance.
With a focus on optimization algorithms, learners will develop the skills to handle large datasets and compare models of varying complexity. The course also covers the critical aspects of bias and variance in modeling data, the notion of sparsity, and the application of LASSO for forming sparse solutions.
By the end of the course, you will be adept at estimating model parameters, tuning parameters through cross-validation, and analyzing model performance. Implementing these techniques in Python, you will build a regression model to predict prices using a housing dataset, thereby gaining practical experience in applying the learned concepts.
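To make that concrete, here is a minimal sketch of the end-to-end workflow using scikit-learn and synthetic data; the feature names (square footage, bedroom count), value ranges, and grid of penalty values are illustrative assumptions rather than the course's own dataset or starter code.

```python
# A minimal sketch of the workflow: estimate parameters, tune a regularization
# strength by cross-validation, and use the fitted model to predict prices.
# The features and synthetic data below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
sqft = rng.uniform(500, 4000, n)           # hypothetical living area in square feet
bedrooms = rng.integers(1, 6, n)           # hypothetical bedroom count
price = 150 * sqft + 10_000 * bedrooms + rng.normal(0, 25_000, n)

X = np.column_stack([sqft, bedrooms])

# Pick the regularization strength with 5-fold cross-validation,
# then refit on all of the data with the best value.
best_alpha, best_score = None, -np.inf
for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]:
    score = cross_val_score(Ridge(alpha=alpha), X, price, cv=5,
                            scoring="neg_mean_squared_error").mean()
    if score > best_score:
        best_alpha, best_score = alpha, score

model = Ridge(alpha=best_alpha).fit(X, price)
print("chosen alpha:", best_alpha)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("predicted price for a 2,000 sqft, 3-bedroom house:",
      model.predict([[2000, 3]])[0])
```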
This in-depth exploration of regression models takes learners from the fundamentals to advanced techniques implemented in Python, equipping them with the skills to build and analyze regression models for a range of real-world applications.
Welcome to the introductory module, where you will gain an overview of the course structure and the essential software tools required. The module also provides insights into the assumed background and the community aspect of the learning experience.
Delve into the fundamentals of simple linear regression, exploring the regression task, the model, and the optimization objective. By learning optimization techniques such as gradient descent and examining the impact of high-leverage points, you will lay the foundation for building regression models.
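As a rough illustration of that idea (not the course's assignment code), the sketch below fits a line to synthetic one-dimensional data by gradient descent on the residual sum of squares; the data, step size, and iteration count are arbitrary assumptions.

```python
# A minimal sketch of simple linear regression fit by gradient descent.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 1.0, 200)   # synthetic line plus noise

w0, w1 = 0.0, 0.0          # intercept and slope
step_size = 0.01
for _ in range(5000):
    residual = y - (w0 + w1 * x)
    # Descent step using the gradient of the (mean) residual sum of squares.
    w0 += step_size * 2 * residual.mean()
    w1 += step_size * 2 * (residual * x).mean()

print(f"fitted line: y ≈ {w0:.2f} + {w1:.2f} x")
```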
The multiple regression module introduces concepts such as polynomial regression and modeling seasonality. With a focus on matrix notation and gradient descent, learners will gain a deeper understanding of how to implement multiple regression models for predicting house prices.
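The sketch below illustrates the matrix-notation view on synthetic data: a constant, a hypothetical square-footage feature, and its square are stacked into a feature matrix, and the corresponding least-squares problem is solved directly. The feature choice and data are assumptions for illustration only.

```python
# A minimal sketch of multiple regression in matrix notation: build the feature
# matrix H = [1, sqft, sqft^2] and solve the least-squares problem whose
# solution is w_hat = (H^T H)^{-1} H^T y (lstsq solves it stably via SVD).
import numpy as np

rng = np.random.default_rng(2)
sqft = rng.uniform(500, 4000, 300)
price = 0.05 * sqft**2 + 80 * sqft + rng.normal(0, 50_000, 300)

H = np.column_stack([np.ones_like(sqft), sqft, sqft**2])
w_hat, *_ = np.linalg.lstsq(H, price, rcond=None)
print("estimated coefficients:", w_hat)
```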
The Assessing Performance module provides insights into training and generalization error, overfitting, and the bias-variance tradeoff. By exploring polynomial regression and understanding the sources of error, learners will gain valuable insight into assessing model performance.
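A small illustration of that tradeoff, assuming synthetic sine-shaped data and an arbitrary train/validation split: as the polynomial degree grows, training error keeps falling while error on held-out data eventually worsens.

```python
# A minimal sketch contrasting training error with held-out error as model
# complexity (polynomial degree) grows; the data and degrees are illustrative.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-3, 3, 60)
y = np.sin(x) + rng.normal(0, 0.3, 60)
x_train, y_train = x[:40], y[:40]
x_valid, y_valid = x[40:], y[40:]

for degree in (1, 3, 6, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    valid_err = np.mean((np.polyval(coeffs, x_valid) - y_valid) ** 2)
    # Training error shrinks with degree; validation error eventually
    # rises again -- the signature of overfitting.
    print(f"degree {degree}: train MSE {train_err:.3f}, validation MSE {valid_err:.3f}")
```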
Explore the symptoms of overfitting and the balance that ridge regression strikes between fitting the data and keeping coefficients small. With a detailed look at the ridge objective, the coefficient path, and tuning the penalty strength through cross-validation, learners will gain practical experience implementing ridge regression using gradient descent.
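A minimal sketch of that idea, assuming synthetic data and the common convention of not penalizing the intercept: gradient descent on the ridge objective simply adds the derivative of the L2 penalty to the usual least-squares gradient.

```python
# A minimal sketch of ridge regression fit by gradient descent. The update adds
# the 2 * l2_penalty * w term from the L2 part of the ridge objective, skipping
# the intercept (feature 0). Data and hyperparameters are illustrative.
import numpy as np

def ridge_gradient_descent(H, y, l2_penalty, step_size=1e-2, n_iters=5000):
    """Minimize mean squared error + l2_penalty * ||w[1:]||^2 by gradient descent."""
    w = np.zeros(H.shape[1])
    for _ in range(n_iters):
        error = H @ w - y
        gradient = 2 * H.T @ error / len(y)        # least-squares part (averaged)
        gradient[1:] += 2 * l2_penalty * w[1:]     # L2 penalty, intercept unpenalized
        w -= step_size * gradient
    return w

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 4.0 + rng.normal(0, 0.5, 200)
H = np.column_stack([np.ones(len(y)), X])          # prepend an intercept column
print(ridge_gradient_descent(H, y, l2_penalty=0.1))
```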
Engage in the feature selection task and examine the algorithms used to select features, along with their computational cost. You will explore the LASSO objective, coordinate descent, and practical considerations for choosing the penalty strength. Implementing LASSO using coordinate descent provides valuable hands-on experience.
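The following sketch shows cyclic coordinate descent for the LASSO objective on synthetic data; the soft-thresholding step is what sets small coefficients exactly to zero and yields sparsity. The data, penalty value, and number of passes are illustrative assumptions.

```python
# A minimal sketch of LASSO via cyclic coordinate descent: each pass updates one
# weight at a time with the soft-thresholding rule.
import numpy as np

def lasso_coordinate_descent(X, y, l1_penalty, n_passes=100):
    """Minimize ||y - Xw||^2 + l1_penalty * ||w||_1 by cyclic coordinate descent."""
    n, d = X.shape
    z = (X ** 2).sum(axis=0)               # per-feature squared norms
    w = np.zeros(d)
    for _ in range(n_passes):
        for j in range(d):
            # rho_j: correlation of feature j with the residual that ignores feature j.
            residual_without_j = y - X @ w + X[:, j] * w[j]
            rho_j = X[:, j] @ residual_without_j
            # Soft thresholding: small rho_j is set exactly to zero (sparsity).
            if rho_j < -l1_penalty / 2:
                w[j] = (rho_j + l1_penalty / 2) / z[j]
            elif rho_j > l1_penalty / 2:
                w[j] = (rho_j - l1_penalty / 2) / z[j]
            else:
                w[j] = 0.0
    return w

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 6))
true_w = np.array([5.0, 0.0, 0.0, -3.0, 0.0, 0.0])   # only two relevant features
y = X @ true_w + rng.normal(0, 0.5, 200)
print(lasso_coordinate_descent(X, y, l1_penalty=50.0))
```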
Nearest Neighbors & Kernel Regression module delves into the limitations of parametric regression and explores the approach of k-nearest neighbors regression. By predicting house prices using k-nearest neighbors regression, learners will gain practical insights into the nuances of non-parametric regression techniques.
The Closing Remarks module offers a comprehensive recap of the essential concepts covered throughout the course, consolidating simple and multiple regression, assessing performance, ridge regression, feature selection, LASSO, and nearest-neighbor regression.