[FREE Coursera Course] Introduction to Machine Learning Offered by Duke University

Introduction to Machine Learning (offered by Duke University) is FREE for a limited time.

Note: You can get this course for FREE until 4/30/2021 as part of Coursera's birthday celebration. The cost will be automatically adjusted to free at checkout. One-time use only. The offer is subject to change and is valid through 4/30/2021.

Course Link: https://www.coursera.org/learn/machine-learning-duke?edocomorp=courserabirthday-2021

Make sure you see this popup notification in your browser tab; a screenshot is shown below.

About this Course

This course will provide you with a foundational understanding of machine learning models (logistic regression, multilayer perceptrons, convolutional neural networks, natural language processing, etc.) and demonstrate how these models can solve complex problems in a variety of industries, from medical diagnostics to image recognition to text prediction. In addition, we have designed practice exercises that will give you hands-on experience implementing these data science models on data sets. These practice exercises will teach you how to implement machine learning algorithms with PyTorch, an open-source library used by leading tech companies in the machine learning field (e.g., Google, NVIDIA, Coca-Cola, eBay, Snapchat, Uber, and many more).

Syllabus — What you will learn from this course

1. Simple Introduction to Machine Learning

The focus of this module is to introduce the concepts of machine learning with as little mathematics as possible. We will introduce basic concepts in machine learning, including logistic regression, a simple but widely employed machine learning (ML) method. Also covered is the multilayer perceptron (MLP), a fundamental neural network. The concept of deep learning is discussed and related to these simpler models.
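For a concrete picture of the two models this module covers, here is a minimal PyTorch sketch of a logistic regression classifier and a small MLP. The feature, hidden, and class counts are arbitrary placeholders, not values from the course.

```python
import torch
import torch.nn as nn

# Logistic regression: a single linear layer followed by a sigmoid.
class LogisticRegression(nn.Module):
    def __init__(self, num_features):
        super().__init__()
        self.linear = nn.Linear(num_features, 1)

    def forward(self, x):
        return torch.sigmoid(self.linear(x))

# Multilayer perceptron: stacked linear layers with a nonlinearity in between.
class MLP(nn.Module):
    def __init__(self, num_features, hidden_size, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, num_classes),
        )

    def forward(self, x):
        return self.net(x)

# Quick shape check on random inputs (placeholder sizes).
model = MLP(num_features=20, hidden_size=64, num_classes=2)
print(model(torch.randn(4, 20)).shape)  # torch.Size([4, 2])
```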

2. Basics of Model Learning

In this module we will discuss the mathematical basis of learning deep networks. We will first work through how the problem of learning a deep network can be framed as minimizing a mathematical function. After defining this mathematical goal, we will introduce validation methods for estimating the real-world performance of the learned networks. We will then discuss how gradient descent, a classical optimization technique, can be used to pursue this goal. Finally, we will discuss both why and how stochastic gradient descent is used in practice to learn deep networks.
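As a rough illustration of these ideas (a sketch, not the course's own material), the loop below minimizes a cross-entropy loss with PyTorch's stochastic gradient descent optimizer on a made-up dataset, updating the model on small random batches rather than the full data set each step.

```python
import torch
import torch.nn as nn

# Toy data: 1,000 examples with 20 features and binary labels (invented for illustration).
X = torch.randn(1000, 20)
y = torch.randint(0, 2, (1000,))

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()                       # the function we minimize
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

batch_size = 32
for epoch in range(5):
    # Stochastic gradient descent: shuffle, then update on small random batches.
    perm = torch.randperm(len(X))
    for i in range(0, len(X), batch_size):
        idx = perm[i:i + batch_size]
        optimizer.zero_grad()
        loss = loss_fn(model(X[idx]), y[idx])
        loss.backward()                               # gradients via backpropagation
        optimizer.step()                              # one gradient-descent update
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```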

3. Image Analysis with Convolutional Neural Networks

This week will cover model training, as well as transfer learning and fine-tuning. In addition to learning the fundamentals of a CNN and how it is applied, careful discussion is provided on the intuition of the CNN, with the goal of providing a conceptual understanding.
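To show what transfer learning and fine-tuning can look like in code, here is a small sketch assuming a recent torchvision release: a ResNet-18 pretrained on ImageNet is frozen, its classification head is replaced for a hypothetical 10-class task, and only the new head is trained at first.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning: start from a ResNet-18 pretrained on ImageNet
# (downloads weights on first use; requires a recent torchvision).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the convolutional backbone so only the new head is trained initially.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for a hypothetical 10-class problem.
model.fc = nn.Linear(model.fc.in_features, 10)

# Optimize only the new head; fine-tuning would later unfreeze some backbone
# layers and continue training with a smaller learning rate.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

x = torch.randn(1, 3, 224, 224)   # one fake RGB image
print(model(x).shape)             # torch.Size([1, 10])
```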

4. Recurrent Neural Networks for Natural Language Processing

This week will cover the application of neural networks to natural language processing (NLP), from simple neural models to the more complex. The fundamental concept of word embeddings is discussed, as well as how such methods are employed within model learning and usage for several NLP applications. A wide range of neural NLP models are also discussed, including recurrent neural networks, and specifically long short-term memory (LSTM) models.
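The following sketch (again an illustration, not course code) ties word embeddings and LSTMs together: token ids are mapped to learned embedding vectors, an LSTM reads the sequence, and its final hidden state feeds a classifier. The vocabulary size and dimensions are made-up values.

```python
import torch
import torch.nn as nn

# A minimal LSTM text classifier: token ids -> embeddings -> LSTM -> class scores.
class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)   # learned word embeddings
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)         # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)          # final hidden state summarizes the sequence
        return self.classifier(hidden[-1])            # (batch, num_classes)

model = LSTMClassifier(vocab_size=5000, embed_dim=100, hidden_dim=128, num_classes=2)
fake_batch = torch.randint(0, 5000, (4, 12))          # 4 sequences of 12 token ids
print(model(fake_batch).shape)                        # torch.Size([4, 2])
```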

Course Link: https://www.coursera.org/learn/machine-learning-duke?edocomorp=courserabirthday-2021

DISCLOSURE: This post contains affiliate links, meaning when you click the links and make a purchase, I receive a commission.
