This repository is based on the Coursera course "Machine Learning" by Prof. Andrew Ng of Stanford University (https://www.coursera.org/learn/machine-learning). In the course, all exercise code is meant to be written in Octave or MATLAB, and some of it is provided in advance.
Here, I wrote all of the code myself in Python 3. This repository can also be regarded as my study notebook, since I added explanations and carried out additional analysis.
**Note:** Some equations do not render on GitHub; they are visible in a Jupyter Notebook environment.
- Linear regression with one and multiple variables
- Gradient descent
- Hypothesis h
- Cost function J
- Learning rate
- Feature normalization
- Normal equation
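
As a quick reference, here is a minimal NumPy sketch of the squared-error cost and batch gradient descent for linear regression (the variable names are my own, not necessarily those used in the notebooks):

```python
import numpy as np

def compute_cost(X, y, theta):
    """Squared-error cost J(theta) = 1/(2m) * sum((X @ theta - y)^2)."""
    m = len(y)
    residual = X @ theta - y
    return (residual @ residual) / (2 * m)

def gradient_descent(X, y, theta, alpha, num_iters):
    """Batch update: theta -= alpha/m * X.T @ (X @ theta - y)."""
    m = len(y)
    J_history = []
    for _ in range(num_iters):
        theta = theta - (alpha / m) * (X.T @ (X @ theta - y))
        J_history.append(compute_cost(X, y, theta))
    return theta, J_history

# Toy data: y = 1 + 2x, with a column of ones for the intercept term.
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x
theta, J_history = gradient_descent(X, y, np.zeros(2), alpha=0.1, num_iters=1000)
print(theta)        # approaches [1.0, 2.0]

# The normal equation recovers the same parameters in closed form,
# with no learning rate to tune:
print(np.linalg.pinv(X.T @ X) @ X.T @ y)
```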
- Logistic regression
- Regularized logistic regression
- Feature mapping
- Decision boundary
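
A minimal sketch of the regularized logistic-regression cost and gradient, assuming NumPy; by convention the intercept term `theta[0]` is excluded from the penalty:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_and_grad(theta, X, y, lam):
    """Cross-entropy cost plus an L2 penalty on theta[1:], and its gradient."""
    m = len(y)
    h = sigmoid(X @ theta)
    J = (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m \
        + (lam / (2 * m)) * np.sum(theta[1:] ** 2)
    grad = X.T @ (h - y) / m
    grad[1:] += (lam / m) * theta[1:]
    return J, grad

# Toy check: a few plain gradient steps on separable 1-D data.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = np.zeros(2)
for _ in range(2000):
    J, grad = cost_and_grad(theta, X, y, lam=1.0)
    theta -= 0.3 * grad
print(J, theta)     # the cost decreases; theta[1] > 0 separates the classes
```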
- Multi-class classification
- Regularized logistic regression
- One-vs-all logistic regression
- Neural network
- Handwriting recognition
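
One-vs-all reduces a K-class problem to K binary logistic-regression classifiers and predicts the class with the highest probability. An illustrative sketch, with plain gradient descent standing in for whatever optimizer the notebooks use:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_one_vs_all(X, y, num_labels, alpha=0.1, num_iters=1000):
    """Fit one binary classifier per label (0..K-1); returns a (K, n) matrix."""
    m, n = X.shape
    all_theta = np.zeros((num_labels, n))
    for k in range(num_labels):
        yk = (y == k).astype(float)          # 1 for class k, 0 otherwise
        theta = np.zeros(n)
        for _ in range(num_iters):
            theta -= (alpha / m) * (X.T @ (sigmoid(X @ theta) - yk))
        all_theta[k] = theta
    return all_theta

def predict_one_vs_all(all_theta, X):
    """Pick the class whose classifier outputs the highest probability."""
    return np.argmax(sigmoid(X @ all_theta.T), axis=1)

# Demo on three well-separated 2-D blobs (plus an intercept column).
rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
y = np.repeat(np.arange(3), 30)
X = np.column_stack([np.ones(90), means[y] + rng.normal(0, 0.3, (90, 2))])
all_theta = train_one_vs_all(X, y, num_labels=3)
print(np.mean(predict_one_vs_all(all_theta, X) == y))   # close to 1.0
```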
- Handwriting recognition
- Neural network
- Regularized cost function and gradient for the neural network
- Feedforward computation
- Backpropagation algorithm
- Gradient checking
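
Gradient checking verifies the backpropagation gradient against a central-difference approximation. A tiny sketch of the numerical side, assuming NumPy:

```python
import numpy as np

def numerical_gradient(J, theta, eps=1e-4):
    """Approximate dJ/dtheta as (J(theta + eps) - J(theta - eps)) / (2*eps),
    one parameter at a time. Slow, so use it only to debug backpropagation."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (J(theta + e) - J(theta - e)) / (2 * eps)
    return grad

# Sanity check against a cost with a known analytic gradient (2 * theta):
theta = np.array([1.0, -2.0, 3.0])
print(numerical_gradient(lambda t: t @ t, theta))   # ~ [2.0, -4.0, 6.0]
```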
- Cross validation
- Regularized linear regression
- Cost function
- Hypothesis
- Bias-variance
- Learning curves
- Polynomial regression
- Feature normalization
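
Learning curves plot training and cross-validation error against the number of training examples; a large gap between the two suggests high variance, while two high, converging curves suggest high bias. A rough sketch, with the regularized normal equation standing in for the notebook's solver:

```python
import numpy as np

def fit_linreg(X, y, lam=0.0):
    """Regularized normal equation; the intercept (column 0) is not penalized."""
    L = lam * np.eye(X.shape[1])
    L[0, 0] = 0.0
    return np.linalg.pinv(X.T @ X + L) @ (X.T @ y)

def learning_curve(X, y, Xval, yval, lam=0.0):
    """Train on the first i examples; record unregularized train/val error."""
    err_train, err_val = [], []
    for i in range(1, len(y) + 1):
        theta = fit_linreg(X[:i], y[:i], lam)
        err_train.append(np.mean((X[:i] @ theta - y[:i]) ** 2) / 2)
        err_val.append(np.mean((Xval @ theta - yval) ** 2) / 2)
    return err_train, err_val

# Demo on noisy linear data.
rng = np.random.default_rng(0)
def make_data(m):
    x = rng.uniform(-2, 2, m)
    return np.column_stack([np.ones(m), x]), 1 + 2 * x + rng.normal(0, 0.3, m)
X, y = make_data(20)
Xval, yval = make_data(20)
err_train, err_val = learning_curve(X, y, Xval, yval)
print(err_train[-1], err_val[-1])   # the two errors converge as m grows
```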
- Support vector machine
- Decision boundary
- Linear classification
- C parameter
- Gaussian kernels
- Spam classification
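
The Gaussian kernel measures the similarity between two examples and lets an SVM fit non-linear decision boundaries. A one-function sketch, assuming NumPy:

```python
import numpy as np

def gaussian_kernel(x1, x2, sigma):
    """Similarity exp(-||x1 - x2||^2 / (2 * sigma^2)), in (0, 1]."""
    diff = x1 - x2
    return np.exp(-(diff @ diff) / (2 * sigma ** 2))

print(gaussian_kernel(np.array([1.0, 2.0, 1.0]),
                      np.array([0.0, 4.0, -1.0]), sigma=2.0))   # ~0.3247
```

Roughly, a large C penalizes misclassified training examples heavily (lower bias, higher variance), while a small C allows a smoother decision boundary.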
- K-means clustering
- Random initialization
- Image compression
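
A compact sketch of K-means with random initialization, assuming NumPy. For image compression, each pixel is afterwards replaced by the color of its nearest centroid:

```python
import numpy as np

def kmeans(X, K, num_iters=10, seed=0):
    """Alternate between assigning points to the nearest centroid
    and moving each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), K, replace=False)]   # random init
    for _ in range(num_iters):
        # Assignment step: index of the closest centroid for each point.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        idx = np.argmin(dists, axis=1)
        # Update step.
        for k in range(K):
            if np.any(idx == k):
                centroids[k] = X[idx == k].mean(axis=0)
    return centroids, idx

# Demo: two tight clusters around (0, 0) and (3, 3).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
centroids, idx = kmeans(X, K=2)
print(centroids)    # one centroid near (0, 0), the other near (3, 3)
```

Because K-means can get stuck in local optima, the course recommends running it several times from different random initializations and keeping the lowest-cost result.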
- Principal component analysis (PCA)
- Dimensionality reduction
- Covariance matrix
- Singular value decomposition (SVD)
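
PCA projects feature-normalized data onto the top K eigenvectors of the covariance matrix, which can be obtained from its SVD. A minimal sketch, assuming NumPy:

```python
import numpy as np

def pca(X):
    """SVD of the covariance matrix Sigma = (1/m) * X.T @ X.
    The columns of U are the principal components."""
    Sigma = (X.T @ X) / X.shape[0]
    U, S, _ = np.linalg.svd(Sigma)
    return U, S

def project(X, U, K):
    """Dimensionality reduction: Z = X @ U[:, :K]."""
    return X @ U[:, :K]

def recover(Z, U, K):
    """Approximate reconstruction in the original space."""
    return Z @ U[:, :K].T

# Demo on nearly one-dimensional 2-D data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
X[:, 1] = 0.5 * X[:, 0] + 0.05 * X[:, 1]
X = (X - X.mean(axis=0)) / X.std(axis=0)    # feature normalization first
U, S = pca(X)
Z = project(X, U, K=1)
print(np.mean((X - recover(Z, U, K=1)) ** 2))   # small reconstruction error
```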