This repository presents the basic concepts of Machine Learning and Deep Learning.
Read more in the post ML & DL — Machine Learning and Deep Learning 101.
- Create the conda environment

```shell
(base)$: conda env create -f environment.yml
```

Mac OS users can use the `environment_ios.yml` file to configure the environment on macOS.

- Activate the environment

```shell
(base)$: conda activate deep_learning_101
```

- Run!

```shell
(deep_learning_101)$: python -m jupyter notebook
```
Each model includes a brief theoretical introduction and a practical implementation in Python with Keras/TensorFlow, developed in Jupyter Notebooks.
Load the data (training and test sets):

```python
import tensorflow as tf

# load_data() returns two tuples: (X_train, y_train) and (X_test, y_test)
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
```
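The raw images come as 28×28 arrays of integer pixel values. As a minimal preprocessing sketch (not part of the original snippet), the pixels can be scaled to [0, 1] and the images flattened for the dense models (LogReg, ANN, DNN):

```python
# Scale pixel values from [0, 255] to [0.0, 1.0]
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0

# Flatten each 28x28 image into a 784-dimensional vector for the dense models
X_train = X_train.reshape(-1, 784)
X_test = X_test.reshape(-1, 784)
```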
Keras provides two ways to define models: the Sequential model and the Functional API.

The Sequential model is used to stack layers:

- `model.add()` adds a layer to the model.
- `input_shape=()` specifies the shape of the input (declared on the first layer only).
```python
model = tf.keras.models.Sequential()
model.add(layer1(..., input_shape=(nFeatures,)))  # the first layer declares the input shape
model.add(layer2(...))
```
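A minimal concrete version of this pattern for MNIST (the layer sizes are illustrative, matching the ANN row in the results table below, and are not the only possible choice):

```python
import tensorflow as tf

n_features = 784  # a flattened 28x28 MNIST image

model = tf.keras.models.Sequential()
# Hidden layer: 32 sigmoid units; the first layer declares the input shape
model.add(tf.keras.layers.Dense(32, activation='sigmoid', input_shape=(n_features,)))
# Output layer: one unit per digit class; softmax turns scores into probabilities
model.add(tf.keras.layers.Dense(10, activation='softmax'))
```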
Configure the learning process by specifying:

- `optimizer`: determines how the weights are updated.
- `loss`: the cost function to minimize.
- `metrics`: evaluated during training and testing.
```python
model.compile(optimizer='SGD', loss='mse', metrics=['accuracy'])
```
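The optimizer can also be passed as an object instead of a string, which exposes hyperparameters such as the learning rate. A sketch (the learning rate is illustrative, and `sparse_categorical_crossentropy` is assumed here as a loss suited to integer MNIST labels):

```python
import tensorflow as tf

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),  # explicit optimizer object
    loss='sparse_categorical_crossentropy',                 # for integer class labels
    metrics=['accuracy'],
)
```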
Start the training process:

- `batch_size`: number of samples per gradient update (the data set is split into batches).
- `epochs`: number of complete passes over the training data.
```python
model.fit(X_train, y_train, batch_size=500, epochs=1)
```
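`fit()` also returns a `History` object that records the loss and metrics per epoch. A sketch continuing the example above (the `epochs` and `validation_split` values are illustrative):

```python
# Hold out 10% of the training data for validation and train for 5 epochs
history = model.fit(X_train, y_train, batch_size=500, epochs=5, validation_split=0.1)

# Per-epoch training and validation loss recorded by Keras
print(history.history['loss'])
print(history.history['val_loss'])
```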
Evaluate the performance of the model:

- `model.evaluate()`: computes the specified loss and metrics, giving a quantitative measure of accuracy.
- `model.predict()`: computes the outputs for the given test data, useful for checking the results qualitatively.
```python
loss, accuracy = model.evaluate(X_test, y_test)
y_pred = model.predict(X_test)
```
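Since `predict()` returns one probability per class for each sample, the predicted digit is the index of the largest probability. A minimal sketch:

```python
import numpy as np

# Convert class probabilities into predicted digit labels
y_pred_labels = np.argmax(y_pred, axis=1)

# Spot-check a few predictions against the ground truth
print(y_pred_labels[:10])
print(y_test[:10])
```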
Model | Architecture | Activation | Parameters | Accuracy |
---|---|---|---|---|
LogReg | -- | -- | 7850 | 0.9282 |
ANN | [32] | [sigmoid] | 25450 | 0.9636 |
DNN | [128, 64] | [relu, relu] | 109386 | 0.9801 |
CNN | [32, 64, 128] | [relu, relu, relu] | 25450 | 0.9898 |
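The parameter counts can be verified directly in Keras. For example, a dense model with one 32-unit hidden layer reproduces the 25,450 parameters of the ANN row (784·32 + 32 weights and biases in the hidden layer, plus 32·10 + 10 in the output layer):

```python
import tensorflow as tf

# Rebuild the ANN architecture from the table and count its parameters
ann = tf.keras.models.Sequential([
    tf.keras.layers.Dense(32, activation='sigmoid', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
print(ann.count_params())  # 25450
```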
Model | Target | Hypothesis | Cost |
---|---|---|---|
LinReg | Continuous | $\hat{y} = Wx + b$ | MSE |
LogReg | Categorical | $\hat{y} = \sigma(Wx + b)$ | Cross-Entropy |
ANN | Continuous, Categorical | | MSE, Cross-Entropy |
DNN | Continuous, Categorical | | MSE, Cross-Entropy |
CNN | Continuous, Categorical | | MSE, Cross-Entropy |
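For reference, the two cost functions named above have their standard forms, for $m$ samples with predictions $\hat{y}^{(i)}$ and targets $y^{(i)}$:

```math
\text{MSE} = \frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}^{(i)} - y^{(i)} \right)^2
\qquad
\text{Cross-Entropy} = -\frac{1}{m} \sum_{i=1}^{m} \sum_{k} y_k^{(i)} \log \hat{y}_k^{(i)}
```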
Theoretical introduction (https://mafda.medium.com):
- ML & DL — Linear Regression (Part 2)
- ML & DL — Logistic Regression (Part 3)
- ML & DL — Artificial Neural Networks (Part 4)
- ML & DL — Deep Neural Networks (Part 5)
- ML & DL — Convolutional Neural Networks (Part 6)
- Linear Regression
- Logistic Regression
- Artificial Neural Networks
- Deep Neural Networks
- Convolutional Neural Networks
- Complete Post Medium
- Book
made with 💙 by mafda