This is not intended as a TensorFlow or Deep Learning class
It's just a brief introduction, to show what Python makes possible
You can install TensorFlow via pip
https://www.tensorflow.org/install/pip?lang=python3
(remember to install tensorflow-gpu
instead of tensorflow
if you have an NVIDIA GPU with CUDA and cuDNN installed)
or by compiling it from source
We added tensorflow to the environment.yml file,
so you already installed it with Anaconda
In 2017, Google's TensorFlow team decided to support Keras in TensorFlow's core library
Keras is included in TensorFlow 2.0
We will start from a basic TensorFlow example
It builds a fully connected neural network
With one hidden layer
To classify clothes from Fashion-MNIST!
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(units=128, activation='relu'),
    keras.layers.Dense(units=10, activation='softmax')
])
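Before training, it is worth checking how many parameters this architecture has. The count below is our own back-of-the-envelope calculation (Flatten has no parameters; each Dense layer has inputs × units weights plus units biases), and it matches the total that `model.summary()` reports for this network:

```python
# Flatten: 0 parameters, it only reshapes the 28x28 image into a 784-vector
hidden_params = 28 * 28 * 128 + 128   # weights + biases of the hidden Dense layer
output_params = 128 * 10 + 10         # weights + biases of the output Dense layer
total = hidden_params + output_params
print(total)  # 101770
```

Knowing the count helps you sanity-check the model definition before spending time on training.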
z = np.arange(-2, 2, .1)       # input values
zero = np.zeros(len(z))
y = np.max([zero, z], axis=0)  # ReLU: element-wise max(0, z)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(z, y)
ax.set_ylim([-2.0, 2.0])
ax.set_xlim([-2.0, 2.0])
ax.grid(True)
ax.set_xlabel('z')
ax.set_title('Rectified linear unit')
Softmax takes as input a vector of n real numbers, and normalizes it into a probability distribution consisting of n probabilities proportional to the exponentials of the input numbers
Softmax is often used in neural networks, to map the non-normalized output of a network to a probability distribution over predicted output classes.
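As an illustration (our own numbers, not from the original notebook), softmax can be sketched in plain NumPy:

```python
import numpy as np

def softmax(x):
    # Subtract the max before exponentiating for numerical stability;
    # the shift cancels out in the ratio, so the result is unchanged
    e = np.exp(x - np.max(x))
    return e / e.sum()

scores = np.array([1.0, 2.0, 3.0])   # prior to applying softmax
probs = softmax(scores)              # after softmax
print(probs)        # proportional to exp(1), exp(2), exp(3)
print(probs.sum())  # 1.0
```

Note how the largest score gets the largest probability, and the outputs always sum to 1, which is exactly why softmax is used on the last layer of a classifier.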
Before the model is ready for training, it needs a few more settings. These are added during the model's compile step:
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
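A minimal NumPy sketch (our own illustration, not the Keras implementation) of what sparse categorical cross-entropy computes: the negative log of the probability the model assigned to the true class. "Sparse" means the labels are integer class indices rather than one-hot vectors:

```python
import numpy as np

def sparse_categorical_crossentropy(y_true, y_pred):
    # y_true: integer class indices, shape (batch,)
    # y_pred: predicted probabilities (softmax output), shape (batch, classes)
    rows = np.arange(len(y_true))
    return -np.log(y_pred[rows, y_true])

y_true = np.array([2, 0])
y_pred = np.array([[0.1, 0.1, 0.8],
                   [0.7, 0.2, 0.1]])
print(sparse_categorical_crossentropy(y_true, y_pred))
# small loss where the true class received high probability
```

Keras additionally averages these per-example losses over the batch; the intuition is the same.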
The Adam optimization algorithm is an extension to stochastic gradient descent that has recently seen broader adoption for deep learning applications in computer vision and natural language processing.
The method computes individual adaptive learning rates for different parameters from estimates of the first and second moments of the gradients.
In the original paper, Kingma and Ba demonstrate on large models and datasets that Adam can efficiently solve practical deep learning problems.
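The update rule can be sketched in a few lines of NumPy. This is our own sketch of a single Adam step, following the update rule from the Kingma and Ba paper, not TensorFlow's implementation:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad       # first moment: running mean of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2  # second moment: running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)             # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    # per-parameter adaptive step: a large second moment shrinks the effective step
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

param, m, v = np.zeros(2), np.zeros(2), np.zeros(2)
param, m, v = adam_step(param, np.array([1.0, -1.0]), m, v, t=1)
print(param)  # each parameter moves opposite to its own gradient
```

Because each parameter is scaled by its own moment estimates, parameters with noisy or large gradients take smaller effective steps, which is the "adaptive" part of the name.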
Training the neural network model requires the following steps:
1. Feed the training data to the model: here, the train_images and train_labels arrays.
2. The model learns to associate images and labels.
3. Ask the model to make predictions about a test set: here, the test_images array.
4. Verify that the predictions match the labels from the test_labels array.
To start training, call the model.fit method, so called because it "fits" the model to the training data:
epochs = 10

history = model.fit(train_images,
                    train_labels,
                    epochs=epochs,
                    validation_data=(validation_images, validation_labels))